10,000 Matching Annotations
  1. May 2023
  2. betasite.razorpay.com
    1. My (Entirely-Unsolicited) Thoughts on the most Casual Crue™'s 10th:

      Ya know, my memory isn't so great anymore, but I remember my then best friend and most respected music authority resting pretty confidently in an argument about this sound, and those who perpetuated it: that it was by nature/declaration substanceless, and therefore, those invested in it were either just superficial or.. idek anymore. Ingenuine, maybe?

      I don't think I was even placating him when I mostly went along with it - it did seem important to invest in more abrasive (read: edgy bs) pursuits. I thought I was resisting "nostalgia" and even spent most of my twenties trying to start an online culture/electronic music magazine in direct editorial opposition to regurgitation. Imo, though, any actual exposure to self-described "vaporwave" makes it very plain how utterly useless it is to cry nostalgia because - crucially - whatever form of it that may or may not be an established marker of this voice is completely devoid of the illness that has pervaded, destructively, all manner of expression and exploitation in the past 10 years.

      It is just not a constraining or negative force, here. I would propose, even, that it's been made, here, into the most powerful form of critique there possibly could be.

      And the grief!!! and the mania!! In any sort of ... worldly-participatory context, it should not be some gargantuan leap for even the most cynical, repressed, bitter 50-something white music writer army to make the connections, here. Our tears are fucking digital, idiot.

      ...ANYway, sorry. I don't actually have any business talking about music, but I can express my big, soppy as hell appreciation for the nigh-inconceivable amount of quantitative life force on which this milestone is a handy opportunity to reflect.

    1. Reviewer #1 (Public Review):

      The manuscript by Wagstyl et al. describes an extensive analysis of gene expression in the human cerebral cortex and the association with a large variety of maps capturing many of its microscopic and macroscopic properties. The core methodological contribution is the computation of continuous maps of gene expression for >20k genes, which are being shared with the community. The manuscript is a demonstration of several ways in which these maps can be used to relate gene expression with histological features of the human cortex, cytoarchitecture, folding, function, development and disease risk. The main scientific contribution is to provide data and tools to help substantiate the idea of the genetic regulation of multi-scale aspects of the organisation of the human brain. The manuscript is dense, but clearly written and beautifully illustrated.

      # Main comments

      The starting point for the manuscript is the construction of continuous maps of gene expression for most human genes. These maps are based on the microarray data from 6 left human brain hemispheres made available by the Allen Brain Institute. By technological necessity, the microarray data is very sparse: only 1304 samples to map the whole cortex after all subjects were combined (a single individual's hemisphere has ~400 samples). Sampling is also inhomogeneous due to the coronal slicing of the tissue. To obtain continuous maps on a mesh, the authors filled the gaps using nearest-neighbour interpolation followed by strong smoothing. This may have two potentially important consequences that the authors may want to discuss further: (a) the intrinsic geometry of the mesh used for smoothing will introduce structure into the expression map, and (b) strong smoothing will produce substantial, spatially heterogeneous autocorrelations in the signal, which are known to lead to a significant increase in the false positive rate (FPR) in the spin tests they used.
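      To make concern (b) concrete, here is a toy sketch (a 1-D ring of vertices stands in for the cortical mesh; all values are synthetic, not the Allen data) of nearest-neighbour interpolation followed by iterative neighbour averaging, showing how a ~4% sampling rate becomes a smooth, strongly autocorrelated map:

```python
import numpy as np

# Toy model: 500 mesh vertices on a ring, of which only 20 carry samples.
rng = np.random.default_rng(0)
n_vertices = 500
sample_idx = np.sort(rng.choice(n_vertices, size=20, replace=False))
sample_val = rng.normal(size=sample_idx.size)  # hypothetical "expression"

# Nearest-neighbour fill: each vertex copies the value of its closest sample
# (circular distance mimics geodesic distance on a closed surface).
dist = np.abs(np.arange(n_vertices)[:, None] - sample_idx[None, :])
dist = np.minimum(dist, n_vertices - dist)
dense = sample_val[np.argmin(dist, axis=1)]

# Iterative neighbour averaging stands in for surface smoothing.
smooth = dense.copy()
for _ in range(100):
    smooth = (np.roll(smooth, -1) + smooth + np.roll(smooth, 1)) / 3.0

def lag1_autocorr(x):
    """Lag-1 autocorrelation on the ring."""
    z = x - x.mean()
    return float((z * np.roll(z, 1)).sum() / (z * z).sum())

print(lag1_autocorr(dense), lag1_autocorr(smooth))
```

      The same mechanism on a real mesh is what couples the smoothed map to the mesh geometry, since the averaging neighbourhoods are defined by the mesh itself.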

      ## a. Structured smoothing

      A brain surface has intrinsic curvature (Gaussian curvature, which cannot be flattened away without tearing). The size of the neighbourhood around each surface vertex is determined by this curvature. During surface smoothing, the weight of each vertex is therefore also modulated by the local curvature, i.e., by large geometric structures such as poles, fissures and folds. The article by Ciantar et al (2022, https://doi.org/10.1007/s00429-022-02536-4) provides a clear illustration of this effect: even the mapping of a volume of *pure noise* onto a brain mesh will produce a pattern over the surface strikingly similar to that obtained by mapping resting-state functional data or functional data related to a motor task.

      1. It may be important to make readers aware of this possible limitation, which is in large part a consequence of the sparsity of the microarray sampling and the necessity of mapping it to a mesh. This may confound the assessments of reproducibility (Results, p4). Reproducibility was assessed by comparing pairs of subgroups split from the total of 6. But if the mesh is introducing structure into the data, and if the same mesh was used for both groups, then what is being reproduced could be a combination of signal from the expression data and signal induced by the mesh structure.
      2. It is also possible that mesh-induced structure is responsible in part for the "signal boost" observed when comparing raw expression data and interpolated data (Fig S1a). How do you otherwise explain the signal boost of the smooth data compared with the raw data?
      3. How do you explain that, despite the difference in absolute value, the combined expression maps of genes with and without cortical expression look similar? (Fig S1e: in both cases there are high values in the dorsal part of the central sulcus, the occipital pole and the temporal pole, and low values in the precuneus and close to the angular gyrus.) Could this also reflect mesh-smoothing-induced structure?
      4. Could you provide more information about the way in which the nearest neighbours were identified (Results, p4)? Were they nearest in Euclidean space? Geodesic? If geodesic, geodesic over the native brain surface or over the spherically deformed brain? (The Methods cite Moresi & Mather's Stripy toolbox, which appears to be meant for use on spheres.) If the distance was geodesic over the sphere, could the distortions introduced by the mapping (due to brain anatomy) influence the geometry of the expression maps?
      5. Could you provide more information about the smoothing algorithm? Volumetric, geodesic over the native mesh, geodesic over the sphere, averaging of values in neighbouring vertices, cotangent-weighted Laplacian smoothing, something else?
      6. Could you provide more information about the method used for computing the gradient of the expression maps (p6)? The gradient and the Laplacian operator are related (the Laplacian is the divergence of the gradient), which could also be responsible in part for the relationships observed between expression transitions and brain geometry.

      ## b. Potentially inflated FPR for spin tests on autocorrelated data

      Spin tests are extensively used in this work and it would be useful to make readers aware of their limitations, which may confound some of the results presented. Spin tests aim to establish whether two brain maps are similar by comparing a measure of their similarity over a spherical deformation of the brains against a distribution of similarities obtained by randomly spinning one of the spheres. It is not clear which specific variety of spin test was used, but the original spin test has well-known limitations, such as the violation of the assumption of spatial stationarity of the covariance structure (not all positions of the spinning sphere are equivalent: some are contracted, some are expanded), or the treatment of the medial wall (a big hole with no data is introduced when the hemispheres are isolated).
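      For reference, the logic of the original test can be sketched in a few lines (all coordinates and maps below are synthetic stand-ins, not the manuscript's data): correlate the two maps, then build a null by applying uniform random rotations to the vertex sphere and re-assigning each rotated vertex the value of its nearest original vertex:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

def random_rotation(rng):
    """Uniform (Haar) random rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))           # sign fix for a uniform draw
    if np.linalg.det(q) < 0:
        q[:, [0, 1]] = q[:, [1, 0]]    # force a proper rotation (det = +1)
    return q

# Synthetic "vertices" on the unit sphere and two maps sharing a gradient.
n = 1000
xyz = rng.normal(size=(n, 3))
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)
map_a = xyz[:, 0] + 0.5 * rng.normal(size=n)
map_b = xyz[:, 0] + 0.5 * rng.normal(size=n)

tree = cKDTree(xyz)
observed = np.corrcoef(map_a, map_b)[0, 1]

# Null distribution: spin one sphere, re-assign values by nearest vertex.
null = []
for _ in range(200):
    _, idx = tree.query(xyz @ random_rotation(rng).T)
    null.append(np.corrcoef(map_a[idx], map_b)[0, 1])

p_spin = (1 + np.sum(np.abs(null) >= abs(observed))) / (1 + len(null))
print(observed, p_spin)
```

      The nearest-vertex re-assignment step is where the stationarity and medial-wall problems enter: contracted or empty regions of the sphere are not exchangeable with the rest.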

      Another important limitation results from the comparison of maps showing autocorrelation. This problem has been extensively described by Markello & Misic (2021). The strong smoothing used to make a continuous map out of just ~1300 samples introduces large, geometry-dependent autocorrelations. Indeed, the expression maps presented in the manuscript look similar to those with the highest degree of autocorrelation studied by Markello & Misic (alpha=3). In this case, naive permutations should lead to a false positive rate of ~46% when comparing pairs of random maps, and even the most sophisticated methods have an FPR > 10%.

      7. Several researchers are currently working on testing spatial similarity, and readers would benefit from being made aware of the problems with the spin test and potential solutions. There are also packages providing alternative implementations of spin tests, such as BrainSMASH and BrainSpace (Weinstein et al 2020, https://doi.org/10.1101/2020.09.10.285049), which could be mentioned.
      8. Would it be possible to measure the degree of spatial autocorrelation?
      9. Could you clarify which version of the spin test was used? Does the implementation come from a package or was it coded from scratch?
      10. The cortex and non-cortex vertex-level gene rank predictability maps (Fig S1e) are strikingly similar. Would the spin test come up statistically significant? What would be the meaning of that, if the cortical map of genes not expressed in the cortex appeared to be statistically significantly similar to that of genes expressed in the cortex?
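      Regarding point 8, the degree of spatial autocorrelation can be quantified with standard statistics; one hedged illustration (again on a synthetic 1-D ring rather than the real maps) is Moran's I with binary adjacency weights, which clearly separates a smoothed map from its spatially shuffled counterpart:

```python
import numpy as np

rng = np.random.default_rng(2)

def morans_i(values, neighbors):
    """Moran's I with binary weights; neighbors[i] lists the neighbours of i."""
    z = values - values.mean()
    num = sum(z[i] * z[j] for i in range(len(z)) for j in neighbors[i])
    w_total = sum(len(nb) for nb in neighbors)
    return float(len(z) * num / (w_total * (z * z).sum()))

# A ring of 400 "vertices", each adjacent to its two neighbours.
n = 400
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]

# Heavily smoothed noise vs. the same values spatially shuffled.
smooth_map = np.convolve(rng.normal(size=n), np.ones(25) / 25, mode="same")
shuffled = rng.permutation(smooth_map)

print(morans_i(smooth_map, neighbors), morans_i(shuffled, neighbors))
```

      On a real mesh the same statistic works with the vertex adjacency list; values near 1 would confirm the strong autocorrelation suspected here, while shuffled data sit near 0.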

    1. I do agree with what Sherman had to say about how the relationships on the show The Outs challenged the popular image of same-sex relationships. I feel like in most of Hollywood, whenever they cast someone to be queer or in any non-hetero relationship, it's always so glamorized. Real relationships can be ugly, beautiful, and sometimes messy, and I thought this show did kind of capture that. Sometimes it's just about sex, especially when your dating pool is very shallow; you can wind up in very bad or good places really quickly. I thought Sherman summed up this show's relationship to another show that mirrored it quite well: "It's a classic Sex and the City setup: a table, an issue, a difference of opinion. In Sex and the City, the varied experience of sexually liberated." (Sherman 259) This was true for the diner scene; the show kind of gives that vibe, but it just might be the setting of New York City. It's just a different way of life there. I also thought this show was taking a jab at the current film industry. They show that they can take the normal plot twist that usually happens in movies or primetime shows, but they don't; they take the high road to show that life isn't always that way. I think that this show challenges the assumptions about what queer people are actually like compared to how society sees them. This show portrays and casts these people as they are: people who are just trying to figure out life like the rest of us, and who we find love in comes from all different places and people. If I take this show in its time, it was produced in 2012. Gay marriage had, I think, just recently been made legal around the time of this show. So I think that says a lot. Compared to today, where I think the queer community has taken leaps and bounds since then, it's a lot more naturalized, at least here in California. Maybe not so much in other states. But this show showed that no matter your sexual preference, we're just people having people experiences.

      Ng, E. (2013). A “Post-Gay” Era? Media Gaystreaming, Homonormativity, and the Politics of LGBT Integration. Communication, Culture & Critique, 6(2), 258–283. https://doi.org/10.1111/cccr.12013

    1. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      The manuscript "Single molecule visualization of native centromeric nucleosome formation reveals coordinated deposition by kinetochore proteins and centromere DNA sequence" by Popchock and colleagues describes a new high-throughput single-molecule technique that combines both in vitro and in vivo sample sources. Budding yeast centromeres are genetically defined, which makes them ideal for studying short DNA segments at the single-molecule level. By flowing in whole cell lysates, Cse4 nucleosomes can form under near-physiological conditions. Two analytical experiments were performed: endpoint and time-lapse. In the former case, nucleosomes were allowed to form within 90 minutes, and in the latter case, nucleosome formation was tracked for up to 45 minutes. In addition, well-described genetic mutants were used to assess the stability of Cse4 nucleosomes, as well as different DNA sequences (this we particularly liked; well done). Overall, this is a very interesting technique with the potential to be useful for studying any DNA-based effect, ranging from DNA repair to kinetochore assembly. This is strong and impactful work, and this kind of microscopy has great potential for solving kinetic problems in the field. We think it is worthy of publication after addressing technical and experimental concerns that would elevate the ms significantly.

      Major comments:

      Statistics are highly recommended for all the data in the ms.

      • At what rate is data collected in the TIRFM setup? For clarity for the reader, it is important to provide imaging details for the time-lapse experiments. What is the impact of photobleaching on the stability of the fluorophore signal? Please clarify.
      • The power of single-molecule techniques is precisely that such data can be made quantitative. Indeed, the Kaplan-Meier graphs do show nice quantitative results. Unfortunately, few quantitative measurements are reported in the text. In fact, the Kaplan-Meier graphs can be interpreted in a quantitative manner, such as the probability of residency time. Although in most cases the statistical significance between two conditions can be expected, this is not formally calculated and shown. What is the 50% survival time of Cse4 alone or with Ndc10, for instance? This manuscript would greatly benefit from a quantitative approach to the data, including a summary table of the results of the various conditions tested. Please add this important table.
      • This reviewer would expect that the endpoint (90 min) results would roughly reflect the occupancy results from the time-lapse (45 min) experiments. Based on the data presented in Figures 1, 2, S1-3, this does not appear to be the case. 50% of Cse4-GFP and 70% of Ndc10-mCherry colocalized with CEN3 DNA in the endpoint experiment, whereas in Fig 2C, ~30 and ~52 traces were positive for Cse4-GFP and Ndc10-mCherry, resp., with the former having drastically lower residency survival compared to Ndc10-mCherry. If indeed 50% of Cse4-GFP makes it to the endpoint, about 50% of all traces should reach the end of the 45-minute time-lapse window. Only about 1/3 of all positive Cse4-GFP traces can be seen at the end of the 45-min window. Could this be due to technical issues of photostability of GFP? Why does the colocalization of Cse4 signal with the DNA signal disappear so readily? Is Cse4 so unstable? Is the same true for canonical H3 nucleosomes? This is unlikely to be true for nucleosomes in cells. Along the same lines, in Suppl Fig 3 there is a disconnect between residency survival and endpoint colocalization on either CEN3, CEN7, or CEN9. What could be the underlying mechanism behind the discordance between endpoint results and time-lapse results? Could this be the result of experimental differences?
      • What fraction of particles shows colocalization between Cse4-GFP and Ndc10-mCherry? What fraction of occupancy time shows colocalization between Cse4-GFP and Ndc10-mCherry? Altogether, understanding the limitations and benefits of endpoint analysis and time-lapse analysis in this particular experimental set-up is important to be able to interpret the results. Please clarify these points.
      • Page 9, third sentence of third paragraph, it is stated that the "results suggests that Scm3 helps promote more stable binding of Cse4 ...". This is indeed one possible explanation of the results, and this possibility is tested by overexpressing Psh1 or Scm3 in endpoint colocalization analysis. 1) Taking the concerns regarding the endpoint vs time-lapse results into account, wouldn't it be more accurate to compare either time-lapse results against each other or endpoint results? 2) Alternatively, more stable Cse4 particles are able to recruit Scm3, simply because of the increased binding opportunity of a more stable particle. In this scenario, the residency time of Cse4 alone is the factor predicting the likelihood of associating with Scm3. To test the latter possibility, Cse4 stability would need to be altered. Please consider this experiment; it should be relatively easy with the right mutant of either CSE4 or CDEII (see Luger or Wu papers).
      • In Figure 1C and Supplemental Figure 5B, there appear to be foci that are CEN3-ATTO-647 positive but Cse4-GFP negative, and vice versa. It seems logical that there are DNA molecules that didn't reconstitute Cse4 nucleosomes. But how can there be Cse4-GFP positive foci without a positive DNA signal? Is it possible that unlabeled DNA makes it onto the flow chamber? If so, can this unlabeled DNA be visualized, with Sytox Orange for instance, to confirm that no spurious Cse4 deposition occurred? Please clarify.
      • On page 10, at the end of the first paragraph, the growth phenotypes of pGAL-SCM3 and pGAL-PSH1 mutants were tested. On GAL plates, pGAL-PSH1 shows reduced growth, but pGAL-SCM3 does not. Wouldn't a more accurate conclusion be that Psh1 is limiting stable centromeric nucleosome formation, instead of Scm3?
      • In the section where DNA was tethered at either one or both ends, an important control is missing. How does such a set-up impact nucleosome formation in general? Can H3 nucleosomes form on random DNA that is tethered at either one or both ends? Does this potentially affect the unwrapping potential/topology of AT-tract DNA? Please comment.
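      As an illustration of the kind of quantitative summary requested above (the 50% survival time in particular), a Kaplan-Meier estimate can be computed directly from dwell times with end-of-window censoring; all numbers below are synthetic, not taken from the manuscript:

```python
import numpy as np

rng = np.random.default_rng(3)

def km_curve(times, censored):
    """Kaplan-Meier estimate; returns (time_grid, survival, 50% survival time)."""
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    censored = np.asarray(censored, dtype=bool)[order]
    at_risk, surv = len(times), 1.0
    grid, curve = [0.0], [1.0]
    for t, c in zip(times, censored):
        if not c:                          # event: particle released
            surv *= (at_risk - 1) / at_risk
            grid.append(t)
            curve.append(surv)
        at_risk -= 1                       # censored traces leave the risk set
    t50 = next((t for t, s in zip(grid, curve) if s <= 0.5), None)
    return np.array(grid), np.array(curve), t50

# Hypothetical dwell times (minutes); traces still bound at 45 min are censored.
dwell = rng.exponential(scale=12.0, size=200)
censored = dwell > 45.0
observed_times = np.minimum(dwell, 45.0)

grid, curve, t50 = km_curve(observed_times, censored)
print(f"50% survival time ~ {t50:.1f} min")
```

      A table of such 50% survival times per condition, with confidence intervals and a log-rank test between conditions, would summarize the Kaplan-Meier panels compactly.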

      Minor comments

      • Censored data points are not explained in the text.
      • The number of particles tested should be reported in the main and supplemental figures, not just in the legends, for those readers who first skim the manuscript before deciding to read it.
      • Typo on page 5, first line: "nucleosom" should be "nucleosome".
      • Typo on page 9, second line: sentence is missing something "... is required for Scm3-dependent ..."
      • It is unclear how the difference in Supplemental figure 5D was calculated.
      • Figure 4C: why are there more Ndc10-mCherry foci observed in double tethered constructs vs single tethered constructs?
      • For the display of individual traces as shown in Fig 2B, 3A, 4E, and 5E, it might be more informative if the z-normalized signal and the binary read-out were shown in separate windows, to better appreciate how the z-normalized signal was interpreted.
      • Page 17, fifth line of the second paragraph, it is stated that a conserved feature of centromeres is their AT-richness. This is most certainly true for the majority of species studied thus far, but bovine centromeres, for instance, are about 54% GC. Indeed, Melters et al 2013 Genome Biol showed that in certain clades centromeres can be composed of GC-rich sequences. It might be worthwhile to nuance this statement.
      • Page 17, last paragraph. Work by Karolin Luger and Carl Wu is cited in relation to AT-rich DNA being unfavorable for canonical nucleosome deposition. A citation is missing here: Stormberg & Lyubchenko 2022 IJMS 23(19): 11385. Also, the first to show that AT-tracts affect nucleosome positioning were Andrew Travers and Drew. This landmark work should be cited.
      • Page 18, 9th line from the top, it is stated that yeast centromeres are sensitive to negative genetic drift. This reviewer is of the understanding that genetic drift is a statistical fluctuation of allele frequency, which can result in either gain or loss of specific alleles. Population size is a major factor in the potential power of genetic drift: the smaller a population, the greater the effect. Budding yeast is found in large numbers, which would mean that drift has only a limited predicted impact. Maybe the authors meant to use the term purifying selection?

      Significance

      This study developed an in vitro imaging technique that allows native proteins from whole cell lysates to associate with a specific DNA sequence that is fixed to a surface. By labeling proteins with specific fluorophore tags, colocalization provides insightful proximity data. By creating mutants, the assembly or disassembly of protein complexes on native or mutated DNAs can therefore be tracked in real time. In a way, this is a huge leap forward from the gel-shift (EMSA) assays that have traditionally been used to do comparative kinetics in biochemistry.

      This makes this technique ideal for studying DNA-binding complexes and, potentially, even RNA-binding complexes. This study shows both the importance of using genetic mutants and of testing the effects of the fixed DNA sequence. As many individual fixed DNA molecules can be tracked at once, it allows for high-throughput analysis, similar to the powerful DNA curtain work from Eric Greene's lab. Overall, this is a promising new single-molecule technique that combines in vitro and ex vivo sample sources, and it will likely appeal to a broad range of molecular and biophysics scientists.

    1. they

      I think when a bunch of people get together to figure something out, like finding those folks in the proposal, sending them good vibes, and sharing pictures, crowdsourcing can be pretty cool. But, you know, there are those internet trolls who just love to mess things up and invade people's privacy. Personally, I'm not totally sold on crowdsourcing because it's like a 50-50 chance, and that makes me feel a little nervous.

    1. Wikipedia: Is an online encyclopedia whose content is crowdsourced. Anyone can contribute, just go to an unlocked Wikipedia page and press the edit button. Institutions don’t get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute.

      Digital platforms for crowdsourcing, such as Wikipedia, are astounding innovations of our time. By tapping into the diverse knowledge and abilities of people worldwide, these platforms revolutionize how information is both created and disseminated. But it's not all smooth sailing, with hurdles like ensuring accuracy and addressing potential bias coming to the fore. The case where U.S. congressional staff were found editing Wikipedia pages exemplifies the tightrope walk between openness and the risk of misuse. This intriguing episode serves as a reminder of the ongoing transformation in the way we share information, and the vital role of ethical standards in the realm of digital information.

    2. 16.2.1. Crowdsourcing Platforms# Some online platforms are specifically created for crowdsourcing. For example: Wikipedia: Is an online encyclopedia whose content is crowdsourced. Anyone can contribute, just go to an unlocked Wikipedia page and press the edit button. Institutions don’t get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute.

      Wikipedia is probably the most notable mention when it comes to crowdsourcing platforms, especially given its image as a database. However, Wikipedia also shows off the negative possibilities of crowdsourcing on a database site, namely misinformation and how easily it can spread.

    1. why it is hard for him to be happy in that civilization.

      I think it's hard for men to be happy in civilization today because there's so much pressure put on men, and men kind of grow up in a different way than women. They're kind of told not to be emotional, to always be strong, kind of like to be robots, and people don't teach men how to be human and just be themselves, and that it's OK to have different emotions. Everybody has emotions.

    1. In teacher education, critiques of technology are rarely discussed or illuminated when preparing teachers to teach with technology (Heath & Segal, 2021; Krutka et al., 2019). Failing to question educational technology and the practices surrounding its integration perpetuates an assumption that technology is a neutral tool, often harming the most vulnerable in our schools

      I think that this is a really good point. Technology has become an assumed skill that everyone is expected to know. It's supposed to be "easy" for people to pick up. In my opinion, applications are becoming increasingly complicated, with new updates that constantly change the layout and Terms of Use of the app. With the number of applications that are used in school, it seems impossible to stay up to date on these features. This can be very harmful and cause teachers and students to fall behind and struggle with completing assignments.

    1. Large Language Models are interfaces to knowledge graphs

      You might want to change the wording.

      In their raw form, LLMs are very much not KGs, just abstract vectors representing text content. In fact, many of their failure modes come from not having ground truth factual data, like what you find in a KG.

      But it's not hard to integrate an LLM with a KG, which makes both tools vastly more powerful, and that is obviously what's coming next.

    2. The Platform matches different kinds of users like advertisers to customers, law enforcement agencies to suspects, etc. in order to maximize the overall value of all Platforms

      It's more granular than this, and that's significant. It's not just pairing advertisers with customers; it's connecting particular ad verticals, formats, and inventories with customers with specific demographics, classifications, and interests.

      That means potentially sensitive information about end users gets shared (anonymously, in theory) with advertisers who get to leverage that knowledge to manipulate them. It also means advertisers compete for access, and iterate with Platforms on how to most efficiently monetize the end users.

      And, of course, the same is true of law enforcement, etc.

    3. it is their “graph plus compute” structure

      Glad to see you make explicit what you think is wrong here. This is an interesting framing. It's not what I would have said, but I think I like it.

      Why is this bad? Just thinking out loud: by bundling the computation, you remove direct access to the data. By making the computation a black box, you hide its bias and side-effects. You can also hide a vast pipeline with many inputs and outputs by presenting a much simpler subset API to the public, as if that were the whole thing. This lets you hide your profit model and the asymmetry in how much value the user / company gets out of the product.

    4. So if you are a data broker, and you just made a hostile acquisition of another data broker who has additional surveillance information to fill the profiles of the people in your existing dataset, you can just stitch those new properties on like a fifth arm on your nightmarish data Frankenstein

      Love this point. It might also be worth mentioning that it's totally possible (and common, I assume) to join seemingly safe and innocuous KGs with private data / surveillance KGs. This works really well, and some of the mundane data might end up being the missing link that binds together the more controversial stuff and makes it useful.

    5. The mutation from “Linked Open Data” [16] to “Knowledge Graphs” is a shift in meaning from a public and densely linked web of information from many sources to a proprietary information store used to power derivative platforms and services.

      This is true, but it's one sided and missing something very important.

      Having powerful, well funded groups make products out of structured data also had several large, positive effects. The schemas became more practical for addressing the needs of real people. The volume, quality, and consistency of structured data on the web went up. The accessibility and usage of structured data went way up. Many useful products got built that probably never would have emerged without the corporate efforts.

      You do a good job of highlighting many down sides to this, which are also true. But if you don't bring up the good sides, people will accuse you of cherry picking. If you tell both sides, they'll argue why the benefits outweigh the drawbacks.

      So, I think it's worth disentangling the threads of not just what they did but how they did it. Funding products and product-oriented data curation was positive. But they had ulterior motives, which is why they designed the systems to be closed, proprietary, and unaccountable. It didn't have to be like that. Could we do it differently, avoid most of the harms, and still reap most of the benefits?

      I like your vulgar data vision, but it doesn't address this key point. How do we get high quality useful tools without handing everything over to a rich corporation? We do need an effective model for organizing and funding the vulgar efforts so they can compete.

    6. On platforms, rather than a system that “belongs” to everyone, you are granted access to some specific set of operations through an interface so that you can be part of a social process of producing and curating information for the platform holder.

      This is an interesting perspective. Web 2.0 has two sides. To users, they were reaping the benefits of the platforms to get things done. To platforms, this activity served as a source of data, validation, and ranking.

      Usually you hear just perspective one. It's rare to see just perspective two, like you wrote here. To me, they're two sides of the same coin, and we should always emphasize both.

      That said, there could be an interesting discussion about whether it's a "fair coin" given the power / value asymmetry here.

    7. It imagined the use of triplet links and shared ontologies at a protocol level as a way of organizing the information on the web into a richly explorable space: rather than needing to rely on a search bar, one could traverse a structured graph of information [16, 17] to find what one needed without mediation by a third party.

      This strikes me as a little strange. I'm not familiar with the Semantic Web project specifically, tho, so it may be true.

      It's just that a decentralized KG with custom schemas is not so different from a web of documents tied together with links. You still need some sort of third-party aggregator to find the right content. The big difference is you could go to a KG hub (like Freebase) instead of a Search engine and work with structured data instead of raw text. But this was still a third party.

    1. But of course, the flipside to this is that we lose much of the sense of the “commons” that has characterised the Internet so far. You lose a lot of the serendipity that comes from logging on and suddenly talking to someone in another country, who maybe shares an interest in adventure games with you but is otherwise quite different. And once people are in smaller groups, then in-group norms can shift and become more accentuated from each other. If these are norms that seem kind of harmless then this is called a filter bubble and journalists wring their hands in The Atlantic about how it’s happening to them. And if these are norms that seem kind of racialised or scary then it’s called radicalisation, and journalists wring their hands in The Atlantic about how it’s happening to other people.

      Lol, lmao. It is truly right and just, our duty and our salvation, always and everywhere to clown on The Atlantic. (With only a slight side of term usage pet peeve.)

      But also: perhaps it is fair to say that the pre-internet didn't feel like such a global commons, but an archipelago of local commons. How many hits did GameFAQs get, long ago? It was small fora first, and only then "platforming", and now perhaps a return to older audience sizes?

      Even the global resources on the early internet look cliquey and nichey in retrospect because there were so few people using the web; just being an internet user used to carry a lot of signal, tell you a lot about a person, situate them within a club.

    1. As soon as Gregor was alone, he began to feel ill. Turning around was an effort. Even breathing was an effort. A thin stream of blood trickled from his flank down his fuzzy belly. He wanted to crawl away from it, but there was no place to go. He lay still on the spot where he had come to rest just in order to get his breath back and to stop the bleeding. “I’m in a bad way,” said Gregor. It had never occurred to him before that he could really become ill. He had seen sick animals—a dove once in a while, which had fallen out of the nest into the gutter and could not fly any more, or the weak infants of the woman next door who had to be picked up with the tongs and thrown into the dustbin, or the bugs his father used to bring to him when he was still a young boy and which he had liked so much

      This showcases a belief that many of us share: that we are not prepared for bad things to happen to us. We see others experience bad things and may feel sorrow about their situation, but we don't really know what it's like until it happens to us, and it is a truly shocking thing when it does.

    Annotators

    1. Extended numbering and why use Outline of Disciplines at all? Several things: Why are there different listings for the Academic Outline of Disciplines? Some start the top level with Humanities and others start with Arts, which changes the numbering. I am creating an Antinet for all things. Some of the levels of the AOOD have more than 9 items, so Scott's 4-digit system would not work; for some levels I would have to use two digits. Thoughts? Why even use said system? Why is it a bad idea to just start with #1 to indicate the first subject sequence, #2 for a different subject, etc.?

      reply to u/drogers8 at https://www.reddit.com/r/antinet/comments/13eyg8p/extended_numbering_and_why_use_outline_of/

      Based on my research, Scott Scheper was one of the original sources for people adopting the Academic Outline of Disciplines. I've heard him say before that he recommends it only as a potential starting place for people who are new to the space and need it as a crutch to get going. It's an odd suggestion, as almost all of the rest of his system is so Luhmann-based. I suspect it's a quirk of how he personally started, and once moving, it was easier than starting over. He also used his own ZK for showing others, and it's hard to say one thing in a teaching video while showing people something else.

      Ultimately it's hard to mess up on numbering choices unless you're insistent on using only whole numbers or natural numbers. I generally wouldn't suggest complex numbers either, but you might find some interesting things within your system if you did. More detail: https://boffosocko.com/2022/10/27/thoughts-on-zettelkasten-numbering-systems/

      The only reason to have any standardized base or standardized numbers would be if you were attempting to have a large shared ZK with others. If this is your intent, then perhaps look at the Universal Decimal Classification, though a variety of things might also work, including Dewey Decimal.

    1. They get points here and there. I think it incentivizes them more than if you had known points. It's one of those things: extrinsic motivation versus intrinsic motivation. It's a great debate to have. But maybe having a little bit, not 50% of their grade, but five, something really small, I think is enough to get them to pay attention: oh, this is part of the learning process, as opposed to thinking, okay, what is the grade? Oh, it's 50%? Okay, that's all I need to pay attention to, right? Start paying attention right before the midterm. Yeah, that's the downside. If you make exams the whole course grade, because people will advocate for that, people will say the grade should be based on demonstrating learning, not activities, right, there's a school of thought that way. Okay, so if you do that, then the midterms and the final are the whole grade, essentially, maybe homework. Then that's when they are going to pay attention to it, right? So they might pay attention when the homework is due, but they're largely going to be disengaged till midterm one shows up and go, oh, I have to study for midterm one, but by that time... It's not one of those classes; you can't do that in physics. You can't just do it three days before. I've been there thinking I could do that. Yeah, you can't do it. I should have been going to lecture. Yeah. So I think it's one of those things where it's a tricky balance. But to say, look, you have to show up, and if it helps, here's 5% to help you come.

      Why assign formative assessments/activities ahead of midterm, etc.

    2. Spend half an hour. Just focus for half an hour and do this, and you'll get it. I have talked to them about the fact that I'm not expecting them to read every word of the assigned reading. Glance through it. Get an idea, right? Just get an idea and ask a question. Start the conversation so that you're not showing up to class without any understanding of what this is about and expecting, in 50 minutes or something like that, to gather a lot of information. It's not very effective to do that. It's a process.

      More on "exposure before class"

    3. It's way too sophisticated for its own good, maybe, right? It's trying to be AI-ish, in the sense that it's trying to detect if a particular comment is worthy of two points or three points, and a lot of that system is based on that. So if a student makes a comment, it marks it as one instead of two, and you get a lot of emails about why it was marked that way. And that's not the point. I think the point of the social part is you're getting them to just have conversations. Encouraging conversations. Not necessarily judging if that comment was good or bad. It's just: get it done. And we expect that the fact that you're in the room having a conversation will help you realize, oh, this is useful. When I have a question, I can ask it here, and somebody else may have the same question, and we can have a discussion around it. And that social part, it's social constructivism, is helpful, right? So people realize that they learn from other people.

      Critique of Perusall as about right or wrong versus the social construction of knowledge.

    1. Reddit is valued at more than ten billion dollars, yet it is extremely dependent on mods who work for absolutely nothing. Should they be paid, and does this lead to power-tripping mods?

      I think mods should be paid at least a small amount. While it is true that most mods volunteer their time because they are passionate about the subject of the subreddit they moderate, just because you enjoy the work you do doesn't mean you shouldn't be paid for it. Also, it's not like Reddit needs to pay them a huge salary or anything; even just a small fee for moderators would be better than nothing.

    1. Moderation Tools

      That's the reason we are seeing a growing fascination with how AI is revolutionizing content moderation. It's not just making the process smoother. It's also ramping up the precision in weeding out content that does not fit the bill. It's always on duty, ensuring businesses can trust it to keep their digital platforms clean and secure at any hour of the day.

    1. Because it is a bureaucracy, and bureaucracies leak power. It’s like asking why a two-stroke engine burns oil—or at least why a diesel engine puffs out soot.

      I agree with most of what I've read so far, but hate these weird and needless intermissions. The way it's formatted (question -> "because it does" -> what seems to be a criticism of the question, although that's not inherently true) makes me feel pretty sus about why it's included, especially when its utility in the context of the article isn't immediately apparent. There's definitely a possibility that it's not intended as a psyop. Either way, I just want the facts, and I can take it from there about what to conclude.

    2. The cathedral can’t be repaired for two reasons. The first is that it can’t be repaired—just look at it.

      I really hate it when articles do this bologna. "It is because it is" - why include it? Statements of "it's obvious" have a goal just like every other statement.

    1. The problem here is that in real life only a minority of rapes follow the script of the Idealized Rape.

      In this generation, we have gotten used to the idea that SA can happen without fitting into the idealized rape genre. With that being said, like I mentioned before, it causes people to be more likely to defend the perpetrator. To those people, un-ideal rape can seem like a grey area, more acceptable to defend. For example, someone could say, "Well, Stacy got super drunk and she was with Greg. I heard Greg was being touchy and she didn't like it, but I mean, it's not like he raped her or anything; he was just being forward." When in reality, the factual story I have created as an example would be worded like this: "Stacy got drunk at a party with her friends and started flirting with Greg; she just wanted to flirt and chat. They were being a bit touchy, until Greg began touching her more forwardly. Stacy kept moving his hand away, telling him to keep things PG. Greg listened for about 5 minutes, then continued. Stacy said no, but he did not listen. She started to freak out and became speechless until a friend came and grabbed her, noticing her obvious discomfort." It is very easy for apologists to come up with excuses for this type of SA/SH.

    1. Reviewer #1 (Public Review):

      The authors introduce a computational model that simulates the dendrites of developing neurons in a 2D plane, subject to constraints inspired by known biological mechanisms such as diffusing trophic factors, trafficked resources, and an activity-dependent pruning rule. The resulting arbors are analyzed in terms of their structure, dynamics, and responses to certain manipulations. The authors conclude that 1) their model recapitulates a stereotyped timecourse of neuronal development (outgrowth, overshoot, and pruning) and 2) neurons achieve near-optimal wiring lengths. Such models can be useful to test proposed biological mechanisms, for example, to ask whether a given set of growth rules can explain a given observed phenomenon, as developmental neuroscientists work to understand the factors that give rise to the intricate structures and functions of the many cell types of our nervous system.

      Overall, my reaction to this work is that this is just one instantiation of many models that the author could have built, given their stated goals. Would other models behave similarly? This question is not well explored, and as a result, claims about interpreting these models and using them to make experimental predictions should be taken warily. I give more detailed and specific comments below.

      Line 109. After reading the rest of the manuscript, I worry about the conclusion voiced here, which implies that the model will extrapolate well to manipulations of all the model components. How were the values of model parameters selected? The text implies that these were selected to be biologically plausible, but many seem far off. The density of potential synapses, for example, seems very low in the simulations compared to the density of axons/boutons in the cortex; what constitutes a potential synapse? The assumption of perfect correlations between synapses in the activity groups is flawed, even for synapses belonging to the same presynaptic cell. The density of postsynaptic cells is also orders of magnitude off, etc. Ideally, every claim made about the model's output should be supported by a parameter sensitivity study. The authors performed few explorations of parameter sensitivity, and many of the choices made seem ad hoc.

      Many potentially important phenomena seem to be excluded. I realize that no model can be complete, but the choice of which phenomena to include or exclude from this model could bias studies that make use of it and is worth serious discussion. The development of axons is concurrent with dendrite outgrowth, is highly dynamic, and perhaps better understood mechanistically. In this model, the inputs are essentially static. Growing dendrites acquire and lose growth cones that are associated with rapid extension, but these do not seem to be modeled. Postsynaptic firing does not appear to be modeled, which may be critical to activity-dependent plasticity. For example, changes in firing are a potential explanation for the global changes in dendritic pruning that occur following the outgrowth phase.

      Line 167. There are many ways to include activity-independent and activity-dependent components in a model, and not every such model shows stability. A key feature seems to be that larger arbors result in reduced growth and/or increased retraction, but this could be achieved in many ways (whether activity-dependent or not). It's not clear that this result is due to the combination of activity-dependent and -independent components in the model, or conceptually why that should be the case.

      Line 183. The explanation of overshoot in terms of the different timescales of synaptic additions versus activity-dependent retractions was not something I had previously encountered and is an interesting proposal. Have these timescales been measured experimentally? To what extent is this a result of fine-tuning of simulation parameters?

      Line 203. This result seems at odds with results that show only a very weak bias in the tuning distribution of inputs to strongly tuned cortical neurons (e.g. work by Arthur Konnerth's group). This discrepancy should be discussed.

      Line 268. How does the large variability in the size of the simulated arbors relate to the relatively consistent size of arbors of cortical cells of a given cell type? This variability suggests to me that these simulations could be sensitive to small changes in parameters (e.g. to the density or layout of presynapses).

      The modeling of dendrites as two-dimensional will likely limit the usefulness of this model. Many phenomena- such as diffusion, random walks, topological properties, etc - fundamentally differ between two and three dimensions.

      The description of wiring lengths as 'approximately optimal' in this text is problematic. The plotted data show that the wiring lengths are several deviations away from optimal, and the random model is not a valid instantiation of the 2D non-overlapping constraints the authors imposed. A more appropriate null should be considered.

      It's not clear to me what the authors are trying to convey by repeatedly labeling this model as 'mechanistic'. The mechanisms implemented in the model are inspired by biological phenomena, but the implementations have little resemblance to the underlying biophysical mechanisms. Overall my impression is that this is a phenomenological model intended to show under what conditions particular patterns are possible. Line 363, describing another model as computational but not mechanistic, was especially unclear to me in this context.

    1. “The Law” itself changed in this revolution. America stopped using laws as a code of conduct and custom, and instead started using them as devices to satisfy the preferences of managerial elite. Traditional formulas (rule of law, just & uniform relationships among citizens) undermined the power of the new elite, which is why they showed no regard for them, and in fact degraded them. “Law is concerned with rights.  Administration is concerned with results.”

      I'm curious about what specifically they're referring to and where that quote comes from. If it's from the book, it has no real meaning. If it's a reference to a quote from a person in a position of "the elite," then that's at least potentially a different story, but the meaning would still need to be evidenced. Otherwise, it takes a giant leap of assumption for the reader to go from one contextless quote to a conclusion.

    1. If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through

      I really like this statement from Bo Burnham, it's humorous but true. I agree that social media can be seen as a huge bubble, encompassing all sorts of emotions and influences. Whether these are positive or negative depends entirely on how individuals choose to use and interact with the platform.

    2. “If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there’s a part of social anxiety I think that feels like you’re a little bit disassociated from yourself. And it’s sort of like you’re in a situation, but you’re also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I’ll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data.”

      I completely agree with this. I think with social media there is a disconnect between the online you and the in-person you. You can be as selective as you want with your online identity in a way that you can't so much in person. Even though we all pretty much do this online, it can still feel very real. There is constant comparison of our online selves, and of others' online selves, to our real-life selves, which can create a lot of isolation and self-worth issues. I think we are starting to see the negative impacts of social media really emerging. I am also starting to see some more pushback online and a sort of exhaustion when it comes to things like filters. I hope there will be a greater movement against this sort of stuff.

    3. If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through.

      It's true that social media has both positive and negative effects on our lives. While it can connect us with people around the world, it can also create a sense of loneliness and overstimulation. The paragraph highlights the fact that social media can intensify the struggles that kids already face. It's important to recognize these challenges and work towards finding a balance in our use of technology.

    1. it depressing how many young lesbians now feel that, because they donot perform or feel invested in conventional femininity, they can no longerbe women. And so they shift from identifying as lesbian women to straightmen. Compulsory heterosexuality all over again.”

      This was a really interesting quote to read. I can not personally speak to this experience and I have never heard this thought process, so it's really eye opening to see just how harmful lesbian stereotyping can be.

    1. Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.”

      I find this very interesting. I suspected that there might be some way for social media to control what is being posted so that their platform is safer. However, people will always find ways around it, like the "unalive" example or just changing the letters to a special character that resembles it. Personally, I don't think this helps... since even throughout the day, there are negative things that we see and this isn't really any exception. I can't speak for people experiencing this, but I feel like these things don't really affect me but I have seen these algorithms at work before, I just don't think it's effective yet.

    1. I may not understand this right, but if I'm creating an original idea, who am I citing for? Maybe it's just something I never understood properly how that worked and that's the issue. Part of the reason I was asking if anyone knew of actual academic research that had been published while using a ZK was so I could see how it worked, in action.

      reply to u/ruthlessreuben at https://www.reddit.com/r/Zettelkasten/comments/w5yz0n/comment/ihxojq0/?utm_source=reddit&utm_medium=web2x&context=3

      u/ruthlessreuben, as a historian, you're likely to appreciate some variations which aren't as Luhmann-centric. Try some of the following:

      The following note-taking manuals (or works which cover note taking in part) all bear close similarities to Luhmann's system, but were written by historians and related to the ideas of "historical method":

      Although she's a sociologist, you might also appreciate Beatrice Webb's coverage which also has some early database collection flavor:

      Webb, Sidney, and Beatrice Webb. Methods of Social Study. London; New York: Longmans, Green & Co., 1932. http://archive.org/details/b31357891.

    1. It’s almost like he’s saying that, compared to Rome and the old Renaissance painters who captured a viewpoint from which no one had ever seen, making it grander, New York’s viewpoints from tall buildings down onto the city aren’t all they’re made out to be, because everyone can look from that viewpoint (down onto others), and this belief that you are higher than others is just another sale. Or just another lie to make you feel good about yourself. “IT’S HARD TO BE DOWN WHEN YOU’RE UP,” said a poster in a building.

    1. And then the other piece of it is just thinking about the way kids and teens develop. Generally, they don’t become really interested in big, global issues like that until late high school and college. And where are the links between technology use and depression strongest? The youngest. Where do you see the largest increases in depression, self-harm, and suicide? It’s 10- to 14-year-olds. In fact, it’s 10- to 12-year-olds when you really boil it down. That’s not usually going to be the group who is really dialed into world issues. What they’re concerned about is what their friends are doing.

      This seems valid and reasonable.

    2. I’ve had debates with friends where they advanced the notion that the world’s degraded state — climate change, regular school shootings, political strife — might be a primary reason younger generations are so miserable. Or, to go with an angle I find more plausible: The news isn’t necessarily worse, but the internet, with its inherent negativity bias, spins things as bleaker than ever.I think that’s exactly it. In Generations, I spent a lot of time on this, because it was a theme that just came up over and over and over — this really pervasive negativity, sometimes crossing over into denialism, especially online. And I think you have to take a step back from that and ask the question: Is 2023 really worse than boomers getting drafted into Vietnam? And I’ll keep going. Is it really worse than the ’80s when we thought the USSR was going to drop the bomb any second and the world was going to end? Is it really worse than millennials graduating into the Great Recession? To be fair, the late ’90s, when I was coming of age, was pretty untroubled in a world-on-fire sense.There are times that are better and worse, but every time has its challenges. And are the challenges we face right now really worse than the challenges of previous eras? I think that’s an extremely subjective question.

      Yeah, it is a subjective question. Part of the subjectiveness: values make a difference. If no one gives a shit about climate change and species extinction, then I bet everything looks rosier. So it's easier to blame people for caring

    1. I'm not actually setting a productivity goal; I'm just tracking metadata because it's related to my research, of which the ZettelKasten is one subject. That being said, in your other post you point to "Quality over Quantity". What, in your opinion, is a quality note? Size? Number of links? Subjective "goodness"?

      reply to u/jordynfly at https://www.reddit.com/r/Zettelkasten/comments/13b0b5c/comment/jjcu3cn/?utm_source=reddit&utm_medium=web2x&context=3

      I'm curious what your area of research is? What are you studying with respect to Zettelkasten?

      Caveat notetarius. Note collections are highly idiosyncratic to the user or intended audience, thus quality will vary dramatically on the creator's needs and future desires and potential uses. Contemporaneous, very simple notes can be valuable for their initial sensemaking and quite often in actual practice stop there.

      Ultimately, only the user can determine perceived quality and long term value for themselves. Future generations of historians, anthropologists, scholars, and readers, might also find value in notes and note collections, but it seems rare that the initial creators have written them with future readers and audiences in mind. Often they're less useful as the external reader is missing large swaths of context.

      For my own personal notes, I consider high quality notes to be well-sourced, highly reusable, easily findable, and reasonably tagged/linked. My favorite, highest quality notes are those that are new ideas which stem from the combination of two high quality notes. With respect to subjectivity, some of my philosophy is summarized by one of my favorite meta-zettels (alt text also available) from zettelmeister Umberto Eco.

      Anecdotally, 95% of my notes are done digitally and in public, but I've only got scant personal evidence that anyone is reading or interacting with them. I never write them with any perceived public consumption in mind (beyond the readers of the finished pieces that ultimately make use of them), but it is often very useful to get comments and reactions to them. I'm only aware of a small handful of people publishing their otherwise personal note collections (usually subsets) to the web (outside of social media presences which generally have a different function and intent).

      Intellectual historians have looked at and documented external use cases of shared note collections, commonplace books, annotated volumes, and even diaries. There are even examples of published (usually posthumously) commonplace books, waste books, etc., but these are often for influential public and intellectual figures. Here Ludwig Wittgenstein's Zettel, Walter Benjamin's Arcades Project, Vladimir Nabokov's The Original of Laura, Roland Barthes' Mourning Diary, Georg Christoph Lichtenberg's Waste Books, Ralph Waldo Emerson, Ronald Reagan's card index commonplace, Stobaeus' Anthology, W. H. Auden's A Certain World, and Robert Southey’s Common-Place Book come quickly to mind, not to mention digitized scholarly collections of Niklas Luhmann, W. Ross Ashby, S.D. Goitein, Jonathan Edwards' Miscellanies, and Aby Warburg's notes. Some of these latter will give you an idea of what they may have thought quality notes to have been for them, but often they mean little if nothing to the unstudied reader because they lack broader context or indication of linkages.

    1. You are more important than my personal preferences.

      I think this idea needs to be revolutionized and spread. While worksheets, standardized testing, and the same narrow curriculum might make things efficient, they make some students settle lower than where they are and get bored, and make other students feel as if they do not fit and are delayed. While it's so hard to change a whole system, providing options for representation, expression, and engagement, even in little increments, will make students feel that they aren't just a number who needs to reach an educational milestone on one particular date.

    1. Going beyond accommodations involves designing for flexibility, choice, and empowerment.

      This is a great point and one that is easily forgotten. It's a common saying and it's true, that when we plan for neuro-divergent learners, we are not just benefitting those students but all of the students in the classroom. I think giving students options for the kind of work they complete, and how they complete it, can result in increased engagement, learning, and satisfaction in students.

    1. There is no advantage in imagining some super-vector that has the γi vectors as its components.

      I remember wanting to do that, when I learned about this style of vector component notation originally. I didn't find any advantage of thinking about it that way, but also, it's not obvious to me that there isn't one that I just haven't found.

    1. Circling back around to this after a mention by Tim Bushell at Dan Allosso's Book Club this morning. Nicole van der Hoeven has been using it for a while now and has several videos.

      Though it's called Napkin, which conjures the idea of (wastebook) notes scribbled on a napkin, it's a card-based UI which has both manual and AI-generated tags in a constellation-like view. It allows creating "stacks" of notes which are savable and archivable in an outline-esque form (though the outline doesn't appear collapsible) as a means of composition.

      It's got a lot of web clipper tooling for saving and some dovetails for bringing in material from Readwise, but doesn't have great data export (JSON, CSV) at the moment. (Not great here means that one probably needs to do some reasonably heavy lifting to do the back and forth with other tools and may require programming skills.)

      At present, it looks like just another tool in the space but could be richer with better data dovetailing with other services.

    1. I try mybest to operate on a golden rule of respect and kindness for all living things

      Agreed. It's not that hard to be a kind person, and that sticks with people. Those are the types of people I enjoy hanging out with, and just being respectful with people goes a long way.

    1. To show that Villin is required for proper morphogenesis of Islet+ cells in the papilla, we performed tissue-specific CRISPR knockout using a combination of three validated sgRNAs spanning most of the coding sequence (Supplemental Figure 3E).

      It's really impressive that this kind of tissue-specific knockout in just a few cells works well enough to see a measurable effect. I imagine the knockout efficiency must be very high!

    1. Reviewer #3 (Public Review):

      This manuscript describes interesting experiments on how information from the two eyes is combined in cortical areas, sub-cortical areas, and perception. The experimental techniques are strong and the results are potentially quite interesting. But the manuscript is poorly written and tries to do too much in too little space. I had a lot of difficulty understanding the various experimental conditions, the complicated results, and the interpretations of those results. I think this is an interesting and useful project so I hope the authors will put in the time to revise the manuscript so that regular readers like myself can better understand what it all means.

      Now for my concerns and suggestions:

      The experimental conditions are novel and complicated, so readers will not readily grasp what the various conditions are and why they were chosen. For example, in one condition different flicker frequencies were presented to the two eyes (2Hz to one and 1.6Hz to the other) with the flicker amplitude fixed in the eye presented to the lower frequency and the flicker amplitude varied in the eye presented to the higher frequency. This is just one of several conditions that the reader has to understand in order to follow the experimental design. I have a few suggestions to make it easier to follow. First, create a figure showing graphically the various conditions. Second, come up with better names for the various conditions and use those names in clear labels in the data figures and in the appropriate captions. Third, combine the specific methods and results sections for each experiment so that one will have just gone through the relevant methods before moving forward into the results. The authors can keep a general methods section separate, but only for the methods that are general to the whole set of experiments.

      I wondered why the authors chose the temporal frequencies they did. Barrionuevo et al (2014) showed that the human pupil response is greatest at 1Hz and is nearly a log unit lower at 2Hz (i.e., the change in diameter is nearly a log unit lower; the change in area is nearly 2 log units lower). So why did the authors choose 2Hz for their primary frequency? And why did the authors choose 1.6Hz which is quite close to 2Hz for their off frequency? The rationale behind these important decisions should be made explicit.

      By the way, I wondered if we know what happens when you present the same flicker frequencies to the two eyes but in counter-phase. The average luminance seen binocularly would always be the same, so if the pupil system is linear, there should be no pupil response to this stimulus. An experiment like this has been done by Flitcroft et al (1992) on accommodation where the two eyes are presented stimuli moving oppositely in optical distance and indeed there was no accommodative response, which strongly suggests linearity.
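      The reviewer's linearity argument is easy to check numerically: if the two eyes receive the same flicker in counter-phase and the binocular combination is linear (a simple average of luminance), the modulations cancel exactly. A minimal sketch, with illustrative frequency and amplitude values that are not taken from the paper:

```python
import numpy as np

# Illustrative counter-phase flicker: same frequency in each eye, 180 degrees apart.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # 1 s of samples
f = 2.0                                          # Hz, illustrative
left = 1.0 + 0.5 * np.sin(2 * np.pi * f * t)
right = 1.0 + 0.5 * np.sin(2 * np.pi * f * t + np.pi)

# A purely linear binocular combination (mean luminance) is constant over time,
# so a linear pupil system would show no response to this stimulus.
binocular_mean = (left + right) / 2.0
print(np.ptp(binocular_mean))  # peak-to-peak modulation, numerically ~0
```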

      Figures 1 and 2 are important figures because they show the pupil and EEG results, respectively. But it's really hard to get your head around what's being shown in the lower row of each figure. The labeling for the conditions is one problem. You have to remember how "binocular" in panel c differs from "binocular cross" in panel d. And how "monocular" in panel d is different than "monocular 1.6Hz" in panel e. Additionally, the colors of the data symbols are not very distinct so it makes it hard to determine which one is which condition. These results are interesting. But they are difficult to digest.

      The authors make a strong claim that they have found substantial differences in binocular interaction between cortical and sub-cortical circuits. But when I look at Figures 1 and 2, which are meant to convey this conclusion, I'm struck by how similar the results are. If the authors want to continue to make their claim, they need to spend more time making the case.

      Figure 5 is thankfully easy to understand and shows a very clear result. These perceptual results deviate dramatically from the essentially winner-take-all results for spatial sinewaves shown by Legge & Rubin (1981); whom they should cite by the way. Thus, very interestingly the binocular combination of temporal variation is quite different than the binocular combination of spatial variation. Can the pupil and EEG results also be plotted in the fashion of Figure 5? You'd pick a criterion pupil (or EEG) change and use it to make such plots.

      My main suggestion is that the authors need to devote more space to explaining what they've done, what they've found, and how they interpret the data. I suggest therefore that they drop the computational model altogether so that they can concentrate on the experiments. The model could be presented in a future paper.

    1. Reviewer #1 (Public Review):

      People can perform a wide variety of different tasks, and a long-standing question in cognitive neuroscience is how the properties of different tasks are represented in the brain. The authors develop an interesting task that mixes two different sources of difficulty, and find that the brain appears to represent this mixture on a continuum, in the prefrontal areas involved in resolving task difficulty. While these results are interesting and in several ways compelling, they overlap with previous findings and rely on novel statistical analyses that may require further validation.

      Strengths<br /> 1. The authors present an interesting and novel task for combining the contributions of stimulus-stimulus and stimulus-response conflict. While this mixture has been measured in the multi-source interference task (MSIT), this task provides a more graded mixture between these two sources of difficulty.

      2. The authors do a good job triangulating regions that encode conflict similarity, looking for the conjunction across several different measures of conflict encoding.

      3. The authors quantify several salient alternative hypotheses and systematically distinguish their core results from these alternatives.

      4. The question that the authors tackle is of central theoretical importance to cognitive control, and they make an interesting contribution to this question.

      Concerns<br /> 1. It's not entirely clear what the current task can measure that is not known from the MSIT, such as the additive influence of conflict sources in Fu et al. (2022), Science. More could be done to distinguish the benefits of this task from MSIT.

      2. The evidence from this previous work for mixtures between different conflict sources make the framing of 'infinite possible types of conflict' feel like a strawman. The authors cite classic work (e.g., Kornblum et al., 1990) that develops a typology for conflict which is far from infinite, and I think few people would argue that every possible source of difficulty will have to be learned separately. Such an issue is addressed in theories like 'Expected Value of Control', where optimization of control policies can address unique combinations of task demands.

      3. Wouldn't a region that represented each conflict source separately still show the same pattern of results? The degree of Stroop vs Simon conflict is perfectly negatively correlated across conditions, so wouldn't a region that *just* tracks Stroop conflict show these RSA patterns? The authors show that overall congruency is not represented in DLPFC (which is surprising), but they don't break it down by whether this is due to Stroop or Simon congruency (I'm not sure their task allows for this).

      4. The authors use a novel form of RSA that concatenates patterns across conditions, runs and subjects into a giant RSA matrix, which is then used for linear mixed effects analysis. This appears to be necessary because conflict type and visual orientation are perfectly confounded within the subject (although, if I understand, the conflict type x congruence interaction wouldn't have the same concern about visual confounds, which shouldn't depend on congruence). This is an interesting approach but should be better justified, preferably with simulations validating the sensitivity and specificity of this method and comparing it to more standard methods.

      A chief concern is that the same pattern contributes to many entries in the DV, which has been addressed in previous work using row-wise and column-wise random effects (Chen et al., 2017, Neuroimage). It would also be informative to know whether the results hold up to removing within-run similarity, which can bias similarity measures (Walther et al., 2016, Neuroimage).

      Another concern is the extent to which across-subject similarity will only capture consistent patterns across people, making this analysis very similar to a traditional univariate analysis (and unlike the traditional use of RSA to capture subject-specific patterns).

      5. Finally, the authors should confirm all their results are robust to less liberal methods of multiplicity correction. For univariate analysis, they should report the effects from the standard p < .001 cluster forming threshold for univariate analysis (or TFCE). For multivariate analyses, FDR can be quite liberal. The authors should consider whether their mixed-effects analyses allow for group-level randomization, and consider (relatively powerful) Max-Stat randomization tests (Nichols & Holmes, 2002, Hum Brain Mapp).
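      The anti-correlation raised in concern 3 can be illustrated with toy numbers (the five graded mixtures below are hypothetical, not the paper's actual conditions): a region tracking only one conflict source produces exactly the same representational geometry as one tracking the other, so RSA alone cannot distinguish them.

```python
import numpy as np

# Hypothetical graded mixtures: Stroop and Simon conflict sum to 1,
# so they are perfectly negatively correlated across conditions.
stroop = np.array([1.0, 0.75, 0.5, 0.25, 0.0])
simon = 1.0 - stroop

# Representational dissimilarity matrices if a region tracked ONLY one source:
rdm_stroop = np.abs(stroop[:, None] - stroop[None, :])
rdm_simon = np.abs(simon[:, None] - simon[None, :])

# The two geometries are identical.
print(np.allclose(rdm_stroop, rdm_simon))  # True
```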

    1. It’s one thing to say that an object is possible according to the laws of physics; it’s another to say there’s an actual pathway for making it from its component parts. “Assembly theory was developed to capture my intuition that complex molecules can’t just emerge into existence because the combinatorial space is too vast,” Cronin said.
      • Quote
        • "Assembly theory was developed to capture my intuition that complex molecules can’t just emerge into existence because the combinatorial space is too vast,"
        • Author
          • Lee Cronin
    1. Chain letters

      This experience made me understand that it's important to think carefully and not just follow what everyone else is doing. That way, we can avoid being tricked by scams or spreading wrong information.

    1. In what ways have you participated in helping content go viral?

      I have participated in helping content go viral just by interacting with it: reposting, commenting, liking, sharing, etc. It's crazy that you can really help a post or video go viral by literally just seeing it and adding to the views, or simply liking it.

    1. if content generated from models becomes our source of truth, the way we know things is simply that a language model once said them. Then they're forever captured in the circular flow of generated information

      This is definitely a feedback loop in play, as LLMs already emulate bland SEO-optimised text very well because most of the internet is already full of that crap. It's just a bunch of sites, and mostly other sources, that serve as source of K though, is it not? So the feedback loop exposes to more people that they shouldn't see 'the internet' as the source of all truth? And is this feedback loop not pointing to people simply stopping to take this stuff in (the writing part does not matter when there's no reader for it)? Unless curated, filtered etc. by verifiable human actors? Are we about to see personal generative agents that can do lots of pattern hunting for me on my [[Social Distance als ordeningsprincipe 20190612143232]] and [[Social netwerk als filter 20060930194648]]?

    2. It’s difficult to find people who are being sincere, seeking coherence, and building collective knowledge in public.While I understand that not everyone wants to engage in these activities on the web all the time, some people just want to dance on TikTok, and that’s fine!However, I’m interested in enabling productive discourse and community building on at least some parts of the web. I imagine that others here feel the same way.Rather than being a primarily threatening and inhuman place where nothing is taken in good faith.

      Personal websites like mine since mid 90s fit this. #openvraag what incentives are there actually for people now to start their own site for online interaction, if you 'grew up' in the silos? My team is largely not on-line at all, they use services but don't interact outside their own circles.

    1. For $1,900.00 ?

      reply to rogerscrafford at tk

      Fine furniture comes at a fine price. 🗃️🤩 I suspect that it won't sell for quite a while and one could potentially make an offer at a fraction of that to take it off their hands. It might bear considering that if one had a practice large enough to fill half or more, then that price probably wouldn't seem too steep for the long term security and value of the contents.

      On a price per card of storage for some of the cheaper cardboard or metal boxes you're going to pay about $0.02-0.03 per card, but you'd need about 14 of those to equal this and those aren't always easy to stack and access regularly. With this, even at the full $1,900, you're looking at storage costs of $0.10/card, but you've got a lot more ease of use which will save you a lot of time and headache as more than adequate compensation, particularly if you're regularly using the approximately 20,400 index cards it would hold. Not everyone has the same esthetic, but I suspect that most would find that this will look a lot nicer in your office than 14 cheap cardboard boxes. That many index cards even at discount rates are going to cost you about $825 just in cards much less beautiful, convenient, and highly usable storage.

      Even for some of the more prolific zettelkasten users, this sort of storage is about 20 years of use and if you compare it with $96/year for Notion or $130/year for Evernote, you're probably on par for cost either way, but at least with the wooden option, you don't have to worry about your note storage provider going out of business a few years down the line. Even if you go the "free" Obsidian route, with computers/storage/backups over time, you're probably not going to come out ahead in the long run. It's not all apples to apples comparison and there are differences in some of the affordances, but on balance and put into some perspective, it's probably not the steep investment it may seem.
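      A back-of-the-envelope check of the figures in this reply (the price and capacity are rough estimates from the listing, not exact quotes):

```python
# Rough figures from the reply above.
cabinet_price = 1900.00
capacity = 20_400            # approximate index-card capacity

per_card = cabinet_price / capacity
print(f"${per_card:.3f} per card")   # just under a dime per card

# 20-year subscription comparison mentioned above
notion_20yr = 96 * 20        # $1,920
evernote_20yr = 130 * 20     # $2,600
print(notion_20yr, evernote_20yr)
```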

      And as an added bonus, while you're slowly filling up drawers, as a writer you might appreciate the slowly decreasing wine/whiskey bottle storage over time? A 5 x 8 drawer ought to fit three bottles of wine or as many fifths of Scotch. It'll definitely accommodate a couple of magnums of Jack Daniels. 🥃🍸🍷 My experience also tells me that an old fashioned glass can make a convenient following block in card index boxes.

      A crystal old fashioned glass serves as a following block to some index cards and card dividers in a Shaw-Walker card index box (zettelkasten). On the table next to the index are a fifth of Scotch (Glenmorangie) and a bowl of lemons.

    1. Clickbait

      I think I see this most often out of all the 'gaming' of these algorithms. I believe it's just the easiest way to do it because you don't need any information; the goal is simply to promote interaction with whatever is in the link.

    1. Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes.

      I just read a reading about design strategy employed by most websites and applications, which indicates that most recommendation algorithms still use the gender binary framework because it's represented by the majority of the audience. However, this approach is obviously biased and unfair by neglecting the diversity of sexuality and users' identity.

    2. who has the power to change Twitters recommendation algorithm, blames the users for the results:

      I don't think it's the users' fault, but yes, when you pay more attention to content you hate, you will get more of it. This has happened to me before; you just have to ignore the content you hate.

    1. “I was told by [a district reading administrator] that for too long teachers in this district have thought that their job was to create curriculum. I was told that is not our job. Our job is to ‘deliver’ [she makes quote signs in the air with her fingers] curriculum.”

      This has implications for instructional designers and is one of the main reasons why a teacher of record should participate in the design of a course to the fullest extent possible. It isn't just about "buy in". It's about authenticity, authority, and teacher agency.

    1. Author Response

      Reviewer 1 (Public Review):

      In this paper, Reato, Steinfeld et al. investigate a question that has long puzzled neuroscientists: what features of ongoing brain activity predict trial-to-trial variability in responding to the same sensory stimuli? They record spiking activity in the auditory cortex of head-fixed mice as the animals performed a tone frequency discrimination task. They then measure both overall activity and the synchronization between neurons, and link this ’baseline state’ (after removing slow drifts) of cortex to decision accuracy. They find that cortical state fluctuations only affect subsequent evoked responses and choice behavior after errors. This indicates that it’s important to take into account the behavioral context when examining the effects of neural state on behavior.

      Strengths of this work are the clear and beautiful presentation of the figures, and the careful consideration of the temporal properties of behavioral and neural signals. Indeed, slowly drifting signals are tricky as many authors have recently addressed (e.g. Ashwood, Gupta, Harris). The authors are well aware of the difficulties in correlating different signals with temporal and cross-correlation (such as in their ’epoch hypothesis’). To disentangle such slow trends from more short-lived state fluctuations, they remove the impact of the past 10 trials and continue their analyses with so-called ’innovations’ (a term that is unusual, and may more simply be replaced with ’residuals’).

      The terms ‘innovations’ and ‘residuals’ are sometimes used interchangeably. We used innovations because that’s how they were introduced in the signal processing literature (i.e., Kailath, T (1968). ”An innovations approach to least-squares estimation–Part I: Linear filtering in additive white noise.” IEEE transactions on automatic control). We try to be explicit in the text about the formal definition of this quantity, to avoid problems with terminology.

      I do wonder if this throws out the baby with the bathwater. If the concern is statistical confound, the ’session permutation’ method (Harris) may be better suited. If the concern is that short-term state fluctuations are more behaviorally relevant (and obscured by slow drifts), then why are the results with raw signals in the supplement (Suppfig 8) so similar?

      The concern was statistical confound, although this concern is ameliorated when using a mixed model approach and focusing on fixed effects. However, our approach allowed us to assess the relative importance of slow versus single-trial timescales in the predictive relationship between cortical state (and arousal) and behavior, revealing that, in the conditions of our experiment, only the fast timescales are relevant. Because of this, we think that the baby wasn’t thrown out with the bathwater as, qualitatively, no new phenomenology was revealed when the slow components of the signals were included. In hindsight, it is true that the results we obtained suggest that maybe the effort we made to isolate the fast component of the signals was unjustified. However, this can only be known after both options have been tried, as we did. Moreover, we started using innovations based on the results in Figure 2 where, as we show, the use of innovations does make a difference, even at the level of fixed effects in a mixed model. We agree that we could have used the ‘session permutation’ method, but given the depth at which we have explored this issue in the manuscript already, and the clarity of the results, we think that adding a third method would only make reading the manuscript more difficult without adding any substantially new content.
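      As a concrete illustration of the 'innovations' idea discussed above (the residuals left after removing the linear influence of the preceding trials), here is a minimal numpy sketch; the lag count of 10 matches the description in the review, but the implementation details are illustrative, not the authors' actual pipeline:

```python
import numpy as np

def innovations(x, k=10):
    """Residuals of x after regressing out the previous k values (plus intercept)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Columns are lags 1..k: predictor j is x[t - (j + 1)] for t = k..n-1.
    lags = np.column_stack([x[k - j - 1 : n - j - 1] for j in range(k)])
    X = np.column_stack([np.ones(n - k), lags])
    beta, *_ = np.linalg.lstsq(X, x[k:], rcond=None)
    return x[k:] - X @ beta

# A slow linear drift is fully predictable from its own past,
# so its innovations are (numerically) zero:
drift = np.arange(100.0)
print(np.abs(innovations(drift)).max())
```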

      While the authors are correct that go-nogo tasks have drawbacks in dissociating sensitivity from response bias, they only cursorily review the literature on 2AFC tasks and cortical state. In particular, it would be good to discuss how the specific method - spikes, EEG (Waschke), widefield (Jacobs) and algorithm for quantifying synchronization may affect outcomes. How do these population-based measures of cortical state relate to those described extensively with slightly different signals, notably LFP or EEG in humans (e.g. work by Saskia Haegens, Niko Busch, reviewed in https://doi.org/10.1016/j.tics.2020.05.004)? This review also points out the importance of moving beyond simple measures of accuracy and using SDT, which would be an interesting improvement for this paper too.

      We thank the reviewer for pointing us towards the oscillation-based brain-state literature in humans. We have expanded the paragraph in the discussion where we compare our results with previous work in order to (i) elaborate on the literature on 2AFC tasks, (ii) specifically address the literature linking alpha power in the pre-stimulus baseline and psychophysical performance, and (iii) mention different methods for assessing desynchronization. Our view is that absence of low-frequency power is a robust measure which can be assessed using different types of signals (spikes, imaging, LFP, EEG). That said, the relationship between desynchronization and behavior appears subtle and variable, especially within discrimination paradigms. These issues are discussed in the paragraph starting in line 527 in the text.

      Regarding the use of SDT, we had already established that our main finding could be expressed as a significant interaction between FR/Synch and the stimulus-strength regressor when predicting choice after errors (Supplementary Fig. 4A in original manuscript), which is equivalent to a cortical state-dependent increase in d′ after the mice made a mistake. In order to consider a possible effect of cortical state on the 'criterion' (i.e., an effect on the bias of the mice towards either response spout), we re-ran this GLMM but added the cortical state regressors as main effects. The results show that the FR-Synch predictor is only significantly greater than zero as an interaction after errors (p = 0.0025). As a main effect, it is not significantly different from zero either after errors (p = 0.28) or after correct trials (p = 0.97). We have included this analysis as Figure 3-figure supplement 1B (replacing the previous Supplementary Fig. 4A) and commented on it in the text (lines 222-225).
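      For readers less familiar with the SDT framing: a state-dependent increase in sensitivity with no change in bias corresponds to a change in d′ with a roughly stable criterion. A hedged sketch using the standard SDT formulas, with invented hit/false-alarm rates purely for illustration:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

def sdt(hit_rate, fa_rate):
    """Standard signal-detection sensitivity (d') and criterion (c)."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented rates: better discrimination in a high-FR/desynchronized post-error state.
d_hi, c_hi = sdt(0.90, 0.20)   # d' ~ 2.12
d_lo, c_lo = sdt(0.80, 0.30)   # d' ~ 1.37
print(d_hi > d_lo)             # True: higher sensitivity in the high state
```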

      Reviewer 2 (Public Review):

      The relationship between measures of brain state, behavioral state, and performance has long been speculated to be relatively simple - with arousal and engagement reflecting EEG desynchronization and improved performance associated with increases in engagement and attention. The present study demonstrates that the outcome of the previous trial, specifically a miss, allows these associations to be seen - while a correct response appears less likely to do so. This is an interesting advance in our understanding of the relationship between brain state, behavioral state, and performance.

      This is probably just a typo, but we would like to clarify that the relevant outcome in the previous trial is not a miss, but an incorrect choice in an otherwise valid trial (i.e., a trial with a response within the allowed response window).

      While the study is well done, the results are likely to be specific to their trial structure and states exhibited by the mice. To examine the full range of arousal states, it needs to be demonstrated that animals are varying between near-sleep (e.g. drowsiness) and high-alertness such as in rapid running. The fact that the trials occurred rapidly means that the physiological and neural variables associated with each trial will overlap with upcoming trials - it takes a mouse more than a few seconds to relax from a previous miss or hit, for example. Spreading the rapidity of the trials out would allow for a broader range of states to be examined, and perhaps less cross-talk between adjacent trials. The interpretation of the results, therefore, must be taken in light of the trial structure and the states exhibited by the mice.

      We thank the reviewer for the positive assessment of our work and also for raising this point in particular. This motivated us to look more carefully at this issue, with results that, we believe, strengthen our study.

    1. Later, I reviewed the Github plugin for Steampipe, that also implements a search for repositories. It exposes a table called github_search_repository, in which you can fill a query column. That column is defined in the code with Transform: transform.FromQual("query"), which takes its value from the incoming query and reflects it back on the result table. That’s a really clean way of handling precisely that requirement, and I assume that the developers added it just for that. It’s not mentioned in the docs (at least, not that I could find it, nor Google), but some example repositories (or the code autocompletion, if using an IDE that supports it) will reveal to you the hidden secret of the FromQual transform. Thus, when you review the plugin code, you’ll see that there is no Wallet field in the response struct, since it’s not required: the plugin scaffolding will add it.
    1. So, don’t procrastinate and spend too much time perfecting a model. Often, doing that work is just a way to trick yourself into thinking you’re progressing, but in reality it's just procrastination to actual doing.

      good belief to have

    1. The way I see it, there's a spectrum of how much human input is required for a task: Human task (0%), Tool (50%), Machine (100%). When a task requires mostly human input, the human is in control. They are the one making the key decisions and it's clear that they're ultimately responsible for the outcome. But once we offload the majority of the work to a machine, the human is no longer in control. There's a No man's land where the human is still required to make decisions, but they're not in control of the outcome. At the far end of the spectrum, users feel like machine operators: they're just pressing buttons and the machine is doing the work. There isn't much craft in operating a machine. Automating tasks is going to be amazing for rote, straightforward work that requires no human input. But if those tasks can only be partially automated, the interface is going to be crucial.

      Thinking about big ML models and how they can be more like tools than total machines (a distinction without great linguistic provenance but with obvious immediate utility)

  3. Apr 2023
    1. survey

      Afterward, student leaders typically announce the election results to the school. For example, the lead teacher at SHS indicated the below after receiving the election results.

      "We will share these detailed results and insights (including the anonymous data you mention below) with the student leaders, and they will showcase them to the entire school in a presentation. Having this level of detail will help significantly in highlighting how the core values of Sage Hill School are very much in harmony with the core values of the individual students. This will without question strengthen our sense of community even further and encourage future participation in these events."

      This was in response to the 'election results' email format (the email for SHS, to which the lead teacher responded, is pasted below). The intention for this email is to inspire schools to share the results (e.g., at a school assembly) in a way that promotes effective giving more widely.

      "Dear SHS charity elections team,

      Thank you for running a charity election at Sage Hill School! Giving What We Can has reviewed your school's survey data, and we are excited to share the election results below.

      Election Results
      - 1st place: SCI Foundation -- $712 -- 185 votes
      - 2nd place: Clean Air Task Force -- $0 -- 100 votes
      - 3rd place: GiveDirectly -- $0 -- 71 votes

      [Image: Results, SHS.png]

      The winning charity in your school's election is the SCI Foundation, and the $712 gift can help protect 1,780 children from schistosomiasis. If student leaders are interested in reflecting on student takeaways from the event, they are welcome to view the anonymous data (password: GWWCSHS23). Students overwhelmingly indicated they "voted for a cause they believe in" and "thought critically about what makes a charity effective," and a selection of anonymous student takeaways are listed below.

      1. It made me want to be conscious about how I help others.
      2. I’ve opened my perspective to the lives others lead and how a few dollars can impact someone enormously.
      3. It’s made me think about the cost effectiveness of charities.
      4. It made me better understand how even a small amount of money can truly make a difference.
      5. This has reminded me that we must think about and help others, not just ourselves.
      6. It made me think more critically on how exactly I could use my money to help others.
      7. I will continue to donate to foundations for a good cause due to the charity elections. I’ve seen a glimpse of where donations go and how they impact others.
      8. I learned to think more specifically about how a charity helps people, and if a charity's money is really being used well.
      9. It made me think about getting the best way to help others and benefit the world.
      10. It makes me want to do more for the world.

      Additionally, there were several students who indicated an interest in the following opportunities.
      - Taking a leadership role in a charity election next year (48 students)
      - Participating in a student group related to world poverty (57 students)
      - Participating in a student group related to improving the long-term future (54 students)
      - Making a donation to one of the charities in the election (50 students)

      Thank you for making a difference for your school community, as well as those directly benefiting from the charities on the ballot! Please let us know if there is anything else that would be useful in bringing closure to the event.

      All our best, Charity Elections team Giving What We Can"

    1. [Zettel feedback] Functor (Yeah, just that)

      reply to ctietze at https://forum.zettelkasten.de/discussion/2560/zettel-feedback-functor-yeah-just-that#latest

      Kudos on tackling the subject area, especially on your own. I know from experience it's not as straightforward as it could/should be. I'll refrain from monkeying with the perspective/framing you're coming from with overly dense specifics. As an abstract mathematician I'd break this up into smaller pieces, but for your programming perspective, I can appreciate why you don't.

      If you want to delve more deeply into the category theory space but without a graduate level understanding of multiple various areas of abstract mathematics, I'd recommend the following two books which come at the mathematics from a mathematician's viewpoint, but are reasonably easy/intuitive enough for a generalist or a non-mathematician coming at things from a programming perspective (particularly compared to most of the rest of what's on the market):

      • Ash, Robert B. A Primer of Abstract Mathematics. 1st ed. Classroom Resource Materials. Washington, D.C.: The Mathematical Association of America, 1998.
        • primarily chapter 1, but the rest of the book is a great primer/bridge to higher abstract math in general
      • Spivak, David I. Category Theory for the Sciences. MIT Press, 2014.

      You'll have to dig around a bit more for them (his website, Twitter threads, etc.), but John Carlos Baez is an excellent expositor of some basic pieces of category theory.

      For an interesting framing from a completely non-technical perspective/conceptualization, a friend of mine wrote this short article on category theorist Emily Riehl which may help those approaching the area for the first time: https://hub.jhu.edu/magazine/2021/winter/emily-riehl-category-theory/?ref=dalekeiger.net

      One of the things which makes Category Theory difficult for many is that to have multiple, practical/workable (homework or in-book) examples to toy with requires having a reasonably strong grasp of 3-4 or more other areas of mathematics at the graduate level. When reading category theory books, you need to develop the ability to (for example) focus on the algebra examples you might understand while skipping over the analysis, topology, or Lie groups examples you don't (yet) have the experience to plow through. Giving yourself explicit permission to skip the examples you have no clue about will help you get much further much faster.

      I haven't maintained it since, but here's a site where I aggregated some category theory resources back in 2015 for some related work I was doing at the time: https://cat.boffosocko.com/course-resources/ I was aiming for basic/beginner resources, but there are likely to be some highly technical ones interspersed as well.
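
      Since the Zettel in question comes at functors from a programming angle, a minimal sketch of the two functor laws may be useful; here Python's list plays the role of the functor, and the helper names (`fmap`, `compose`) are mine, not from the thread or either book:

      ```python
      def fmap(f, xs):
          """The list functor's action on morphisms: lift f to act on lists."""
          return [f(x) for x in xs]

      identity = lambda x: x
      compose = lambda f, g: (lambda x: f(g(x)))

      xs = [1, 2, 3]
      f = lambda x: x + 1
      g = lambda x: x * 2

      # Law 1: mapping the identity morphism changes nothing.
      assert fmap(identity, xs) == xs

      # Law 2: mapping a composition equals composing the mapped functions.
      assert fmap(compose(f, g), xs) == fmap(f, fmap(g, xs))
      ```

      The same two checks generalize to any container with a `map`-like operation, which is the intuition most programming treatments (and the books above) build on.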

    1. Violence is just a part of hip-hop culture, especially because it's something that rappers make songs about. What do you say to that assumption, this thinking that all these tragedies are just expected in this kind of music in this industry?

      violence being a part of hip-hop culture

    1. For example, the older model had a 1 GHz dual-core processor, while the new version has a 1.8 GHz quad-core chip. Since the company is tight-lipped about exactly which processors are powering these devices, it’s hard to make an apples-to-apples comparison, but it sure sounds like the new model should be a bit more responsive.Both models have just 1GB of RAM, but the original InkPad Color had just 16GB of built-in storage, while the new model has twice as much. That said, it seems like the company might have omitted one feature when building the new model: there’s no longer a microSD card reader.

      I have the original PocketBook InkPad color and I generally like it. Most of the improvements in V2 would be welcome (albeit well short of prompting me to upgrade), but dropping the microSD card slot is a bad move. The article suggests that this may have been done to increase the tablet's water resistance, but I will take the SD card over being able to submerge my e-reader in 2 meters of water.

    1. To this day, I can still sing “Just A Girl,” though I promise you that my singing abilities have not improved since then. However, the threshold of what I find embarrassing to do in front of other people has shifted.

      I like how they associate the fear they once had of public speaking with a good thought process. Speaking from personal experience, the one thing I can't do in front of other people is sing, because I know my singing is bad. But public speaking has always come easier, probably because it's easier to seem like a confident person than a decent singer.

    1. Brainstorming: Cover for the Second Edition

      Somehow I've always been disappointed with the two dimensional aspects of the pseudo-diagrams on prior books and articles in the space. If you go with something conceptual, perhaps try to capture a multidimensional systems/network feel? It's difficult to capture the ideas of serendipity and combinatorial complexity at play, but I'd love to see those somehow as the "sexier" ideas over the drab ideas people have when they think of their mundane conceptualizations of "just" notes.

      Another idea may be to not go in the direction of the dot/line network map or "electronics circuit board route", but go back to the older ideas of clockworks, pneumatics, and steampunk...

      By way of analogy, there's something sort of fun and suggestive about a person operating a Jacquard Loom to take threads (ideas) and fashioning something beautiful (https://photos.com/featured/jacquard-loom-with-swags-of-punched-print-collector.html) or maybe think, "How would John Underkoffler imagine such a machine?"

      Now that I'm thinking about it I want a bookwheel (https://en.wikipedia.org/wiki/Bookwheel) next to my zettelkasten wheel!

    1. A good way to avoid getting distracted by “new technology sparkles” when coming across new tools is to consider the end product that fulfills your instructional objective—a technique commonly referred to as “Backward Design” or “Understanding By Design”

      Definitely agree with this! There is a curriculum development class that just talked about the importance of designing a lesson plan backwards. Having an end goal in mind is how we should structure lesson plans, and it's interesting to see how this applies to technology as well. I have definitely gotten distracted by the "sparkle" before and really only used a tool for its own sake, but if we design backwards, we can mitigate this problem.

    2. As educators, our goal is to teach students, not just by transferring knowledge to them, but by creating meaningful learning experiences that support their knowledge and skill development.

      This is such an important thing to think about. As teachers it's not just about giving a lesson and having the students listen. It's about being creative and coming up with lessons that engage the students. As a teacher you also have to think about where your students are with what you're teaching them. These are both very important things to keep in mind when you teach. Lessons that aren't taught in a way that gets students' attention won't be effective in getting your students to really learn what is being taught to them.

    1. there’s still very little transparency for outside researchers to see what’s spreading, or how

      This feels like a major takeaway for the article. We can all hypothesize and discuss as much as we want, but who of us has access to what's really going on? We only know what these platforms disclose, or what is leaked, which certainly is just the tip of the iceberg. It's hard to collectively fix a problem that impacts us all when the cause is kept secret.

    1. See in particular: The perils of ‘sharenting’: The parents who share too much

      I see so much of this online and it really concerns me. I feel like children are too young to consent to having information about them online. Parents might think something they are posting online is cute but there is likely not input from the child whether or not they want that shared. This is more recent, so it makes me wonder what the consequences -if any- could be later down the road. I've seen parents post their kids having a meltdown and treating it like it's funny, which just seems like it would create distrust from the child in the future.

    1. “Compost isn’t ideal because it’s still waste, but it’s a better form of waste than the garbage,”

      This direct quote from Hockley-Harrison shows how not all sustainable options are equal, composting being a great example since it is inherently still just waste. She continues on with "Doing a little research ...at the very least, your cupcakes aren’t releasing greenhouse gases from the landfill," solidifying the fact that composting, while still waste, does better than just tossing it.

    1. Implementations MAY also maintain a set of garbage collected block references.

      I'd imagine it's a MUST. Otherwise a replica may receive content it just removed and happily keep on storing it. Keeping such a 'blacklist' is an elegant solution. Such lists are ever-growing, however, and perhaps could be trimmed in some way, e.g., after everybody who's been holding the content has signed that it's been removed. Although even then nothing stops somebody from uploading the newly deleted content.

      I guess another solution would be not to delete somebody's content but to depersonalize that somebody. So content stays intact, signed by that somebody, but any personal information of that somebody gets removed, leaving only their public key. That would require personal information to be stored in one mutable place that is not content-addressed.
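
      A minimal sketch of the blacklist/tombstone idea in Python (the class and names are my own illustration, not from the spec): a replica records the references of garbage-collected blocks so that re-received content is refused rather than silently re-stored.

      ```python
      class Replica:
          """Content-addressed store that remembers what it has deleted."""

          def __init__(self):
              self.blocks = {}        # ref -> content
              self.tombstones = set() # refs of garbage-collected blocks

          def put(self, ref, content):
              # Refuse content we've already removed, so replication
              # can't resurrect deleted blocks.
              if ref in self.tombstones:
                  return False
              self.blocks[ref] = content
              return True

          def delete(self, ref):
              self.blocks.pop(ref, None)
              self.tombstones.add(ref)

      r = Replica()
      r.put("abc", b"data")
      r.delete("abc")
      assert r.put("abc", b"data") is False  # re-replication is refused
      assert "abc" not in r.blocks
      ```

      The ever-growing `tombstones` set is exactly the trimming problem noted above: nothing here lets a reference ever leave the set safely.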

    1. This article emphasises how this event affected other people. It mentioned many of the children who suffered and their statements, and a previous school shooting survivor and their perspective on the event. It focuses on guns, not explicitly saying that guns are bad and the cause of all this, but they sure are making us feel bad about it.

      I mean, the last few quotes are just of children being scared and sad. "I don't want to be an only child", "son has been left traumatized". It's true, yes, being in a school shooting is definitely traumatizing, especially if it affects you directly, but it's got a very guilt-trippy vibe.

    1. Hale came to the school on Monday with two AR-style weapons and a handgun, Drake said.

      Big, big jump in descriptions? Idk, emotions? It goes from light and floaty, "she was loved and appreciated", then snaps to "Hale came to school with two AR weapons", boom, big emotion, big contrast.

      It's done on purpose; it's an attention grabber, same as when the hero turns out to be a villain. Obviously we already know what Hale did, but after six quotes (paragraphs) of Campbell describing Hale as a child and a student, someone who was loved, just a little girl, and that little girl brought 2 big guns to that school and murdered 6 people. It's like the climax of the story.

    2. It's very easy to think that the reason this shooting happened was because something happened in the elementary school while Hale was attending, that made them do this, and that the reason was because of them being trans. But that's not necessarily the case.

      Something did probably happen to them, but they didn't target any certain person during the shooting, so it is probably directed at the place and not the people there. We see that Hale shot the door open, so obviously it's an emotionally fueled action. One of the quotes somewhere idk I don't remember, but one of the quotes from a teacher said that they just shot anyone in front of them, regardless of who they were, child or adult.

    1. I’ve been blessed. While I had a very positive reaction from my priests, I know others who have experienced the complete opposite. They were told that they are sinners, evil or that they’re not Catholic. One of my best friends was even physically carried out of church during Mass after being refused Communion.

      This statement just tells us how difficult it is to change the church and how difficult it is to still be accepted into the church because of their sexuality. It's cool to see that they are finally being accepted into the church and how the church is changing to accommodate for things they haven't accepted in the past.

    1. The result is a different kind of documentary storytelling that is utterly participatory and unexpected, stretching beyond tropes and conventions; forgoing observation in favor of play and experimentation. And, as the filmmakers point out, it means we really get to see these characters for who they are. “There are many documentaries that use the characters to say something about the world,” Van Hemelryck says, painting a hypothetical picture: “a documentary about a war in this or that country; they follow a character, and somehow we don't feel like we really met the character. We don’t know much about them, but following them, we understand a reflection about the war or a specific theme. “Here we wanted to make the contrary, we wanted to really get to know them,” he explains. “In the beginning, it’s just a game—it’s imagination. But more and more, you get into the feeling that you’re connected to the girls, and you don’t care what’s true, what’s not true, what happened to Alis, what’s made up. You connect with the girls very emotionally.

      The directors explain what Alis is based on and why a participatory methodology was important from the very start of the film

    1. Imagine you’re chilling in your little log cabin in the woods all alone. As you start with the 2nd book of the week on a December evening, you hear some heavy foot-steps nearby. You run to the window to see what it was. Through the window, you see a large and seemingly furry silhouette fade into the dark woods just beyond the front yard. The information you received from your environment screams of a bigfoot encounter, but your rational mind tells you that it is far more likely that it’s just an overly enthusiastic hiker passing by

      imaginative start pulls you into the story

  4. learn-us-east-1-prod-fleet02-xythos.content.blackboardcdn.com learn-us-east-1-prod-fleet02-xythos.content.blackboardcdn.com
    1. The industrial complex – the fact that people who are just trying to feed their families are in detention. It’s jail. And people are making money off that. Corporations are benefiting. It’s horrible. [As for DACA], what about our families? People who couldn’t finish high school because they are disabled. [We need to highlight] the complexities about immigrants in the country. [You can’t] just throw people under the bus if you’re OK [because you have DACA]. The idea that gay marriage is the only thing we need to worry about. How are trans people

      This reminds me of Sylvia Rivera and what I learned about her in another class. She was an activist who saw the bigger picture: it wasn't just about being accepted. It was about the people who were struggling the most, the ones that need help. She wanted those voices to get heard. And I feel like Julio Salgado is trying the same: he doesn't want to silence anybody but rather to give them a voice and a space to share their stories.

    1. The government can only listen in if they obtain a special type of search warrant that shows that they have exhausted all other possible ways to obtain the information they need.

      This is something I didn't know, but really highlights how dire the internet privacy situation is. In our increasingly connected social climate via the internet, privacy should be something we value and protect. It's very disappointing that there are no procedures in place that allow people to safely use modern technology as they were able to in the past. Also, I feel that internet browsing data is oftentimes more personal than just phone calls and have an even greater need for privacy protection.

    2. Privacy policies need to be more transparent. For instance, you can try to read Comcast’s privacy policy to figure out if they share your browsing history with ad brokers, but the problem is that even I can’t figure that out from the privacy policy.

      I have a hard time understanding the privacy policies myself, and I know a lot of other people do too – so it's common for people to skip it and just accept it. Companies do this so their users remain unaware of what information is taken from them.

    3. Maine passed a law that protects users’ personal information. In a broadband context, this includes what websites you browse, what apps you have installed on your phone and how you use them, and your GPS location. The Maine law says that your ISP can use that to implement your internet service, but if the ISP wants to use this information for a reason other than providing the service, then they need to give you a choice. The default is that the ISP can’t use it unless you tell them that they can, so it’s an opt-in choice.

      This was interesting to read as I did not know Maine passed a law that protects users' personal information, but I think it is a great thing. More states should pass laws that protect our personal information because there is so much information that can be collected about someone through their phone. Pretty much every app you have installed on your phone has access to all of your data and is using it, which is crazy. Up until we talked about it in this class I never realized just how much information on my phone is being tracked by companies and how many people have ahold of all my data, and it is scary to think about. Creating laws that help protect our personal information is great; that way you have the choice of giving out your data or not.

    4. Maine passed a law that protects users’ personal information. In a broadband context, this includes what websites you browse, what apps you have installed on your phone and how you use them, and your GPS location. The Maine law says that your ISP can use that to implement your internet service, but if the ISP wants to use this information for a reason other than providing the service, then they need to give you a choice. The default is that the ISP can’t use it unless you tell them that they can, so it’s an opt-in choice.

      I think it's great that Maine passed this law - definitely did not expect Maine to be the one that did it but okay. I feel like there is so much information that someone could collect about you and you don't realize to what extent it goes to. Companies should not have access to this information without asking you first because that is an invasion of privacy, especially from something you pay for. It is an opt-in choice but a way that a company can go around this is to just have a really long privacy policy or something that people won't read

    5. Your web browsing history and your app usage history should qualify as sensitive personal information.

      This is so shocking to me. How is this information not sensitive or private? How does congress not recognize that? Also, even if they recognize it, how will it be enforced? It's so frustrating because it just feels more impossible to regulate the more I learn about it

    1. The energy flows on this planet, and humanity’s current technological expertise, are together such that it’s physically possible for us to construct a worldwide civilization—meaning a political order—that provides adequate food, water, shelter, clothing, education, and health care for all eight billion humans, while also protecting the livelihood of all the remaining mammals, birds, reptiles, insects, plants, and other life-forms that we share and co-create this biosphere with. Obviously there are complications, but these are just complications.

      I think this suggests that it is physically possible for humanity to construct a worldwide civilization that is capable of providing basic needs for all human beings while also protecting the environment and other forms of life on the planet. The author emphasizes the importance of recognizing the interconnectivity of all life forms in the biosphere and suggests that it is essential for a successful civilization to prioritize the protection and livelihood of all beings.

    2. Yeah yeah. Here we see the shift from cruel optimism to stupid pessimism, or call it fashionable pessimism, or simply cynicism.

      Rightfully so... these claims are not coming out of thin air history has proven that being realistic is important. I wouldn't call it pessimism because there is actual proof that life is not a thing to be optimistic about overall. Optimism is just used as a way to get through life it's not reality.

    3. Or maybe we should just give up entirely on optimism or pessimism—we have to do this work no matter how we feel about it.

      So in this case, the best way to create a utopia is to accept that utopias must start somewhere, it's not productive to debate endlessly

    4. not just Things are bad, but also We are responsible for making them bad. And it’s hard not to notice that we’re not doing enough to make things better, so things will get worse too

      This is very interesting and true. I never looked at dystopian projects like this, I always just thought of it as something on the extreme side of the spectrum.

    5. since we can create a sustainable civilization, we should

      yes but it really isn't that easy. i understand where they are coming from. is it plausible? yes, absolutely. however it's extremely difficult when you look at how that would need a structural overhaul of society, and those with the funds and power to make this happen choose not to every day. we are left with the ability to make personal decisions that do help but are insufficient. i just think it's important to acknowledge that rather than saying we should just change civilization now

    1. spam, phishing, malware, and hacking, as well as looking at the tracking capabilities of different apps and websites.

      I think it's really good that people are being taught about this in schools and not just having to either learn the hard way or hope their friends/family would inform them. I especially think it's great that they will cover how you are tracked on apps and websites.

    1. It’s kind of alarming how many websites ask you if you want to allow data tracking across other websites. It may seem surprising that you are being shown so much web content (ads, search results, etc.) that is related to something you were just thinking about or searching, but with the amount of data you provide even subconsciously, that is really not surprising at all.

    1. Rebinding a book for more margin space? I was thinking about cutting a book's spine and gluing the pages against bigger notebooks to get more margin space to write in with a heat erasable pen. Maybe I could combine this with antinet Zettelkasten cards somehow. That way, I can bring a chapter with me at a time more portably, and erase all the notes when I'm done by putting it in the oven. Thing is, I thought I'd do a search to find how someone else did this, but there's nothing on YouTube. Did I miss something?

      reply to u/After-Cell at https://www.reddit.com/r/antinet/comments/12noly2/rebinding_a_book_for_more_margin_space/

      The historical practice of "interleaved books" was more popular in a bygone era. If you search you can find publishers that still make bibles this way, but it's relatively rare now.

      Given the popularity and ease of e-books and print on demand, you could relatively easily and cheaply get an e-book and reformat it at your local print shop to either print with larger margins or to add blank sheets every other page to have more room for writing your notes. For some classic texts (usually out of copyright) you can "margin shop" for publishers that leave more marginal space or find larger folio editions (The Folio Society, as an example) for your scribbles if you like.

      Writing your notes on index cards with page references is quick and simple. These also make good temporary bookmarks. Other related ideas here: https://hypothes.is/users/chrisaldrich?q=tag:%22interleaved%20books%22


      Have I just coined "margin shopping"?

    1. The writer paints a solid picture of the accident that just played out. Also of note is how he adds "It’s all just part of the experience," showing a general willingness to put up with inconveniences for the sake of the experience.

      "The pop splashes out of the cup and all over my shirt, leaving me drenched."

      "... looks at my shirt, tells me how sorry he is, and then I just shake my head and keep walking. 'It’s all just part of the experience,' I tell myself."

    1. Artificial intelligence may be just as strongly interconnected as natural intelligence. The evidence so far certainly points in that direction. But the hard takeoff scenario requires that there be a feature of the AI algorithm that can be repeatedly optimized to make the AI better at self-improvement.

      This is another "may" argument—reality might be shaped in a way which averts doom; we don't know. It's modestly more persuasive than the others, but I still have plenty of probability mass on recursive self-improvement being possible. RLAIF is certainly very "promising" in that direction.

    2. I don't buy this argument at all. Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent.

      Bostrom's specific claim is that "more or less any level of intelligence could in principle be combined with more or less any final goal". It's not a claim about the complexity of the motivations—just that their goals may be very different from ours.

      Yes, it's possible that the maximizer would want to write poetry. It's also possible that it would want to make a number in its memory be as large as possible. You don't know, and you don't have a good way to reason a priori about which is more likely, so you should treat both as live possibilities.

    3. With no way to define intelligence (except just pointing to ourselves), we don't even know if it's a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff.

      It's true. It's possible that intelligence can't be maximized, or that it has some low fundamental limit. We don't know. Say you view this "maybe" as a 50-50. That's certainly not a persuasive argument to stop worrying about a potential catastrophe.

    1. Readers do not normally wish books longer, but a couple of discussions are missing from “The Cult of Creativity.” One is about art itself. The early Cold War was a dramatic period in cultural history, and claims about originality and creativity in the arts were continually being debated. Among the complaints about Pop art, when it bounded onto the scene, in 1962, was that the painters were just copying comic books and product labels, not creating. It’s possible that as commercial culture became more invested in the traditional attributes of fine art, fine art became less so.

      I wonder if there's a good book to pair with this one, then

    1. Just so, for no reason. Beyond and besides that there was nothing.

      CONNECT:

      Diane Nguyen, Bojack Horseman. "Because if I don't [write about my trauma], that means that all the damage I got isn't good damage, it's just damage. I have gotten nothing out of it, and all those years I was miserable was for nothing."

    2. But the man who hadexperienced that pleasure was no more: it was as if the memory wasabout someone else

      When I was thinking the other day, I was thinking about how we always think of our childhood as disconnected from our adult selves. Yet, it truly was the same person; much of what I liked back then I like now. Just some of it has been repressed or altered, but it's still me.

  5. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. There is a widespread belief that Asian-American children are the "perfect" students, that they will do well regardless of the academic setting in which they are placed

      This idea is something that has caused me to devalue my own accomplishments in a way. I never really saw myself as being very smart as it always just seemed like something that was expected of me. For my close Asian friends who were not doing as well in my classes, I've heard of how even marginally lower scores or grades can affect them tremendously, and it's overall a harmful idea to perpetuate.

  6. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. Another reported, "Almost every day on Call of Duty: Black Ops [a video game involving other online players] I see Confederate flags, swastikas and black people hang-ing from trees in emblems and they say racist things about me and my teammates." Another game-related incident was this one: "Me and my friends were playing Xbox and some kid joined the Xbox Live party we were in and made a lot of racist jokes I found offensive."27

      The racist and offensive imagery in in-game emblems was something that I saw quite a lot in Call of Duty lobbies when I was younger; it was quite a common occurrence. There seems to have been a toxic subculture in gaming revolving around edgy and dark humor that often is just an excuse to spew racist statements. Even if people believe that they are doing it "just for a bit" or "as a joke", that does not nullify its impact on others. While not as prevalent today, at least from my experience, I know it still exists and it's honestly sad.

    1. To create a social group and have it be sustainable, we depend on stable patterns, habits, and norms to create the reality of the grouping. In a diverse community, there are many subsets of patterns, habits, and norms which go into creating the overall social reality. Part of how people manage their social reality is by enforcing the patterns, habits, and norms which identify us; another way we do this is by enforcing, or policing, which subsets of patterns, habits, and norms get to be recognized as valid parts of the broader social reality. Both of these tactics can be done in appropriate, just, and responsible ways, or in highly unjust ways.

      It's very true that one way to maintain a healthy and appropriate online community is to enforce and police people's social patterns. But more importantly, everyone should learn how to manage their negative emotions and be responsible for what they say online.

    1. Practical jokes / pranks

      In many cases on the internet, you will see people pull the "It's just a joke" card as a getaway from being held accountable for something that may be very offensive or vulgar, especially jokes with no punchline. In my opinion, the word "joke" has lost its meaning, because people only use it to undermine accountability.

    1. "It's a terrible way for us to treat immigrants. The state of Texas has done just about everything wrong that I can think of," Wolff, the county judge, said. "We dehumanize them. We make people think they are something less than us, and they are not less than us."

      This quote shocked me only because it is being said by the county judge. I do not personally know this person or his beliefs, but from what I have experienced, those in Texas, especially higher officials, tend to lean more towards voting Republican and holding those viewpoints.

    1. It follows that any struggle against the abuse of language is a sentimental archaism, like preferring candles to electric light or hansom cabs to aeroplanes.

      This is an interesting point. Is the English language getting worse? When I read something written a long time ago I find it hard to understand, because it's less familiar to me. So is it getting worse or is it just changing? The problem is that anyone would say their own language is just fine because it's what they're most comfortable with, so it's hard to compare.

  7. sakai.claremont.edu sakai.claremont.edu
    1. If you go to Rwanda or Burundi, the purity of social definition is striking: everyone you meet identifies as either Hutu or Tutsi; there are no hybrids, none is “Hutsi.”

      It's notable that none of the people of Rwanda or Burundi identified as a mix of the two groups, despite the fact that Hutu and Tutsi coexisted there, not just as neighbors but also personally and via intermarriage. There were no mixed identities; everyone was either Hutu or Tutsi.

    1. Friends

      Friends is a very nostalgic sitcom whose legacy lives on forever. It jumpstarted a lot of the cast's careers, but it was also brilliantly centered around friendship. Each episode had a storyline to follow, and we slowly learned about each individual character. Nothing felt forced and everything was done organically. It demonstrated how naturally talented Jennifer Aniston, Courteney Cox, David Schwimmer, and Matthew Perry were.

    2. Full House (

      Full House is iconic not just because it's one of the greatest sitcoms of the 90s. Kids were able to learn about life and how to navigate it when tragedy strikes. It also touched on a lot of issues that kids in different households face, like abuse and domestic violence. Aside from the phenomenal acting and music, this show emphasized the importance of family: why family sticks by each other through thick and thin, and how everyone deserves to be loved and treated with respect. Another thing I loved about this show was how it focused on conflict resolution. It will always be a classic and very dear to my heart.

    1. darius kazemi defines a bot ⧉ as 'a computer that attempts to talk to humans through technology that was designed for humans to talk to humans'. this definition sits well with me, when trying to identify just what is so creepy about accidentally talking on the phone to a robot without immediately realising. it's the uncanny valley effect of being unsure if something is human or not, manufactured or natural. just this week, louis vuitton stores have unveiled 'hyperrealistic' robot versions of yayoi kusama, painting their windows, in a move some have noted 'feels morbid' ⧉ (and many have described as 'creepy'). the rise of LLMs like GPT3 hits on this same kind of uncanny valley. they have become almost indistinguishable from humans, requiring us to imagine means of devising a 'reverse turing test' as described by maggie appleton in order to tell them apart.

      Language itself is the technology that was meant for humans to talk to humans. People complain about social media sites' bot populations. If sex spam bots can degrade the Tumblr experience and crypto spam bots can degrade the Twitter experience, will these new bots degrade the language experience?

    1. General comments:

      This study carefully delineates the role of magnesium in cell division versus cell elongation. The results are really important specifically for rod-shaped bacteria and also an important contribution to the broader field of understanding cell shape. Specifically, I love that they are distinguishing between labile and non-labile intracellular magnesium pools, as well as extracellular magnesium! These three pools are really challenging to separate but I commend them on engaging with this topic and using it to provide alternative explanations for their observations!

      A major contribution to prior findings on the effects of magnesium is the author’s ability to visualize the number of septa in the elongating cells in the absence of magnesium. This is novel information and I think the field will benefit from the microscopy data shown here.

      I completely agree with the authors that we need to be more careful when using rich media such as LB. It is particularly sad that we may be missing really interesting biology because of that! It’s worth moving away from such media or at least being more careful about batch to batch variability. Batch to batch variability is not as well appreciated in microbiology as it is for growing other cell types (for example, mammalian cells and insect cells).

      For me, the most exciting finding was that a large part of the cell length changes within the first 10min after adding magnesium. The authors do speculate in the discussion that this is likely happening because of biophysical or enzymatic effects, and I hope they explore this further in the future!

      I love how the paper reads like a novel! Congratulations on a very well-written paper!

      Kudos to the authors for providing many alternative explanations for their results. It demonstrates critical thinking and an open mind to finding the truth.

      Specific comments:

      Figure 2C → please include indication of statistical significance

      Figure 3C → please include indication of statistical significance

      Figure 6A → please include indication of statistical significance

      Figure 8B → please include indication of statistical significance

      Figure S1B → please include indication of statistical significance

      Figure S3B → please include indication of statistical significance

      For your overexpression experiments, do the overexpressed proteins have a tag? It would be helpful to have Western blot data showing that the particular proteins are actually being overexpressed. I think the phenotypes that you observe are very compelling so I don’t doubt the conclusions. Western blot data would just provide some additional confirmation that you are actually achieving overexpression of UppS, MraY, and BcrC.

      Questions:

      Based on your data, there are definitely differences in gene expression when you compare cells grown in media with and without magnesium. Because the majority of the cell length increase occurs in such a short time though (the first 10 min), I was wondering if you think that some or most of it is not due to gene expression? Do you have any hypotheses about what is most likely to be affected by magnesium? Do you think the membrane may be affected?

      Why do you think less magnesium activates this program of less division and more elongation? Additionally why is abundant magnesium activating a program of increased cell division and less elongation? Do you think there is some evolutionary advantage, especially considering how important magnesium is for ATP production?

      Related to this previous question, I also wonder if this magnesium-dependent phenotype would extend to other unicellular organisms, maybe protists or algae? That would be a really exciting direction to explore!

      Regarding the zinc and manganese experiments, why do you think they lead to additional phenotypes compared to magnesium? Do you have any hypotheses?

      Regarding your results that Lipid I availability may be a major problem for cell division in the absence of magnesium, do you think that is due to effects magnesium has on the enzymes directly, or do you think magnesium affects the substrate availability/conformation by coordinating the phosphate groups? Or something else, maybe membrane conformation?

    1. General comments: 1-This is a really important work because in this day and age, transforming DNA into an organism or a cell is an essential tool for any molecular biologist

      2-And we still don’t understand how to transform the vast majority of organisms on Earth!

      3-I applaud this effort for developing robust and accessible transformation tools for the amoeba Acanthamoeba castellanii. I believe this is an important organism to study but equally important are the general trends/approaches about what works to transform an organism

      4-accumulating more and more of this knowledge on transformation across different organisms is essential if we want to access the biology of many more organisms

      5-Also kudos for the detailed and meticulous transfection optimization! I really enjoyed your use of the N/P ratio and the combinatorial approach over a range of DNA and PEI concentrations. This is solid science!

      Specific comments: 1-Figure 2 → Is it possible to include cell counts, in addition to the RFU signal? This is not a major comment. It’s just that there are cell counts in your other figures so it might be good to include for this figure as well. No need to repeat this experiment if you don’t have cell counts though!

      2-Figure 4 → Would it be possible to include arrows or something to indicate which parts of the figure you would like the reader to focus on? It's great having all of the data included, but it may help your narrative if you point in the figure itself to certain key differences or features of the data. Or you might consider including a table to summarize all the data from the figure? Maybe the table could contain the standard deviation around the average for each treatment (or something to show the distribution of the signal with a single number)?

      3-Figure 6 → Would it be possible to include indication of statistical significance?

      Questions: 1-How long are plasmids maintained in the transformed cells?

      2-Is there a robust selection that could enable you to produce stably transformed lines?

      3-And related to Q2, is it possible to produce stable mutant lines (gene deletions or gene introductions), perhaps by transiently transforming CRISPR genes?

      4-Does Acanthamoeba castellanii easily undergo transfection in nature? Is there any evidence for that based on its genome?

      5-Related to Q4, are there known viruses that infect Acanthamoeba castellanii? Knowledge about these viruses may inform alternative methods of gene delivery and also serve as evidence for the transfection rate in nature.

      6-I was wondering if you have tried transfecting Acanthamoeba castellanii with one big plasmid containing two fluorescent genes? Does that work? If yes, is the gene expression worse or better compared to having the genes on two plasmids and transfecting the plasmids together?

    1. It’s easy to forget that tone isn’t just conveyed through your voice—it can also be conveyed by facial expressions, hand gestures, and body language, which “help guide facilitation of student learning,” says Samford University professor Lisa Gurley in a 2018 study.

      I am still trying to perfect my official teacher look to redirect my students to what they should be doing.

    1. What are the potential benefits of this example (e.g., it’s funny, in-group identifying)? And who would get the benefits?

      In my experience of seeing online trolls on different social media platforms, trolling can stir up arguments in the online community. However, just like the Banana Slicer reviews on Amazon, I think trolling like those parody reviews leads to more internet traffic on Amazon.

    1. Many teachers are not experts in every educational technology used for learning, so in what ways can professional learning for teachers align with the ever-evolving world of AI in education?

      I think that this should be a big part of teacher preparation, and that AI in the classroom and students' privacy when using technology should be a required subject for every school's professional development sessions. In another class, I learned about the school-to-prison pipeline, and how it's suspected that the government uses children of color's test scores to predict how many jail cells they need to build. While this is based on written test scores and not technology, if they began to track students' data from technology used in the classroom, that would be just one example of how students' data can be used. What if a platform like iMovie, where many students use voice bites, has an ambiguous statement in its privacy policy about the use of those voice bites?

    1. "Congress must not censor entire platforms and strip Americans of their constitutional right to freedom of speech and expression,"

      I think this also affects the way many people view the ban, as it's not simply a website where people can purchase goods or services. It's also not a website or service dedicated to one thing or to a problematic subject. It is a platform on which people upload their own content and are able to have that freedom of expression. As with any social media platform, there is still a policy of users being able to report problematic content and have accounts banned for violating terms of service. I think that just because the company is owned by another country doesn't necessarily make it a hazard or justify removal. At the end of the day, it is a social media platform, which we have a right to access just like any other platform, with the freedom of speech and expression.

    2. The primary concern raised by officials banning TikTok centers on data security, especially fears that user information could end up in the hands of the Chinese government.

      I can see how this argument can be both hypocritical but also justified. On one hand, it seems a bit hypocritical that the U.S. government is concerned with the Chinese government data mining on user info. However, the government doesn't seem to have a problem with data mining when the company that is doing it is domestic, or U.S. based. It's almost surprising that they won't lay down the ban hammer on social media platforms like Facebook, Instagram, or Twitter when they have been found guilty of similar practices. Ultimately, I think it becomes an argument of "you're not allowed to do that, only we can do that".

      Plus, what does that say about people in other countries who also use these platforms? Are they victims of the U.S. accessing their data too? If so, do those respective countries have just as much of a right to ban those social media platforms as well?

      On the other hand, I can see how it could become a potential threat, as U.S. relations with China haven't exactly been the best as of late.

    1. Hardware Brush Up

      Skip over this section if you've read my articles before!

      The golden rings are known as the quantum computer skeleton. They represent different segmentations of cold that go all the way down to close to absolute 0 (0 K), ~15 mK. The side cables are called the nerves; they carry the photons through the quantum computer for signal processing. The inner pole is the heart, where central cooling of the physical qubits occurs. The top tube is the shells, which eliminate thermal fluctuations. The bottom is the brain, or the QPU: a copper-gold-silicon disk that does all of the quantum computing, held inside a cryoperm shield that stops exposure to electromagnetic radiation.

      Processing Brush Up

      Credit: Google Images (GIF)

      This is a qubit, the fundamental unit of information for a quantum computer. Instead of existing in a classical value like a normal computer's bit (where it's 0 or 1), it is in a state of quantum coherence, where it's in superposition, meaning it's in both 0 and 1, or neither: an outcome which isn't even possible on a classical computer.

      Credit: Kurzgesagt (converted to GIF)

      However, quantum decoherence occurs whenever a qubit is exposed to any sort of disruption, including temperature changes, vibration, light, etc. This is the main problem with our current quantum hardware. That said, we deliberately induce decoherence to get an answer out of a quantum computer by introducing a bias, such as a controlled magnetic field that hits the qubit and causes it to collapse into a classical state.

      Aside from superposition, there are two other main quantum phenomena that are qubit characteristics: tunneling and entanglement. 🤯

      Entanglement

      Credit: Kurzgesagt (converted to GIF)

      Put simply, quantum entanglement is a property of qubits whereby, when they are placed in spatial proximity for a certain period of time, their quantum states become indistinguishable. This means that whenever one qubit is affected or changed, the other will be too, in some predictable way. As shown in the photos, when one qubit assumes a classical value, the other will too. This relationship between qubits is inseparable, and the bond remains no matter how far apart or how differently oriented the entangled qubits are.

      Tunneling

      Credit: Kurzgesagt (converted to GIF)

      Woah!!! What did that electron just do?!??! This is another quantum phenomenon called quantum tunneling, and it underlies the ability of quantum computers to solve problems that classical computers physically can't. This is because a classical electron can't move through a barrier whose potential energy is higher than its initial kinetic energy, so it will just bounce off the barrier and stop trying. Quantum tunneling, however, means that a free quantum particle can propagate directly through the barrier with no issues, essentially solving the problem.

      This is all really interesting information, but is it serving your article? A quick document search shows that you only ever touch on a lot of these terms in this section. So, is it necessary for people to know this information to understand your article? Why not "get into the fun stuff" right away? If there is information that's necessary for the reader to understand, you could always sum it up and provide a link before explaining why it's necessary to understand.

      NOTE: You could always link the articles you've posted about these topics at the end, increasing readership to your other work and keeping this article focused on solar powered quantum computers.

    1. As a reader, I’m constantly making a decision about whether to trust that writer or not—not just trust that they are telling the truth, which is the usual journalistic standard of trust, but trust that they are not objectifying the vulnerable people in their stories. As a writer, I’m constantly thinking about this, while I’m reporting and while I’m drafting at my desk, and I’m mercilessly second-guessing myself, because if I don’t, I might screw it up. And then it’s up to the reader to decide whether I’ve succeeded or not

      Reminiscent of a situation that Anderson Cooper recounted during his visit: seeing a family of corpses on the roadside in the aftermath of the Rwandan genocide. He noticed that the skin had peeled off one of their hands, and he started photographing that. Unbeknownst to him, a fellow photographer was taking images of him photographing that family, and he kept that picture as a reminder of what it looks like to go too far.

    1. Generally speaking, all the authorities exercising individual control function according to a double mode; that of binary division and branding

      It's scary how the people in power can control individuals just through a label. This shows just how ingrained a power dynamic is in modern society.

    1. (A) Phenotypes of 5-day-old atm1-1 plants with reduced root length compared to wild type on 0.5 X Murashige and Skoog (MS) medium without or with 15 mM Sucrose (Suc) at 45 μmol m-1 s-1 LD conditions. (B) Quantitative analysis of root growth in Col-0 and atm1-1 seedlings. n=10; ***P<0.001, two-way ANOVA and Tukey’s multiple comparison test. (

      Is this just redundant w/ Fig 2 BC? I see that the values are a lot higher in Fig 2 BC and those were 6-day old, instead of the 5-day old plants shown here. Sorry if it's an ignorant question.

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



      Reply to the reviewers

      We thank the reviewers for their constructive feedback on our manuscript. They did a very comprehensive and helpful job of laying out some key areas that could be improved. We were heartened by the fact that there was a fair amount of overlap between the two reviewers, and that comments were largely addressable without further experimentation.

      Below, we provide a summary of how we have attempted to address the comments and concerns from both reviewers. We also provide the rationale and action items for our responses. Overlapping comments from both reviewers have been consolidated and responded to together.

      Comment 1 (Reviewer #1, Minor Comment 1 & Reviewer #2, Significance)

      Both reviewers raised concerns about our choice to focus on essential genes in our CRISPRi screen, which could potentially underestimate the role of non-essential factors contributing to Tae1 sensitivity or resistance.

      Rationale: We agree with the reviewers that including non-essential genes could provide additional insights into the roles of non-essential factors in Tae1 sensitivity and resistance. We believe our focus on essential genes contributes a unique perspective to the field, as there already exists a body of work that interrogates non-essential genes in this space. Here are some citations that represent this body. We will highlight these better in the manuscript.

      Lin, H.-H.; Yu, M.; Sriramoju, M. K.; Hsu, S.-T. D.; Liu, C.-T.; Lai, E.-M. A High-Throughput Interbacterial Competition Screen Identifies ClpAP in Enhancing Recipient Susceptibility to Type VI Secretion System-Mediated Attack by Agrobacterium Tumefaciens. Front Microbiol 2020, 10, 3077. https://doi.org/10.3389/fmicb.2019.03077.

      Hersch, S. J.; Sejuty, R. T.; Manera, K.; Dong, T. G. High Throughput Identification of Genes Conferring Resistance or Sensitivity to Toxic Effectors Delivered by the Type VI Secretion System; preprint; Microbiology, 2021. https://doi.org/10.1101/2021.10.06.463450.

      Additionally, our screen was experimentally optimized for essential genes using our approach. The knockdown strategy is useful specifically for essential genes because E.coli is phenotypically very sensitive to essential gene perturbations (see more here: https://doi.org/10.1128/mBio.02561-21). While it would have been ideal to include non-essential genes too, doing so would require a different additional optimization that we believe would have diluted our bandwidth for this study. We do thank the reviewers for recognizing how much effort went into this!

      We do acknowledge this is a limitation and want to make sure the readership is aware of that. Ideally, one could do more rigorous side-by-side comparisons between studies if the approaches, set-up, and assays are the same. Unfortunately, due to differences in experimental set-up, we could not directly compare with the non-essential screens. We hope others will pick up where we left off. Here are some action items we can take to increase the odds of that:

      In the Introduction, we will mention other studies and highlight the need to investigate essential genes side-by-side with non-essential. (Lines 64-7) In the Discussion, we will add a sentence that acknowledges the importance of exploring non-essential genes for a more comprehensive understanding of Tae1 sensitivity and resistance. (Lines 484-5)

      Comment 2 (Reviewer #1, Minor Comment 5 & Reviewer #2, Major Comment)

      Both reviewers mentioned that the dormancy state in msbA-KD cells is not well characterized and its relationship with Tae1 resistance is not convincingly shown.

      Rationale: We agree that our manuscript does not clearly pin down whether Tae1 resistance is linked to a true dormancy state. There are some intriguing similarities between what we observe and what is classically known as “dormancy” or “persistence”, which have specific definitions. Although we don’t yet have a concrete reason to think it’s NOT those states, we also don’t have sufficient data to point to it clearly being the same at a mechanistic or cellular level. This is merely a hypothesis that our work suggests. We would love to see others follow up on this, as we suspect there are overlaps and potentially additional cellular states that have yet to be clearly defined in this field of bacterial physiology.

      Here is how we propose to address this concern:

      We simplified our language to be more descriptive and less loaded in terms of nomenclature around dormancy or persistence. Namely, we are referring to the cells in a more descriptive way with “slowed growth.” This allows us to clearly describe what we observe without attempting to ascribe mechanism or anything beyond that. It doesn’t fundamentally change the overarching interpretation of our study. (Lines 444, 490,497-9) In the Discussion, we will add text emphasizing the need for follow-up studies to fully address whether there is indeed a connection between Tae1 resistance and slowed growth. (Lines 491-3)

      Comment 3 (Reviewer #2, Major Comment)

      The reviewer asks if the degradation of the sugar backbone is also required for lysis or if it is just the crosslinking step that is important.

      Rationale: This is an astute point. We acknowledge that the degradation of the sugar backbone may play a role in lysis, and it’s predicted that this may be why the Pae H1-T6SS delivers a second PG-degrading toxin (Tge1), a muramidase that targets the sugar backbone. The most parsimonious conclusion from past studies by us and others is that Tae1 is critical for lysis, but not sufficient in the absence of any backbone-targeting enzyme. Indeed, many T6SS-encoding bacterial species also encode >1 type of PG-degrading enzyme, which may speak precisely to the reviewer’s point. However, it should also be noted that there may be endogenous enzymes with activities that can be leveraged alongside these toxins for the same effect.

      Action items:

      In the Discussion, we will add a sentence addressing the potential role of sugar backbone degradation in the lysis process and the need for future research on this topic. (Lines 524-6)

      Comment 4 (Reviewer #1, Minor Comment 2)

      The reviewer asks why lptC-KD leads to sensitivity to Tae1, while msbA-KD leads to resistance, considering both genes are implicated in LPS export.

      Rationale: We appreciate the reviewer's careful attention to the underlying biology. They are absolutely correct in pointing this difference out. Our interpretation is that the different phenotypes may indicate that although the LPS biosynthesis superpathway intersects with PG synthesis, lptC and msbA may intersect with PG synthesis in distinct ways. We can address this concern through the following:

      We will add a sentence in the Discussion section providing our interpretation of the different phenotypes observed for lptC-KD and msbA-KD. (Lines 508-13)

      Comment 5 (Reviewer #1, Minor Comment 4)

      The reviewer notes that the contribution of msbA to Tae1 resistance appears minor based on the results in Figure 3d.

      Rationale: There are actually two aspects to this concern, which we note below. We found it difficult to fully capture it in the manuscript, but our thoughts are as follows.

      (1) Technical viewpoint:

      Bacterial competition experiments are inherently noisy. The quantitative read-out is easily impacted by a number of parameters, including cellular density, input ratio between competitor cell types, growth stage, and possibly other environmental factors that are difficult to predict. In general, our view is that we should avoid over-indexing on the degree of the phenotype, focusing more on the direction of the phenotype (loss of statistically-significant Tae1 sensitivity) and the fact that it is reproducible in our hands. Furthermore, our argument is bolstered by clear validation of the loss of Tae1 sensitivity through orthogonal lysis assays (Fig. 4a-c).

      (2) Biological viewpoint

      It is challenging to isolate the specific interaction between Tae1 and individual genetic determinants, as we think it’s a complex system with multiple factors simultaneously at play. It is crucial to acknowledge that the unique contribution of Tae1 is only a part of the T6SS. There may be other compensatory actions that influence the outcomes observed, such as upregulation of non-Tae1 toxins, regulation of system activation/firing, timing and location of T6S injections, etc. We think these are exciting possibilities and that more groups should delve into the context-dependent dynamics of the system. Although outside the scope of our manuscript, we would be open to suggestions for how we can further emphasize this point.

      Comment 6 (Reviewer #2, Minor Comment)

      The reviewer recommends that we discuss whether our findings are specific to Tae1 or if they can be extrapolated to other toxins.

      Rationale: We understand the reviewer's interest in understanding the broader implications of our findings. Although our study focuses specifically on Tae1, we believe that our findings may provide insights into the mechanisms of sensitivity and resistance to other toxins that target the cell wall. However, experimentally investigating this would fall outside the scope of our current manuscript.

      Additional Minor Revisions

      Table 1: "I would label MsbA and LptC as 'LPS transport' and not 'LPS synthesis'." (Reviewer 1)

      Rationale: We agree that using "LPS transport" to describe the gene functions for lptC and msbA is more specific to their functions. Table 1 was updated to change the "pathway/process" categorizations for lptC and msbA from "LPS synthesis" to "LPS transport". In line with this comment, we also changed the pathway/process categorization for murJ (Lipid II flippase) to "PG transport".

      Figure 3 legend: "...deformed membranes .........are demarcated in (g) and (h)" (Reviewer 1)

      We thank the reviewer for pointing out the missing text in this figure legend. We corrected the error by adding the missing text back into the Figure 3 legend.

      Lines 339-341: Supp. Fig. 9 should be Supp. Fig. 8. (Reviewer 1)

      The referenced Supp. Fig. was corrected.

      "Second, (L422-425) the authors conclude that their data demonstrate a 'reactive crosstalk between LPS and PG synthesis'. I disagree. There is no information in the paper that this is the case. The authors can only suggest that crosstalk may occur." (Reviewer 2)

      We agree. Lines 421-2: we replaced "demonstrate" with "suggest" to soften the argument.

    1. We have since improved the worksheet using the students’ feedback by further clarifying some questions and updating instructions for labelling diagrams

      I think the diagram could be a little clearer and maybe it would help the students understand better. For example, making the DNA and RNA strands different colors would make them easier to quickly distinguish. Same for the Cas9 protein vs. the DNA-modifying enzyme. You could also put a box around the target base location just to make it very clear where that empty label box is pointing.

      I would make sure that however you depict the system in the worksheet, it's identical to how it's shown in the slides.

    1. const EACH$ = ((x) => (this.each(x))); const SAFE$ = ((x) => (this.escape(x))); const HTML$ = ((x) => (x));

      In my port of judell's slideshow tool, I made these built-ins. (They're bindings that are created in the ContentStereotype implementation.)

      In that app, the stereotype body is just a return statement. Perhaps the ContentStereotype implementation should introspect on the source parameter and check whether it's an expression or a statement sequence. The rule could be that iff the first character is an open paren, it's an expression, so there is no need for an explicit return, nor the escaped backtick...

      This still gives the flexibility of introducing other bindings, like the ones for _CSSH_ and _CSSI_ here, but doesn't penalize people who don't need it.
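      A minimal sketch of that introspection rule (hypothetical code: `compileStereotype` is my name, not anything in judell's tool, and the `Function` constructor stands in for however the ContentStereotype implementation actually builds its template function):

      ```javascript
      // If the stereotype source starts with an open paren, treat it as a
      // single expression and wrap it in an implicit return; otherwise
      // treat it as a statement sequence that supplies its own return.
      function compileStereotype(source) {
        const trimmed = source.trim();
        const body = trimmed.startsWith('(')
          ? `return ${trimmed};` // expression form: implicit return
          : trimmed;             // statement form: used as-is
        return new Function('x', body);
      }
      ```

      Under this rule, `compileStereotype('(x * 2)')` and `compileStereotype('return x * 2;')` produce equivalent functions, so the common expression-only case stays terse while statement bodies remain possible.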

    Annotators

    1. His polemeic rhetoric rivals even my own, and the demographics he represents – to the exclusion of all others – is becoming a minority within the free software movement. We need more leaders of color, women, LGBTQ representation, and others besides. The present leadership, particularly from RMS, creates an exclusionary environment in a place where inclusion and representation are important for the success of the movement.

      I'm not a vanguard for the FSF per se, but when I think about the community norms and attitudes that are most exclusionary and turn people away, it's the sort of stuff that Drew and his fans are most often associated with. Stallman, at least, views Emacs as something that "secretaries" can be taught. Drew's circle tends to come across as having superiority complexes and holding strong opinions about computing that stop them just short of calling you a little bitch for not being as hardcore as they are...

    1. Reviewer #1 (Public Review):

      The authors evaluate a number of stochastic algorithms for the generation of wiring diagrams between neurons by comparing their results to tentative connectivity measured in cell cultures derived from embryonic rodent cortices. They find the best match for algorithms that include a term of homophily, i.e. a preference for connections between pairs that connect to an overlapping set of neurons. The trend becomes stronger the older the culture is (more days in vitro).

      From there, they branch off to a set of related results: First, that connectivity states reached by the optimal algorithm along the way are similar to connectivity in younger cultures (fewer days in vitro). Second, that connectivity in a more densely packed network (higher plating density) differs only in terms of shorter-range connectivity and even higher clustering, while other topological parameters are conserved. Third, blocking inhibition results in more unstructured functional connectivity. Fourth, results can be replicated to some degree in cultures of human neurons, but it depends on the type of cell.

      The culturing and recording methods are strong and impressive. The connectivity derivation methods use established algorithms but come with one important caveat, in that they are purely based on correlation, which can lead to the addition of edges that are not structurally present. While this focus on "functional connectivity" is an established method, it is important to consider how it affects the main results. One main way in which functional connectivity is likely to differ from the structural one is the presence of edges between neurons sharing common innervation, as this is likely to synchronize their spiking. As they share innervation from the same set of neurons, this type of edge is placed in accordance with a homophilic principle. In other words, this is not merely an algorithmic inaccuracy, but a potential bias directly related to the main point of the manuscript. This does not invalidate the main point, which the authors clearly state to be about correlational, functional connectivity (and its use is established in the field). But it becomes relevant when, in conclusion, the functional connectivity is implicitly or explicitly equated with the structural one. Specifically, considering a long-range connection to be more costly implies that an actual, structural connection is present. Speculating that the algorithm reveals developmental principles of network formation implies that it is the actual axons and synapses forming and developing. The term "wiring" also implies structural rather than functional connectivity. One should carefully consider what the distinction means for the conclusions and interpretation of results.

      The main finding is that out of 13 tested algorithms to model the measured functional connectivity, one based on homophilic attachment works best, recreating with a simple principle the distributions of various topological parameters.<br /> First, I want to clear up a potential misunderstanding caused by the naming the authors chose for the four groups of generative algorithms: while the ones labelled "clustering" are based on the clustering coefficient, they do not necessarily lead to a large value of that measure, nor are they really based on the idea that connectivity is clustered. Instead, the "homophilic" ones are a form of maximizing the measure (but balanced by the distance term). To be clear, their naming is not wrong, nor does it need to be changed, but it can lead to misunderstandings that I wanted to clear up. Also, this means that the principle of "homophilic wiring" is a confirmation of previous findings that neuronal connectivity features increased values of the clustering coefficient. What is novel is the valuable finding that the principle also leads to matching other topological network parameters.

      The main finding is based on essentially fitting a network generation algorithm by minimizing an energy function. As such, we must consider the possibility of overfitting. Here the authors provide additional validation by using measures that were not considered in the fitting (Fig 5, to a lesser degree Fig 3e), increasing the strength of the results. Also, for a given generative algorithm, only 2 wiring parameters were optimized. However, with respect to this, I was left with the impression that a different set of them was optimized for every single in-vitro network (e.g. n=6 sets for the sparse PC networks; though this was not precisely explained, I base this on the presence of distributions of wiring parameters in Fig 6c). The results would be stronger if a single set could be found for a given type of cell culture, especially if we are supposed to consider the main finding to be a universal wiring principle. At least report and discuss their variability.

      Next, the strength of the finding depends on the strengths of the alternatives considered. Here, the authors selected a reasonably high number of twelve alternatives. The "degree" family places connections between nodes that are already highly connected, implementing a form of rich-club principle, which has been repeatedly found in brain networks. However, I do not understand the motivation for the "clustering" family. As mentioned above, they do not serve to increase the measure of the clustering coefficient, as the pair is likely not part of the same cluster. As inspiration, "Collective dynamics of 'small-world' networks" is cited, but I do not see the relation to the algorithm or results presented in that study. A clearly explained motivation for the alternatives (and maybe for the individual algorithms, not just the larger families) would strengthen the result. 

      Related to the interpretation of results as they are presented in Fig3a, bottom left: What data points exactly go into each colored box? Specifically, into the purple box? What exactly is meant by "top performing networks across the main categories"? Compared with Supp Fig S4, it seems as if the authors do not select the best model out of a family and instead pool the various models that are part of the same family, albeit each with their optimized gamma and eta. Otherwise, the purple box at DIV14 in Fig3 would be identical to "degree average" at DIV14 in S4. If true, I find this problematic, as visually, the performance of one family is made to look weaker by including weak-performing models in it. I am sure one could formulate a weak-performing homophily-based rule that drives the red box up. If such pooling is done for the statistical tests in Supp Tables 3-7, it is outright misleading! (For some cases "degree average" seems not significantly worse than the homophily rules.)

      The next finding is related to the development of connectivity over the days in vitro. Here, the authors compare the connectivity states the network model goes through as the algorithm builds it up, to connectivity in-vitro in younger cultures. They find comparable trajectories for two global topological parameters. <br /> Here, once again it is a strength that the authors considered additional parameters outside the ones used in fitting. However, it should be noted that the values for "global efficiency" at DIV14 (the very network that was optimized!) are clearly below the biological values plotted, weakening the generality of the previous result. This is never discussed in the text.

      The conclusion of the authors in this part derives from values of modularity decreasing over time in both model and data, and global efficiency increasing. The main impact of "time" in this context is the addition of more connections, increasing edge density. And there is a known dependency between edge density and the bounds of global efficiency. I am not convinced the result is meaningful for the conclusion in this state. If one were to work backwards from the DIV14 model, randomly removing connections (with uniform probabilities): would the resulting trajectory match DIV12, DIV10, and DIV7 equally well? If so, the trajectory resulting from the "matching" algorithm is not meaningful.
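      The backward-pruning control suggested here could be sketched as follows (a hypothetical helper, not code from the paper; it assumes the fitted DIV14 network is available as an edge list, and the snapshot sizes would be chosen to match the younger cultures' edge counts):

```javascript
// Illustrative sketch of the suggested null model: starting from the full
// edge list, delete edges uniformly at random and take snapshots at chosen
// intermediate edge counts (e.g. the counts observed at DIV12/10/7).
function pruneUniformly(edges, snapshotSizes) {
  const remaining = edges.slice();
  const trajectory = [];
  while (remaining.length > 0) {
    // remove one uniformly chosen edge per step
    const i = Math.floor(Math.random() * remaining.length);
    remaining.splice(i, 1);
    if (snapshotSizes.includes(remaining.length)) {
      trajectory.push(remaining.slice()); // snapshot of the pruned network
    }
  }
  return trajectory;
}
```

      Topological measures (modularity, global efficiency) computed on these snapshots would then show whether the generative algorithm's build-up trajectory carries any information beyond the change in edge density alone.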

      Further, the conclusion of the authors implies that connections in the cultures are formed as in the algorithm: one after another over time without pruning. This could be simply tested: How stable are individual connections in vitro over time (between DIV)? 

      The next finding is that at higher densities, the connections formed by the neurons still have very comparable structures, only differing in clustering and range; and that the same generative algorithm is optimal for modelling them. I think in its current state, the correlation analysis in Fig. 4a supports this conclusion only partially: Most of these correlations are not surprising. Shortest path lengths feature heavily in the calculation of small worldness and efficiency (in one case admittedly the inverse). Also for example network density has known relations with other measures. The analysis would be stronger if that was taken into account, for example showing how correlations deviate from the ones expected in an Erdos-Renyi-type network of equal sizes.

      Yet, overall the results are supported by the depicted data and model fits in Supp. Fig S7. With the caveat that some of the numerical values depicted seem off: <br /> What are the units for efficiency? Why do they take values up to 2000? Should be < 1 as in 4b. Also, what is "strength"? I assume it's supposed to be the value of STTC, but that's not supposed to be >1. Is it the sum over the edges? But at a total degree of around 40, this would imply an average STTC almost three times higher than what's reported in Fig 1i. Also, why is the degree around 40, but between 1000 and 1500 in Fig S2? <br /> Finally, it should be mentioned that "degree average" seems (from the boxplot) to work equally well.

      Further, the conclusion of the "matching" algorithm equally fitting both cases would be stronger if we were informed about the wiring parameters (η and γ) resulting in both cases. That way we could understand: Is it the same algorithm fitting both cases or very different variants of the same? It is especially crucial here, because the η and γ parameters determine the interplay between the distance- and topology-dependent terms, and this is the one case where a very different set of pairwise distances (due to higher density) are tested. Does it really generalize to these new conditions?

      Conversely, the results relating to GABAa blocking show a case where the distances are comparable, but the topology of functional connectivity is very different. (Here again, the contrast between structural and functional connectivity could be made a bit clearer. How is correlational detection of connections affected by "bursty" activity?) The reduction in tentative inhibition following the application of the block is convincing.

      The main finding is that despite very different connectivities, the "matching" algorithm still fits best. This is adequately supported by applying the previous analyses to this case as well. <br /> The authors then interpret the differences between blocked and control by inspection of the η and γ parameters, finding that the relative impact of the distance-based term is likely reduced, as a lower (less negative) exponent would lead to more equal values for different distances. This is a good example of inspecting the internals of a generative algorithm to understand the modeled system, and it is confirmed by longer edge lengths in Supp Fig. S12C.

      The authors further inspect the wiring probabilities used internally at each step of the algorithm and compare across conditions. They conclude from differences in the distribution of P_ij values that the GABAa-blocked network had a "more random" topology with "less specific" wiring. This is the opposite of the conclusion I would draw, given the depicted data. This may be partially because the authors do not clearly define their concept of "random" vs. "specific". I understand it to be the following: At each time step, one unconnected pair is randomly picked and connected, with probabilities proportional to P_ij, as in Akarca et al., 2021; "randomness" then refers to the entropy of that process. In that case, the "most random" or highest entropy case is given by uniform P_ij values, which would be depicted as a delta peak at 1 / n_pairs in the present plot. A flatter distribution would indicate more randomness if it was the distribution of P_ij over pairs of neurons (x-axis: pairs; y-axis P_ij). The conclusion should be clarified by the use of a mathematical definition and supported by data using that definition.
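      The entropy definition asked for here can be made concrete with a small sketch (function and variable names are mine, not the paper's): treating the per-pair wiring probabilities P_ij at one step as a distribution over unconnected pairs, Shannon entropy quantifies the "randomness" of the wiring process, and uniform P_ij is the maximum-entropy case.

```javascript
// Shannon entropy (in bits) of a vector of non-negative weights,
// normalized to a probability distribution. Uniform weights give the
// maximum entropy ("most random" wiring); a peaked distribution means
// more specific wiring.
function entropy(p) {
  const total = p.reduce((a, b) => a + b, 0);
  return -p
    .map((x) => x / total)
    .filter((x) => x > 0) // 0 * log(0) is taken as 0
    .reduce((a, x) => a + x * Math.log2(x), 0);
}

const uniform = [0.25, 0.25, 0.25, 0.25]; // maximum entropy: 2 bits
const peaked = [0.97, 0.01, 0.01, 0.01];  // specific wiring: lower entropy
```

      Under this definition, a "flatter" histogram of P_ij values is not automatically more random; what matters is how far the distribution over pairs is from uniform.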

      Next, the methods are repeated for various cultures of human neurons. I have no specific observations there.

      In summary, while I think the most important methods are sound, and the main conclusions (reflected in the title of the paper) are supported, the analysis of more specific cases (everything from Fig 3e onwards, except for Fig 5) requires more work as in the current state their conclusions are not adequately supported.

    1. We are little kids that just watched Indiana Jones and so we find some old bungee cords and the hooks of those bungee cords find themselves into our belt loops and we tie the other side’s around the tree and now we are

      Now, with the internet (for example, TikTok), if I see someone do a dance I like, I do it. It's crazy how much the internet has changed the world.

    1. themed restaurant at Disney where every dish is named after a beloved character

      This CHALLENGES. I don't think that naming dishes after characters is immersive theming--I just think it's corny.

    1. A willingness to find meaning in collective patterns may be especially necessary for disciplines that study the past. But this flexibility is not limited to scholars. The writers and artists who borrow language models for creative work likewise appreciate that their instructions to the model acquire meaning from a training corpus. The phrase “Unreal Engine,” for instance, encourages CLIP to select pictures with a consistent, cartoonified style. But this has nothing to do with the dictionary definition of “unreal.” It’s just a helpful side-effect of the fact that many pictures are captioned with the name of the game engine that produced them.

      I think this answers question 2, "how does the writer establish their credibility": at the beginning the author explains that writers and artists borrow knowledge, and at the end of the paragraph there is a link to another source for credibility!

    1. Another example of intentionally adding friction was a design change Twitter made in an attempt to reduce misinformation: When you try to retweet an article, if you haven’t clicked on the link to read the article, it stops you to ask if you want to read it first before retweeting.

      I never knew that this was a thing. I feel like it can be really useful if the tweet is from a source that is reputable. But say you are trying to retweet that something is a scam site or a site that steals IP addresses: adding friction here just makes the person trying to help others go through a bit more work. Even though it's very minimal, I still think that just these small things can make someone not want to retweet anymore.

    1. The changing politics of credit in the United States helps illuminate these differences. Until the 1970s, broad demographic characteristics such as gender or race–or high modernist proxies such as marital status or the redlining of poor, primarily Black neighborhoods–were routinely used to determine a person’s creditworthiness. It is only when categorical discrimination was explicitly forbidden that new actuarial techniques, aimed at precisely scoring the “riskiness” of specific individuals, started to flourish in the domain of credit.14 This did not just change how lenders “saw” individuals and groups, but also how individuals and groups thought about themselves and the politics that were open to them.15 Redlining was overt racial prejudice, visible to anyone who bothered looking at a map. But credit scoring turned lending risk evaluation into a quantitative, individualized, and abstract process. Contesting the resulting classifications or acting collectively against them became harder. Later, the deployment of machine learning–which uses even weaker signals to make its judgments, like using one’s phone’s average battery level to determine their likelihood to repay their loan–made the process of measuring creditworthiness even more opaque and difficult to respond to.1

      Credit is an excellent example also because it's so pointy

    1. within such spaces, participants have found the capacity to establish protective codes of conduct and clear lines of accountability.

      What is this sentence meant to do? If all it's doing is listing "protective codes of conduct" and "clear lines of accountability" as ways that feminists establish intentionally bounded gathering spaces, you can just say "for example, by X and Y". Phrasing it this way is confusing and also spends your word count unnecessarily (you could use those words elsewhere, to provide much-needed elaboration for your many interesting references)

    2. tools.”334

      Okay, wrapping up this deep reading of the paragraph. I think there are a few things going on here.

      First, this paragraph is trying to do way too many things at once. It is trying to introduce "listening at scale", constraint vs enabling, and "tools for conviviality", but because you're trying to fit that all in one paragraph there isn't space to even explain what each thing means, let alone how they relate to each other. Each of these concepts should be at least a paragraph, if not more.

      I think this is symptomatic of a broader issue with the book, which is that it tries to talk about everything and as such doesn't have much time to either teach the reader, or try to persuade the reader. (You do make arguments but you spend so little time on each one, and there are so many, that I find it hard to remember what they even are.)

      In the fiction-writing world we have a concept called "kill your darlings". That doesn't mean "kill off your favorite characters", it means: be willing to cut your favorite characters, scenes, plotlines, descriptions, dialogues, etc, if it is not serving the story. Every single sentence has to justify its existence. If I was your editor I would ask you to look at the book as a whole, determine the half of your arguments/references/quotes/concepts that feel the most crucial to what you're trying to convey, and cut the rest. Then you could use the freed up space to actually explain these concepts and how they relate to your larger arguments.

      I know you're pretty far along in the process, so "cut out half the book" is not helpful advice, but perhaps an approach to consider next time. You clearly have a ton to say and a lot of amazing references/projects/sources to mention, but the way it's done here just feels like it's not making the most of either your talent as a writer nor the material you're trying to present.

    1. The Counseling Psychologist, p. 26, Table 1. Criteria and Related Measures for Assessing Expertise (criteria and possible ways of assessing them):

       1. Performance: A. Client-rated working alliance; B. Client-rated real relationship; C. Observer-rated responsiveness; D. Use of observer-rated theoretically appropriate interventions; E. Observer-rated competence; F. Client-rated multicultural competence; G. Observer-rated responsiveness; H. Supervisor-rated competence or responsiveness
       2. Cognitive functioning: A. Observer-rated assessment of cognitive processing; B. Observer-rated assessment of case conceptualization ability
       3. Client outcomes: A. Engagement in therapy (percentage of clients who return after intake)/dropout rates; B. Clinically significant change on reports by clients, therapists, significant others, or observers using measures of symptomatology, interpersonal functioning, quality of life/well-being, self-awareness/understanding/acceptance, satisfaction with work; C. Behavioral assessments (e.g., fewer missed days of work, fewer doctor visits)
       4. Experience: A. Years of experience; B. Number of client hours; C. Variety of clients; D. Amount of training; E. Amount of supervision; F. Amount of reading
       5. Personal and relational qualities of the therapist: A. Self-rated self-actualization, well-being, quality of life, lack of symptomatology, reflectivity, mindfulness, flexibility; B. Empathy ability (self-rated, nonverbal assessments, observer ratings); C. Nonverbal assessments of empathy
       6. Credentials: A. Graduation from an accredited training program; B. Board certification
       7. Reputation: A. Professional interactions; B. Advancement to positions of honor within organizations based on recognition of clinical expertise; C. Positive feedback and referrals from clients; D. Reports from colleagues/friends; E. Invitations to demonstrate methods in videos, workshops, or books; F. Lack of ethical complaints
       8. Therapist self-assessment: A. Evaluation of own skills

       Note. The criteria are listed in the order of perceived relevance to assessing expertise, from 1 (most relevant) to 8 (least relevant).

      Thoughts: So far it appears there is no law about who can diagnose. What there is:

      - a description of a rubric to grade an expert witness
      - a general description that states one cannot operate outside one's area of training and competence (but how to define that area is absent)
      - core services / FFPSA law mandating evidence-based, trauma-informed, Clearinghouse-designated, best-available-science services that meet the particular needs of the family
      - law (or draft law) defining trauma-informed
      - licensing and professional associations' standards and codes of ethics regarding non-black-and-white values and efforts mandates
      - laws that say whether you can call yourself a doctor, therapist, etc., but none of them limit what you can or cannot do
      - therefore, legally, anyone can diagnose anyone with anything, including DSM codes, and you can take money for it... you just can't call yourself any of the protected titles

      So, when it comes to who is "legally qualified" or a "legally allowed expert" (which concerns just the expert, and not ultimately the credibility of the "evaluation/recommendation"), it comes down to who can provide a stronger argument that the expert in question is "more expert" than the other "expert". It's the exact same concept as scientific theory. You can't "prove" a scientific theory. You can only provide increasingly stronger arguments that it is true (which ultimately just means, whether for good reasons or bad, that it feels stronger or better). Likewise, you can't prove "expertise" or that an eval is correct. However, you can "disprove" expertise or a scientific theory.

      In psychotherapy there is an enormous gap: no system that gives a credible prediction of what a "provider" is likely to be able to soundly evaluate (and, further, no system for them to soundly know when and how to refer out). Perhaps some kind of "certifications needed" section for each DSM code.

      So what you can do is:

      - use the defined law and professional orgs' law and ethics as rubrics (like a grading table) to make an argument for the strongest expert; the table in this paper is a good one to incorporate
      - you can also get more than one expert, or experts from different areas, all of them agreeing
      - strategy: also send the evaluation off to a credible authority to get their endorsement
      - strategy: do that memorandum thing (ABA guide on how to influence judges) to submit law and argument to the judge in advance
      - all of this is the exact same issue, concept, and strategy as battling "reasonable efforts"

    1. This won't work if your archive is "too big". This varies by browser, but if your zip file is over 2GB it might not work. If you are having trouble with this (it gets stuck on the "starting..." message), you could consider: unzipping locally, moving the /data/tweets_media directory somewhere else, rezipping (making sure that /data directory is on the zip root), putting the new, smaller zip into this thing, getting the resulting zip, and then re-adding the /data/tweets_media directory (it needs to live at "[username]/tweets_media" in the resulting archive). Unfortunately, this will include media for your retweets (but nothing private) so it'll take up a ton of disk space. I am sorry this isn't easier, it's a browser limitation on file sizes.

      Contra [1], the ZIP format was brilliantly designed and natively supports a solution to this; ZIP was conceived with the goal of operating under the constraint that an archive might need to span multiple volumes. So just use that.

      1. https://games.greggman.com/game/zip-rant/
    1. I am extremely gentle by nature. In high school, a teacher didn’t believe I’d read a book because it looked so new. The binding was still tight.

      I see this a lot, and it seems like it's a lot more prevalent than it used to be: reasoning from a proxy. Like trying to suss out how competent someone is in your shared field by looking at their GitHub profile, instead of just asking them questions about it (e.g. the JVM). If X is the thing you want to know about, then don't look at Y and draw conclusions that way. (See also: the X/Y problem.) There's no need to approach things in a roundabout, inefficient, error-prone manner, so don't bother trying unless you have to.

    1. Abstract

      Reviewer 1: Gavin M. Douglas

      Piro and Renard present GRIMER, which is a bioinformatics tool for summarizing microbiome taxonomic data in various ways, with the main purpose of identifying putatively contaminant taxa. The authors convincingly argue that there is great value in looking at several different aspects of a dataset when determining which taxa are potential contaminants. I think this tool could be very useful for the field, but at the moment there are several places where users might be confused and perhaps overwhelmed without more documentation.

      The main point of confusion I'm concerned about is regarding the "common contaminants". It's not convincing that you can just classify a taxon as a contaminant regardless of what environment is being profiled. Also, under this approach, if a taxon is identified once as a contaminant in an earlier study, would it then be classified as a contaminant in all datasets processed by GRIMER? This would mean that a lot of high-abundance taxa in certain environments would be wrongly thrown out. For instance, you can imagine high-abundance taxa on the human skin might be more likely to be contaminants during sequencing preparation, but of course many researchers are very interested in profiling the skin microbiome. I think the authors realize this, but I'm concerned that typical users may not appreciate this point. Explicit discussion of this point in the discussion is needed, and an example of how this might look in practice (e.g., if skin microbiome samples were input to GRIMER, as part of a larger online tutorial [see next point]) would help avoid this mistake.

      The authors do a great job of walking through some results in the text, but more documentation is needed for the reports. The authors should include a basic tutorial that provides example input files and then walks through each individual tab. This could be done entirely through text with screenshots of GRIMER, or perhaps with a video tutorial. In addition, someone just opening the example reports will surely wonder what data was produced by GRIMER (e.g., they might wrongly think GRIMER did the taxonomic classification) and what data was needed as input.

      The authors should expand on how the correlation step is used to identify contaminants. There is great interest in identifying clusters of co-occurring taxa, so identifying a cluster of 9 genera in Figure 5 doesn't seem like evidence of contamination to me. Perhaps it is when considered alongside other lines of evidence, but this should be made clearer. Currently the legend implies that it alone points to reagent-derived contamination.

      The figure text needs to be increased in size. Using more panels split across additional rows and removing unnecessary info (e.g., not all control categories need to be shown in Figure 1) would make these figures easier to interpret. I realize that you were hoping to use the raw GRIMER figures, but based on the current display items they do not seem publication-ready.

      The acronym WGS generally refers to "whole genome sequencing" (i.e., for single isolate organisms), not "whole metagenome sequencing". The standard acronym for the latter is "MGS", for "metagenomics". Also, the term "shotgun metagenomics sequencing" is most commonly used in this context; I've never come across "whole metagenome sequencing" before. Either way, "WGS" will mislead casual readers with the current usage, so this should be changed on your website and in the manuscript.

      The taxa parsing capabilities sound like they will save a lot of tedious, manual data mapping! Just checking - how does it perform with new taxa names / typos?

      Text edits:

      - L11 - "are challenging task" should be "is challenging"
      - L12 - can remove "by design"
      - L12 - "helping to" should be "to help"
      - L13 - "can potentially be a source" I think should be "that could reflect"
      - L14 - "evidences" should be "evidence"
      - L13 + L14 - Unclear what is meant by "external evidences, aggregation of methods and data and common contaminant" - should be clarified
      - L15 - "that perform" should be "that performs"
      - L17 - "towards contamination detection" should be something like "to help detect contamination"
      - L41 - "hypothesis" should be "hypotheses"
      - L42/43 - "analysis can hardly be fully" should be something like "the required analysis is difficult to fully…"
      - L56 - "technicians body" should be "a technician's body"
      - L60 - "strongly affects environmental" should be "especially environmental," (note comma)
      - L64 - "ideal scenario for an" should be "an ideal scenario for"
      - L67 - "not to bias measurements and not to" should be reworded, possibly as: "to not bias measurements and to ensure that bias is not propagated into databases"
      - L75 - "were proposed. They are " should be "have been proposed. These are"
      - L77 - "among others" should be ", and others" (note comma)
      - L79 - "increase in costs" should be "the required increase in costs"
      - L88 - add "a" before focus
      - L90, L196, L265, and elsewhere - "evidences" should be "evidence"
      - L99, L104, L117, and possibly elsewhere - "analysis" should be "analyses" (when plural)
      - L106 - "each samples/compositions" should be "each sample/composition"
      - L110 - add "a" before taxonomy database and "the" before "DNA concentration"
      - L132 - "specially" should be "especially"
      - L134 - remove "a" before "the"
      - L151 - add "of" after "thousands"
      - L182 - "is" should be "are"
      - L196 - "evidences" should be "evidence". And rather than "Evidences towards" it would be correct to say "Evidence for" or "Evidence supporting"
      - L208 - add "the" before "overall"
      - L246/247 - "generated several studies and investigations" should be something like "motivated several investigations"
      - L248 - should be something like "from the maternal and fetal sides"
      - L279 - remove "a"
      - L280 - Add "the" before "Jet"
      - L284 - capitalize "Qiita" and re-word "Pick closedreference OTUs with 97% annotated with greengenes taxonomy"
      - L293 - Should be "Furthermore" rather than "Further"
      - L295 - I think it should be "with low and high human exposure, respectively"? Or do you mean they both have highly variable exposure?
      - L297 - "could be a also an" should be "could be driven by an"
      - L300 - "against" should be "and"
      - L304 - "correlated genus" should be "correlated genera" (and in other cases, such as in the Fig 5 and 6 legends, where "genus" should be the plural, i.e., "genera")
      - L305 - "Such pattern" should be "Such a pattern"
      - L307 - Should be "groups" rather than "organisms groups", or just "genera" as I believe each is a genus
      - L313 - Remove "a"
      - Fig 5 legend: "point" should be "points"
      - Fig 6 legend: "taxa is abundant" should be "This taxon is abundant" and "inversely correlate" should be "inversely correlated". "a contamination evidence" should be "potential contamination"

    1. there

      Reviewer4-Madeleine Geiger

      This well-written study integrates different approaches and methodologies to tackle the still obscure nature and origin of the dingo and its sub-populations by thoroughly characterising and comparing an "archetype" dingo specimen. I have read and commented on the abstract and the introduction, as well as the morphology-related parts of the methods, the results and the discussion. The methods of morphological comparison, as well as their description and the reporting of the results, are sound. However, in some sections it is difficult to comprehend the results and their interpretations, as well as the significance and nature of the suggested "archetype" specimen Cooinda. I therefore made some suggestions for additions and edits to the text and the figures, which hopefully help to increase the comprehensibility and consistency of the text (see my comments below). I could not check and comment on the raw data because the links to the supplement given in the manuscript (figshare) do not work. Sorry if I'm stating the obvious here, but being able to access the raw data is particularly important if the described dingo should act as a reference archetype.

      L. 74: Add «of the dingo» after "ecotypes": "[…] compare the Alpine and Desert ecotypes of the dingo […]". Otherwise it's not really clear what this is about.

      L. 91: It's unclear to me what you mean by "this female". I would suggest exchanging this expression for the previously used name of the animal.

      L. 94 ff.: The conclusions do not really fit with the rest of the abstract, specifically the aims as stated in the beginning. What I read from the "Background" section is that this work is about defining a "dingo archetype" via different approaches (genetic and morphological). The conclusion, however, is centred around the individual Cooinda. I would suggest opening up this section to also make conclusions concerning the previously stated aims of the paper.

      L. 105 ff. and L. 369 and L. 508: A very nice opening! However, I feel that there is a somewhat misleading interpretation of the domestication process as a discrete trichotomy: wild > tamed > domesticated, when in fact domestication is a continuum with various stages in between the two extremes of the "wild" and the "intensively bred". There are various forms - even today - of "half-domesticated" populations, such as, e.g., many of the Asian domestic bovids, or the reindeer. Thus, I would strongly argue that the dingo - although special due to the almost complete lack of human influence on its evolution in the last millennia - is not the only link between the "wild" and the "domesticated". See e.g.: Vigne, Jean-Denis. "The origins of animal domestication and husbandry: a major change in the history of humanity and the biosphere." Comptes rendus biologies 334.3 (2011): 171-181.

      L. 117: How do you define "large carnivore"? And: Are dogs more numerous than cats? I don't know the tallies overall, but in many parts of the world domestic cats are more frequent than dogs.

      L. 120 - 121: I think this sentence does not contribute to the manuscript and I would suggest deleting it. I also think that these are not the usual characteristics to discern the wolf from other canids.

      L. 123 - 125: I do not understand this distinction. In my opinion, the dingo could well be both a tamed intermediate between wolf and domestic dog AND a feral canid. If I understand the current view of dingo evolution correctly, the dingo most probably constitutes an early domestic stage of the dog, which became feral.

      L. 150: I do not understand the reference to Figure 1 at this point. If you want to keep the figure reference at this place, I would recommend extending the legend in order to be more descriptive about the significance of this individual dingo. Also: Is the question mark on purpose?

      Intro and Results in general: Cooinda is central for the research question and the paper. However, I do not really understand her position and significance right away from the text. Maybe this is just a matter of the sequence of the paragraphs (some information is given at the beginning of the methods section at the end of the manuscript), but I think it would be crucial to introduce and explain Cooinda and her role (as a kind of reference "archetype") thoroughly early on, preferably in the Introduction. This would, e.g., also include: why, of all the dingoes in Australia, is Cooinda an appropriate choice to function as the "archetype"? Further, it would be helpful to have a figure showing the geographical distribution of the compared populations (alpine and desert, as well as Cooinda's origin) to better understand the setting.

      L. 320 ff. and Figure 5: Would it be possible to add a visualisation of the shape changes described in the text into the figure? It is otherwise impossible to evaluate these shape changes.

      L. 328 - 345: It would be interesting to pursue the variation along PC2 further: Do you maybe have information from the raw data on whether specimens of both the alpine and the desert group that were found to have particularly low or high values for PC2 are especially young and female, or old and male? In other words, do you find evidence in the dataset that there is an actual age and/or sex gradient along PC2? And what age was Cooinda when she died?

      L. 347: As also pointed out below, it would be important to note somewhere if these two specimens died at about the same time and/or were similarly treated (because of brain shrinkage in specimens that were frozen or otherwise fixed for a long time).

      L. 472: I would suggest rewriting as: "Cooinda's brain was 20% larger than that of a similarly sized domestic dog […]". Further, I do not agree with the rest of the statement in this sentence. One of the hallmark characteristics of domestication is brain size reduction, which might be the result of selection for tameness (which you also describe later on). However, selection for tameness (an evolutionary process within a population) is not the same as taming (on the level of the individual). I would therefore suggest re-writing this sentence. Further, and in general concerning the brain size part of this study: It would greatly increase the significance of this part of the work if you compared the dingo brain size not only to one domestic dog, but set it into a larger context. There are plenty of published references for wolf, domestic dog, and dingo brain size estimates, and it would be enlightening to compare your findings with those. Of course, there are methodological issues, but maybe a meaningful comparison is possible for some of them. For this I could recommend this review article: Balcarcel, A. M., et al. "The mammalian brain under domestication: Discovering patterns after a century of old and new analyses." Journal of Experimental Zoology Part B: Molecular and Developmental Evolution (2021).

      L. 483: Many of the surviving populations of re-introduced (i.e., feral) domestics were part of a fauna that did not correspond to that of their wild relatives, but was somehow characterised by reduced predation or competition. This was certainly the case for the dingo (few other large predators in Australia) and for some island populations. Maybe you should double-check if this is really the case for the provided examples, but maybe it would be better to write that brain size reduction persists in feral populations at least under certain circumstances.

      L. 527: Why is it important that the reference dingo is a female? Please explain.

      L. 535 ff.: Please explain the significance of these special characteristics. Why and how are they special and important for the current study? Also: I'm not a native speaker, but I have the impression that some of the sentences in this section are a bit unusual. Please double-check the grammar.

      L. 739: What do you mean by "below" in the brackets?

      L. 741: Is this the right figure reference? I do not find this figure. Do you mean supplementary Figure 9a?

      L. 744 - 745: Could you briefly explain in one sentence the nature and number etc. of the landmarks used in this reference study? (For those who cannot check the referenced work.) This would be quite important to be able to interpret the results.

      L. 744: Delete "earlier".

      L. 755: Could you briefly explain here if these were freshly dead specimens, or if they were already older (e.g. frozen, stored in a liquid etc.)? This has some implications for brain morphology and size.

      L. 784 ff.: The figshare links don't work.

      L. 884: I would suggest re-writing the sentence like this: "This was required because the brain was removed immediately after death, which caused some damage to the braincase."

      Supplementary Figure 9c: It's hard to match the reds of the convex hulls with the reds of the legend. Would it be possible to write the names right next to the corresponding convex hulls?

      L. 895: Position remains the same relative to which other analysis? Maybe make a reference to the text and/or a figure (I guess Fig. 5) here.

    1. shocked that a government arts grant should go to a person who had photographed a crucifix submerged in a vial of urine. (Did Andres Serrano think, Beautiful! when those contact sheets came back?)

      You can't dictate to someone else that what they find beautiful is not beautiful because of your own beliefs. Just say it's not your thing. Belief is the real beauty, so if someone believes something is beautiful, then some part of it is.

    2. We understand it, and we don't. It's irreducible; it can't be summarized or described; we feel something we can't describe.

      I think that this statement is just about life in general. We understand why things happen, but it can be confusing at the same time. It's the same way with art: you can see something that is beautiful but not know why it is beautiful.

    1. Benefits of sharing permanent notes

      reply to u/bestlunchtoday at https://www.reddit.com/r/Zettelkasten/comments/12gadut/benefits_of_sharing_permanent_notes/

      I love the diversity of ideas here! So many different ways to do it all and perspectives on the pros/cons. It's all incredibly idiosyncratic, just like our notes.

      I probably default to a far extreme of sharing the vast majority of my notes openly to the public (at least the ones taken digitally which account for probably 95%). You can find them here: https://hypothes.is/users/chrisaldrich.

      Not many people notice or care, but I do know that a small handful follow and occasionally reply to them or email me questions. One or two people actually subscribe to them via RSS, and at least one has said that they know more about me, what I'm reading, what I'm interested in, and who I am by reading these over time. (I also personally follow a handful of people and tags there myself.) Some have remarked at how they appreciate watching my notes over time and then seeing the longer writing pieces they were integrated into. Some novice note takers have mentioned how much they appreciate being able to watch such a process of note taking turned into composition as examples which they might follow. Some just like a particular niche topic and follow it as a tag (so if you were interested in zettelkasten perhaps?) Why should I hide my conversation with the authors I read, or with my own zettelkasten unless it really needed to be private? Couldn't/shouldn't it all be part of "The Great Conversation"? The tougher part may be having means of appropriately focusing on and sharing this conversation without some of the ills and attention economy practices which plague the social space presently.

      There are a few notes here on this post that talk about social media and how this plays a role in making them public or not. I suppose that if I were putting it all on a popular platform like Twitter or Instagram then the use of the notes would be or could be considered more performative. Since mine are on what I would call a very quiet pseudo-social network, but one specifically intended for note taking, they tend to be far less performative in nature and the majority of the focus is solely on what I want to make and use them for. I have the opportunity and ability to make some private and occasionally do so. Perhaps if the traffic and notice of them became more prominent I would change my habits, but generally it has been a net positive to have put my sensemaking out into the public, though I will admit that I have a lot of privilege to be able to do so.

      Of course for those who just want my longer form stuff, there's a website/blog for that, though personally I think all the fun ideas at the bleeding edge are in my notes.

      Since some (u/deafpolygon, u/Magnifico99, and u/thiefspy; cc: u/FastSascha, u/A_Dull_Significance) have mentioned social media, Instagram, and journalists, I'll share a relevant old note with an example, which is also simultaneously an example of the benefit of having public notes to be able to point at, which u/PantsMcFail2 also does here with one of Andy Matuschak's public notes:

      [Prominent] Journalist John Dickerson indicates that he uses Instagram as a commonplace: https://www.instagram.com/jfdlibrary/ here he keeps a collection of photo "cards" with quotes from famous people rather than photos. He also keeps collections there of photos of notes from scraps of paper as well as photos of annotations he makes in books.

      It's reasonably well known that Ronald Reagan shared some of his personal notes and collected quotations with his speechwriting staff while he was President. I would say that this and other similar examples of collaborative zettelkasten or collaborative note taking and their uses would blunt u/deafpolygon's argument that shared notes (online or otherwise) are either just (or only) a wiki. The forms are somewhat similar, but not all exactly the same. I suspect others could add to these examples.

      And of course if you've been following along with all of my links, you'll have found yourself reading not only these words here, but also reading some of a directed conversation with entry points into my own personal zettelkasten, which you can also query as you like. I hope it has helped to increase the depth and level of the conversation, should you choose to enter into it. It's an open enough one that folks can pick and choose their own path through it as their interests dictate.

    1. Recommended Resource:

      I recommend adding this doctoral research article on developing open education practices (OEP) in British Columbia, Canada. The scholarly article is released by Open University, a U.K. higher education institution that promotes open education.

      Paskevicius, M. & Irvine, V. (2019). Open Education and Learning Design: Open Pedagogy in Praxis. Open University, 2019(1). DOI: 10.5334/jime.51

      A relevant excerpt from the article reveals the study results that show OEP enhances student learning:

      "Furthermore, participants reflected on how inviting learners to work in the open increased the level of risk and/or potential reward and thereby motivated greater investment in the work. This was articulated by Patricia who suggested “the stakes might feel higher when someone is creating something that’s going to be open and accessible by a wider community” as well as Alice who stated “students will write differently, you know, if they know it’s not just going to their professor.” The practice of encouraging learners to share their work was perceived by Olivia to “add more value to their work,” by showing learners the work they do at university can “have an audience beyond their professors.”"

    1. Snapchat may share your data with other Snapchatters, business partners, the general public, affiliates, and third parties.

      I think Snapchat is a great example of an app that a lot of us use daily that collects a LOT of information. It knows where you are, who you're with, where you go to school, and who you communicate with the most. Not only that, but it stores images of you...I can't fully wrap my head around what that even means / what that would look like, or how you could really guarantee a picture you took was deleted if you wanted it to be. It's one of the reasons why my parents didn't allow me to have social media until high school, and why I really hate to see young children on social media. There just seem to be so many risks!

    2. Similarly, end-user license agreements (EULA) and terms of service (TOS) agreements feature opaque language that may cause you to give away your right to privacy without truly understanding what you are doing when you click “I agree.”

      I can relate to this a lot because I often will just click the "agree" button without actually reading through it because it's too long.

    3. Similar to Snapchat, Twitter collects, uses, and shares a significant amount of data from users.

      This is so scary about social media, and it happens with every app, website, and digital tool. It is so hard to tell when a website or app is tracking you, or collecting your data. It truly feels like we have no privacy anymore, even when you think it's a "private" message. Apps and websites get you in this way because yes, they have a privacy policy, but it's microscopic print, 100 pages long, and not easily accessible. Who is going to actually read that? No one really knows what it says in any privacy policy statement online. I feel like the privacy invasion and collection of data from online users is way too intrusive and just crazy to think about. Where is my information going? Who is seeing it? How long will it be out in the world? Everyday, when I use Tiktok or Instagram, I get videos, photos, and ads that are completely geared towards me and my interests. Sometimes, I even get random posts on my feed about something completely from left field that I had been thinking about earlier that week! The whole thing is so creepy and I wish the internet was safer and more private. It is important for kids and adults to understand the risks of using digital tools, in school or out of school, and act accordingly.

    1. finding a way to do a "git pull" without having to write a commit message (does --rebase do that?) would help in a huge way

      It might "help" but it defeats the entire purpose of the recordkeeping endeavor.

      If you don't care about the recordkeeping aspect and are just using Git to sync stuff between machines, then you're not really using Git and should stop trying to use it and use something else. (A better option, of course, is to think about it long enough to understand why recordkeeping is good and then take the time to write commit messages that don't suck and not treat it as an arbitrary and pointless hurdle. It's not pointless; there's a reason it was put there, after all.)
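      To answer the quoted question directly: yes, `git pull --rebase` does avoid the merge-commit prompt, because it replays your local commits on top of the fetched ones instead of creating a merge commit (your own commits keep the messages you already wrote). A minimal sketch in a throwaway directory; the repo and file names here are made up purely for illustration:

      ```shell
      # Demo: `git pull --rebase` when local and remote histories diverge.
      # Everything happens in a temporary directory; no real repo is touched.
      set -e
      tmp=$(mktemp -d)
      cd "$tmp"
      export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
      export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

      # A bare "server" repo whose default branch is main.
      git init -q --bare origin.git
      git -C origin.git symbolic-ref HEAD refs/heads/main

      # Alice clones, makes the first commit, and pushes it.
      git clone -q origin.git alice
      git -C alice symbolic-ref HEAD refs/heads/main
      echo "base" > alice/notes.txt
      git -C alice add notes.txt
      git -C alice commit -q -m "base"
      git -C alice push -q -u origin main

      # Bob clones, then both sides commit independently: divergence.
      git clone -q origin.git bob
      echo "alice line" > alice/alice.txt
      git -C alice add alice.txt
      git -C alice commit -q -m "alice: add alice.txt"
      git -C alice push -q origin main
      echo "bob line" > bob/bob.txt
      git -C bob add bob.txt
      git -C bob commit -q -m "bob: add bob.txt"

      # Plain `git pull` here would create a merge commit and prompt for a
      # message; --rebase replays Bob's commit on top of Alice's instead.
      git -C bob pull -q --rebase origin main
      git -C bob log --oneline
      ```

      The resulting history in Bob's clone is linear, with no merge commit and no editor ever opening, which is exactly what the quoted poster was after.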

    2. The class was sharp and realized there had to be a better way. I said git worked better if everyone took their turn and did check-ins one-at-a-time.

      Except, of course, branching and merging mean that this hurdle isn't a necessary one. Git was designed from the beginning so that this would be a non-issue (or at least not as bad as what this class experienced); that's where the D in DVCS comes from, after all...

      (And I thought that's where this was going—! Rather than just giving people the solution—in this case branches/remotes—and telling them to use it, then what you do is you let them experience the problem firsthand and then can appreciate the solution and why it's there. Really surprised that's not where this ended up.)
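      For the curious, the branch-and-merge workflow that makes one-at-a-time check-ins unnecessary can be sketched in a few commands (throwaway repo, made-up names): two people commit on their own branches concurrently, and merging combines the work without anyone waiting for a turn.

      ```shell
      # Demo: concurrent work on branches, combined by merging.
      # Runs entirely in a temporary directory.
      set -e
      tmp=$(mktemp -d)
      cd "$tmp"
      export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
      export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

      git init -q repo
      cd repo
      git symbolic-ref HEAD refs/heads/main
      echo "shared" > notes.txt
      git add notes.txt
      git commit -q -m "base"

      # Two classmates branch off main and work at the same time.
      git checkout -q -b alice main
      echo "alice" > alice.txt
      git add alice.txt
      git commit -q -m "alice: add alice.txt"

      git checkout -q -b bob main
      echo "bob" > bob.txt
      git add bob.txt
      git commit -q -m "bob: add bob.txt"

      # Back on main, both branches merge cleanly; nobody took turns.
      git checkout -q main
      git merge -q --no-edit alice
      git merge -q --no-edit bob
      git log --oneline --graph
      ```

      The first merge fast-forwards and the second produces an ordinary merge commit; since the two branches touched different files, no conflict or manual coordination was needed.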

    1. When most people think of innovation, it’s likely the offering category that comes to mind.

      This is just the offering of different innovative ideas that allows people to spread their ideas and share different designs.

    1. See also, this saying in statistics: All models are wrong, but some are useful

      I have never heard of this saying before, but just the phrase itself helps me understand what it's trying to say. There are many precautions that go into collecting data so that the data can be as precise as possible, yet there are many variables that can vary the data, thus making a model imperfect. Yet there are models that implement methods to limit these imperfections using the knowledge they have at the moment to make the best model possible in the moment.

    1. For context: You join a team and there are 15 different roles and 7 members. On a good day we try to process the three roles on a Scrum Team and get nothing but confusion. Now picture the same Scrum team with 15 roles to choose from. There is not a manager, business partner, or AVERAGE person who is going to memorize all of these, what they do, and the nuances between them. It's just too much. Taking away 1/2 or 3/4 of them would be a better start. I appreciate what you are trying to accomplish; it's just that we must consider the lowest common denominator (the majority) when we introduce these concepts. If we want to gamify this, no problem, it's temporary, but this is asking for more.

    1. What if the Honorable Harvest were the law of the land? And humans—not just plants and animals—fulfilled the purpose of supporting the lives of others?

      I often wonder about this question in the literal sense... what would the effects be of tightening laws and restrictions? Would that really help humans to appreciate the earthly harvest we so often take for granted? I think it's important to ask ourselves if we are supporting our fellow humans when we are taking harvest; we need to be more thoughtful, and to me that is what this line is implying.

    1. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      This manuscript valuably contributes to understanding how mosquito eggs survive desiccation: the authors establish that, during desiccation, the Ae. aegypti egg's TCA cycle and other metabolic pathways change in order to accumulate polyamines, which provide physical protection during desiccation, and to break down lipids, which is required both for accumulating polyamines and for fuelling the recovery process once rehydration occurs (thereby helping the egg hatch after rehydration). The authors also establish that desiccation kills the eggs of another mosquito species, An. stephensi, in which the above processes don't occur to provide protection during desiccation.

      Much of the study uses mass spectrometry of desiccated eggs of Ae. aegypti to determine proteomic changes that occur during desiccation. Interestingly, these included increased superoxide dismutase, glutathione transferase, and thioredoxin peroxidase - all of these regulate the homeostasis of redox processes in cells. These are particularly interesting because, as the authors noted, other studies in different organisms had shown that Reactive Oxygen Species (ROS) are created during desiccation. These results thus suggest that this study would be of interest to those studying desiccation of dauer C. elegans and yeast. Interestingly, recent studies have shown that ROS and glutathione (and other ROS-reducing enzymes) are the key determinants of whether yeast survives or not at extremely high and low temperatures. Some differences were observed, though. For example, unlike in desiccated yeast and C. elegans, Intrinsically Disordered Proteins (IDPs) weren't upregulated during desiccation of the mosquito eggs.

      For the most part, the experiments and analyses are rigorous and technically sound. The presentation and writing are clear, for the most part. But there are some aspects of the analyses and presentation that might benefit from clarifications. I specify these below.

      I support the publication of this work with very minor revisions. The only additional experiment that I can recommend is in point #1 below (doing gel and mass spec on at least one intermediate day during desiccation instead of just at the final day (day 21), which is what has been done). But since mass spectrometry is expensive and time-consuming, this experiment is only suggested but not absolutely necessary. The authors' major conclusions are still valid without this additional experiment. It's just that we don't know how fast the proteomic changes are occurring during desiccation without some timecourse such as the one that I suggest here. Perhaps this point can be mentioned as a deficiency of the current work in the discussion, in lieu of doing the additional experiment.

      Major points:

      1. I was hoping to see the gel run for various days of desiccation to support the conclusion that the proteome remodeling occurs during the desiccation. Right now, the data in Fig. 2 come from a single day - 21 days post desiccation - so it still shows that proteomic remodeling happened during those 21 days but not exactly on which days.
      2. In Fig. 2B: unclear what you're using as a reference to say that "45 proteins increased and 125 proteins decreased in amounts" (L147-148). Relative to fresh eggs that were laid 48 hours ago? Why is this a good reference instead of, say, fresh eggs that are 21 days old (same age as the desiccated eggs)?
      3. L90-L91: "...dried for up to 21 days" But the methods section states that the eggs were dried for 10 days on Whatman filter paper. The 21 days refers to the fact that the authors looked at eggs that were stored for 21 days after the 10 days of desiccation, no? Isn't that why the x-axis goes up to 21 days in Fig. 1C? Please clarify.
      4. Fig. 1C: related to above. What does "0 day post desiccation" mean in the x-axis? Is this 10 days of desiccation on Whatman paper + 0 day of storage? Similarly, what is 12 days or 21 days post desiccation on the x-axis? These are 10 + 12 days and 10 +21 days respectively?
      5. Methods section on desiccation is very unclear (related to above). I cannot determine what the days in Fig. 1C mean based on this methods section and the main text (and caption for fig. 1c).
      6. Fig. 2A: what are "D1" and "D2"? These are two trials of desiccation? For each lane (e.g. D1), did you combine 150 eggs and lysed them together for the single lane in the gel? Specify these points in the caption.
      7. Related to above: Does the "21 day" correspond to 21 days post desiccation (i.e., "21" in the x-axis of Fig. 1C)? Or something else? Please specify in the figure caption.
      8. L145-146: What is emPAI score? Give a one-sentence explanation.

      Significance

      I support the publication of this work with very minor revisions. The only additional experiment that I can recommend is in point #1 (doing gel and mass spec on at least one intermediate day during desiccation instead of just at the final day (day 21), which is what has been done). But since mass spectrometry is expensive and time-consuming, this experiment is only suggested but not absolutely necessary. The authors' major conclusions are still valid without this additional experiment. It's just that we don't know how fast the proteomic changes are occurring during desiccation without some timecourse such as the one that I suggest here. Perhaps this point can be mentioned as a deficiency of the current work in the discussion, in lieu of doing the additional experiment.

    1. Critical ignoring is the ability to choose what to ignore and where to invest one’s limited attentional capacities. Critical ignoring is more than just not paying attention – it’s about practising mindful and healthy habits in the face of information overabundance.

      I am glad to have read this. I never knew there was a name for something I chose to start doing a few years ago. I read many views of the news, even opposing views, to see if I am understanding something wrong or being misled by just one source. But when the story does not stick with me, or is a topic where engaging will just do no help to anything, I just scroll on by. I had no idea there was a name for it: ignoring topics where I feel I could not change my mind or yours. The best example is almost any argument about the statement "defund the police": it immediately triggers anyone, but very opposing meanings of it exist. I immediately said, well, that's a terrible phrase. It's catchy, but misleading and divisive, and almost every story on that topic is factually wrong at some point. It rallied people, but not for the good, and both political sides have had to walk back statements. It was a marketing genius move, but terrible for society. I critically ignore any news story with those words.

    2. Critical ignoring is the ability to choose what to ignore and where to invest one’s limited attentional capacities. Critical ignoring is more than just not paying attention – it’s about practising mindful and healthy habits in the face of information overabundance.

      I am an advocate for this form of practice when it comes to misinformation and what information to take lightly or heavily invest in. With the amount of misinformation that circles around on a daily, we have to remember to disregard and not focus on false information, but instead put all our focus on accurate news that needs to be shared and addressed. This is a practice that would benefit us in many different ways.

    3. Their business model auctions off our most precious and limited cognitive resource: attention.

      I think that this is something everyone needs to consider when encountering mis/disinformation on the internet. Most false and bad news on the internet is written in a way to cause an EMOTIONAL response. But something that is less talked about is that sometimes this causes a NEGATIVE emotional response, i.e., "I disagree strongly with this". The person who sees mis/disinformation on the internet that they strongly disagree with WILL STILL SHARE IT!! And add their thoughts on why it's wrong/harmful/bad. But this defeats the purpose, because regardless of your comments it's still being SPREAD. So if you see something on the internet that you know is wrong/false/fake, just click the "report" button and let the moderators do their jobs.

  8. Local file Local file
    1. e net effect of these steps forward is more information in the hands ofcitizens. In the twenty-first century, access to information has reached a newhigh. e question we now face is how to proceed: How do we identify, withour new tools and options, what information is reliable?

      It's cool to think about how this is the time when we have the most access to information we've ever had compared to the past, but reliability has also become a huge thing to consider. We used to HAVE to rely on the news outlets, but now we use news outlets as sources to DECIDE what to rely on.

    Annotators

    1. it's better than RSS but RSS just seems a better brand-name

      Isn't that pretty interesting? You'd think it would be the other way around.

      In fact, what if it is the other way around? What if the failure of classic/legacy Web feeds has to do with power users' insistence on calling it "RSS"?

    1. Little did I know that Tesla had updated to a new UI just a few days earlier. It’s been more than a bit controversial: A UX designer had just walked into a Tesla bar with a hornets nest on the floor.

      Tesla is always updating and progressing, but not everyone is aware of it.

    1. assessment must be done carefully and sparingly lest students become so concerned about their achievement (how good they are at doing something — or, worse, how their performance compares to others’) that they’re no longer thinking about the learning itself.

      I catch myself worrying more about how I do on exams compared to other students rather than just enjoying learning for the sake of knowledge sometimes. I think it is okay to be tested on your skills and what you have learned, but when students are put into a lower percentile and when scores are posted, it's very intimidating and creates a mentality to try to achieve better than others rather than collaborative learning.

    1. We don’t want precious minerals and metals to be the new plastic. E-waste is not pollution, nor is it waste - it’s a vital resource we are only just starting to value in full.

      Much of the world's E-waste ends up in developing countries, including many in Africa. In the name of 'recycling' and 'donation', developed countries export E-waste to developing countries, where people pick out and sell valuable parts such as copper and gold. E-waste that is improperly incinerated generates toxic chemicals, and E-waste that is left unattended or landfilled contaminates soil and water with mercury and lead.

      https://eridirect.com/blog/2017/06/why-is-e-waste-being-shipped-to-developing-countries/#:~:text=Most%20of%20the%20e%2Dwaste,their%20people%20or%20the%20environment.

    1. How do I store when coming across an actual FACT? Let's say I am trying to absorb a 30min documentary about the importance of sleep and the term human body cells is being mentioned, I want to remember what a "Cell" is so I make a note "What is a Cell in a Human Body?", search the google, find the definition and paste it into this note, my concern is, what is this note considered, a fleeting, literature, or permanent? how do I tag it...

      reply to u/iamharunjonuzi at https://www.reddit.com/r/Zettelkasten/comments/12bvcmn/how_do_i_store_when_coming_across_an_actual_fact/

      How central is the fact to what you're working at potentially developing? Often for what may seem like basic facts that are broadly useful, but not specific to things I'm actively developing, I'll leave basic facts like that as short notes on the source/reference cards (some may say literature notes) where I found them rather than writing them out in full as their own cards.

      If I were a future biologist, as a student I might consider that I would soon know really well what a cell was and not bother to have a primary zettel on something so commonplace unless I was collecting various definitions to compare and contrast for something specific. Alternately as a non-biologist or someone that doesn't use the idea frequently, then perhaps it may merit more space for connecting to others?

      Of course you can always have it written along with the original source and "promote" it to its own card later if you feel it's necessary, so you're covered either way. I tend to put the most interesting and surprising ideas into my main box to try to maximize what comes back out of it. If there were 2 more interesting ideas than the definition of cell in that documentary, then I would probably leave the definition with the source and focus on the more important ideas as their own zettels.

      As a rule of thumb, for those familiar with Bloom's taxonomy in education, I tend to leave the lower level learning-based notes relating to remembering and understanding as shorter (literature) notes on the source's reference card and use the main cards for the higher levels (apply, analyze, evaluate, create).

      Ultimately, time, practice, and experience will help you determine for yourself what is most useful and where. Until you've developed a feel for what works best for you, just write it down somewhere and you can't really go too far wrong.

    1. Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.

      There are many bots on different social media platforms I use. And just like this says, they're used to imitate a human posting. There are different reasons for what a bot can be posting, like on Twitter, there are certain bot accounts that I see used purely for entertainment, like a bot that will post a random meme every hour, or a bot that posts a line from the Shrek script every minute. And there can be other bots used for moderation like on Discord, or Reddit. But these are mostly used to make a human's jobs on social media just a little bit easier, whether it's for entertainment, moderation, or something completely different.
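The posting loop such bots share can be sketched in a few lines. This is a minimal illustration only; `publish` is a hypothetical stand-in, since each real platform (Twitter, Discord, Reddit) has its own posting API:

```python
import time

# Canned items the bot cycles through -- stand-ins for memes, quotes,
# or lines from the Shrek script.
POSTS = ["meme 1", "meme 2", "meme 3"]

def publish(text, log):
    """Hypothetical stand-in for a real platform API call."""
    log.append(text)

def run_bot(iterations, interval_seconds=0, log=None):
    """Post one item per interval, cycling through POSTS in order."""
    log = [] if log is None else log
    for i in range(iterations):
        publish(POSTS[i % len(POSTS)], log)
        time.sleep(interval_seconds)  # an hourly bot would sleep 3600s
    return log
```

The same skeleton covers both the entertainment bots and the moderation bots described above; only the body of `publish` (and what triggers it) changes.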

    1. the green team got a point excellent okay so what's a way that you guys could fix this sentence and I want to hear from someone I haven't heard from yet how can we fix this sentence just feel 00:12:33 free to shout it out comma but we also have to have a fan voice right right say like I'm not a fan of jazz music comma [Music]

      It seemed like you got responses, but usually shouting out answers is pretty reserved for the kids who always participate, rather than those who struggle with speaking up! I know it's cliche, but maybe having them turn and talk and then cold-calling would be more helpful!

    1. The issue at play in the AI question, or the question of tempering our growth in general, isn’t just that our technology is built without higher values that can mitigate its excesses. It’s that culturally we lack a story as to why values even matter to begin with. It’s futile to appeal to ethics in this context, because the ethics aren’t embedded at a deep enough level to counter powerful incentive structures. They aren’t worth dying for, because the system doesn’t value them, it only values quantity.

      Key observation

      - Quantity is all modernity values
      - Quality is thrown out the window
      - Later, the author connects
        - quantity to the Cartesian worldview, which seeks to measure everything
        - and quality to the Idealist worldview, which elevates consciousness over physicalism and materialism
      - (Destructive) growth is an outcome of the Cartesian worldview

    2. To see why a humanistic stance isn’t enough to create ethical technology, let’s imagine for a moment that Moloch is more than just a metaphor. Instead, it’s an unseen force, an emergent property of the complex system we create between all our interactions as human beings. Those interactions are driven by behaviors, memes, ideas and cultural values which are all based on  what we think is real and what we feel is important.

      Paraphrase

      - Why isn't a humanistic stance enough to create ethical technology?
      - Imagine that Moloch is an unseen force,
        - an emergent property of the complex system we create between all our interactions as human beings.
      - Those interactions are driven by:
        - behaviors,
        - memes,
        - ideas, and
        - cultural values, which are all based on
          - what we think is real and
          - what we feel is important.

    1. See a new photo of a red panda every hour

      I am a huge fan of the "(animal) a (time)" bots. Some of my favorites range from fish every hour to every beanie baby an account dedicated to posting a beanie baby whose birthday corresponds to the current day. While these are all examples of friendly bots, along with the red panda one mentioned, I think it's interesting that this kind of format does not necessarily guarantee something to be friendly or positive all of the time, especially considering what is contained within the breadth of information they are taking from. A primary example is with the hourly Sylvia Plath bot which posts short snippets of the late poet's work throughout the day. This is a positive bot for the most part, until it starts posting the parts of her work where she uses the n-word. Ignoring contemporary discussions about censorship and the legacy of writers in history, I think that this is just something interesting to think about in this bot conversation.

    1. and he thought we ought to take advantage of it.  Then he walked behind the podium and started the day’s lecture.

      This is a very interesting sentence! It's just the end of the paragraph, but it sums up the point: all the actions up to this moment were in service of this statement. This is something that has to be faced; no matter how bloody, no matter what camp or opinion, this is what medical students should do.

    1. Indeed, though the classic network era is defined by NBC’s, CBS’s and ABC’s unprecedented stability and control of the medium, the history of this era has been most productively written through examinations of the unevenness, struggles, tensions, and fissures that consistently troubled this veneer. Take, for instance, the “national”‐ness of the Big Three networks. As my Heartland TV argues, just as the networks became truly national in audience/market reach (not extending reliable reach to the rural Plains and deep South until the mid‐1960s) they simultaneously became intensely “local,” “concentrating all network production and business operations in New York City and Los Angeles by the late 1950s” (Johnson 2008b: 37). Through the 1960s broad public debate raged over questions of the medium and the “national purpose.” These debates point to a broader conceptualization of US network TV of the classic era as a “cultural forum” as Newcomb and Hirsch (1987b) argued, situated between understanding TV as a mere conduit of communication from sender to receiver and thinking of TV as a discrete set of texts. Conceptualizing television as a cultural forum at the interstices of industry/economics, texts/program address, social/historical context, and audience reception underscores the Big Three’s relevance and even requisite centrality in navigating “our most prevalent concerns, our deepest dilemmas” (Newcomb and Hirsch 1987b: 459). In an era characterized by the Cold War, John F. Kennedy’s New Frontier, the civil rights movement, and the Vietnam War, government and industry officials, scholars and critics, and the public alike repeatedly questioned the networks’ commitments to balancing mass‐audience entertainment appeals and “consensus” programming with more challenging, riskier “quality” and “enlightened” program address. Would television be a forum emphasizing continuity and the integration of past cultural forms, vernacular traditions, and values? 
Or, would it stand “above” popular culture (Ouellette 2002)? Could or should it do both?

      This passage talks about the challenges and conflicts that existed within the television industry during the classic network era. Despite being popular and reaching a wide audience, there were debates about the purpose of television and how to balance entertaining shows with more thought-provoking ones. The passage emphasizes the importance of understanding television as a cultural forum that reflects and influences our society's values and concerns. As viewers, it's important to be aware of the power dynamics that exist in the creation and distribution of television programming.

    1. Reviewer #3 (Public Review):

      The authors investigated whether reactivation of wake EEG patterns associated with left- and right-hand motor responses occurs in response to sound cues presented during REM sleep.

      The question of whether reactivation occurs during REM is of substantial practical and theoretical importance. While some rodent studies have found reactivation during REM, it has generally been more difficult to observe reactivation during REM than during NREM sleep in humans (with a few notable exceptions, e.g., Schonauer et al., 2017), and the nature and function of memory reactivation in REM sleep is much less well understood than the nature and function of reactivation in NREM sleep. Finding a procedure that yields clear reactivation in REM in response to sound cues would give researchers a new tool to explore these crucial questions.

      The main strength of the paper is that the core reactivation finding appears to be sound. This is an important contribution to the literature, for the reasons noted above.

      The main weakness of the paper is that the ancillary claims (about the nature of reactivation) may not be supported by the data.

      The claim that reactivation was mediated by high theta activity requires a significant difference in reactivation between trials with high theta power and trials with low theta, but this is not what the authors found (rather, they have a "difference of significances", where results were significant for high theta but not low theta). So, at present, the claim that theta activity is relevant is not adequately supported by the data.

      The authors claim that sleep replay was sometimes temporally compressed and sometimes dilated compared to wakeful experience, but I am not sure that the data show compression and dilation. Part of the issue is that the methods are not clear. For the compression/dilation analysis, what are the features that are going into the analysis? Are the feature vectors patterns of power coefficients across electrodes (or within single electrodes?) at a single time point? or raw data from multiple electrodes at a single time point? If the feature vectors are patterns of activity at a single time point, then I don't think it's possible to conclude anything about compression/dilation in time (in this case, the observed results could simply reflect autocorrelation in the time-point-specific feature vectors - if you have a pattern that is relatively stationary in time, then compressing or dilating it in the time dimension won't change it much). If the feature vectors are spatiotemporal patterns (i.e., the patterns being fed into the classifier reflect samples from multiple frequencies/electrodes / AND time points) then it might in principle be possible to look at compression, but here I just could not figure out what is going on.

      For the analyses relating to classification performance and behavior, the authors presently show that there is a significant correlation for the cued sequence but not for the other sequence. This is a "difference of significances" but not a significant difference. To justify the claim that the correlation is sequence-specific, the authors would have to run an analysis that directly compares the two sequences.
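The reviewer's point, that "significant in one condition, not in the other" is not itself a significant difference, can be made concrete with a direct comparison test. The sketch below uses made-up data and a simple permutation test on the difference between two correlations; it illustrates the general idea, not the specific analysis being requested on the authors' dataset:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def perm_test_r_difference(pairs_a, pairs_b, n_perm=2000, seed=0):
    """Permutation test of H0: the two correlations are equal.

    Pools the (x, y) pairs, randomly reassigns them to the two
    conditions, and asks how often the shuffled |r_A - r_B| is at
    least as large as the observed difference.
    """
    rng = random.Random(seed)
    observed = abs(pearson_r(*zip(*pairs_a)) - pearson_r(*zip(*pairs_b)))
    pooled = pairs_a + pairs_b
    n_a = len(pairs_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        r_a = pearson_r(*zip(*pooled[:n_a]))
        r_b = pearson_r(*zip(*pooled[n_a:]))
        if abs(r_a - r_b) >= observed:
            hits += 1
    return hits / n_perm
```

Only a small p-value from a direct test like this would license the claim that the cued-sequence correlation differs from the other-sequence one.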

    1. Peer review report

      Title: If it’s there, could it be a bear?

      version: 2

      Referee: Julie Sheldon

      Institution: University of Tennessee

      email: jsheldo3@tennessee.edu

      ORCID iD: https://orcid.org/0000-0003-2813-3027


      General assessment

      This manuscript is a collection of statistical analyses attempting to show that sasquatch sightings correlate with black bear populations, and humans may be mistaking black bears for sasquatch.

      The author effectively introduces the topic, provides adequate background on sasquatch, but does not provide much on black bear populations, natural history, or human-bear interactions.

      The author performs several statistical tests to support the findings. I am not a statistician, but the tests seem valid. The data used for the statistical analyses, however, are not ideal. The resource (Hristienko and McDonald) provided for obtaining black bear populations was published in 2007 and the data was from 2001 via “subjective extrapolations” and “expert opinions”. Thus, this resource is outdated and suboptimal as black bear populations have changed over time. A more updated resource with more scientific methods in data collection would improve this manuscript since having as accurate as possible bear population estimates is very important for the goal of this study. The author notes this briefly in the limitations. If the human population and sasquatch sighting data matched up with the dates of bear population estimates, it would be more valid (just outdated), but there are no date ranges of human or sasquatch data provided in the manuscript.

      In the results, the maps of bigfoot sightings and black bear population do not appear to correlate visually, which downplays the value of the statistical analysis. The stats should support the visual data and vice versa if the study is sound. Perhaps more updated bear population data will improve this.

      The discussion is short and briefly brings up important points that can invalidate the study without much discussion or argument supporting the findings of this study.


      Essential revisions that are required to verify the manuscript

      I recommend the following to improve the manuscript enough to consider it valid:

      Date-match the bear population, human population, and bigfoot sightings to improve the validity of the data analysis. One way to do this is to use data from the same 10-year period only.

      Improve the sources of bear population information.

      Expand the discussion to include reasons and ideas for why the maps don't line up the way the statistical analyses do – i.e., bears in Florida and the southeast.


      Other suggestions to improve the manuscript

      I recommend providing some information on black bear population/natural history in the introduction – i.e., what sort of habitats black bears live in. Consider the possibility that sasquatch sightings may correlate with a type of habitat (i.e., forest) that happens to also correlate with black bear habitat. This may support the idea that sasquatch sightings are bears, or that sasquatch also likes to live in habitats similar to bears'.

      The author reports that black bears are not prominent in Florida; however, there are > 4,000 black bears in Florida, reportedly large ones, and it may be worth considering this as a reason for the concentration of sasquatch sightings in Florida as seen on the map. More accurate black bear data, as discussed above, may help improve this aspect. Experientially, there is also a high concentration of black bears in the southeastern US, where there is also a high concentration of humans and human-bear encounters. The author does not discuss this alongside the number of sasquatch sightings in this region as seen on the map.


      Decision

      Verified with reservations: The content is academically sound but has shortcomings that could be improved by further studies and/or minor revisions.


    1. subspaces of a normed spaceX (of any dimension)

      I just discovered that subspaces of normed (vector) spaces are quite different from subspaces of metric spaces.

      1. A subspace of a metric space only has to be a metric space itself.
      2. A subspace of a vector space must retain the vector space structure. Viewed merely as a metric space, a subset need not satisfy this.

      Also note that this holds for spaces of any dimension.
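A concrete example of the distinction (my own, not from the text): the unit circle is a perfectly good metric subspace of the plane but not a linear subspace.

```latex
% S^1 is a metric subspace of (\mathbb{R}^2, \|\cdot\|) but not a vector subspace:
S^1 = \{\, x \in \mathbb{R}^2 : \|x\| = 1 \,\}
% - as a subset of a metric space, d(x,y) = \|x - y\| restricts to S^1,
%   so (S^1, d) is itself a metric space;
% - but 0 \notin S^1, and (1,0) + (0,1) = (1,1) \notin S^1,
%   so S^1 is not closed under addition and is not a linear subspace.
```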


    1. I want to punch him.  I’d dearly love to tell him that just because someone calls you a creeper, or creepy, doesn’t make you a bad person, but if every female friend you have is telling you that you make them uncomfortable then you are the fucking problem.

      It's interesting how all the women in the friend group are on the same page but this guy does not get it. He seems to be taking advantage of the friend group.

    1. The following bonuses should be included if at all possible

      I think the bonuses will be very helpful; having an incentive to do well and work hard on an assignment really aids motivation. That said, rewarding students for positive behavior is a little iffy for me. I know it's pretty effective for the most part, but kids should learn that these are things they should be doing without a reward, beyond feeling accomplished. But, alas, everything is nuanced and has pros and cons.


    1. so the optionality value I get allows me to still access real value but I'm destroying real value in the process so I'm going to say well you're not destroying it because you're making Lumber well yes Lumber is actually radically less complex than a tree that 00:49:06 has less total types of value that it does so we're converting the self-organizing self-repairing complex world and into an increasingly simple or complicated fragile world that has less 00:49:18 types of value to less types of actors the tree has value of many different types to many different types of actors right so you can't just say well it's carbon sequestration but no it's it's 00:49:31 stabilizing topsoil it's yeah a million things biodiversity Etc

      A tree's optionality value: the tree is more complex than lumber, so converting it destroys real value.

    1. solicit feedback from your users

      Imagine how beautiful the world of technology would be if more companies did this! Not only could accessibility concerns be addressed right away, but it would also allow users to have more of a say about the sites, tools, and apps that they frequent. I can't count how many times I've been forced to use a website for school (ex: Spire) without any say, and then just having to deal with the extremely outdated and challenging user interface. It would be so amazing if more tools, apps, and sites allowed us to provide feedback about what it's really like to use their product.

    1. Author Response

      Reviewer #2 (Public Review):

      The authors use data from 3 cross-sectional age-stratified serosurveys on Enterovirus D68 from England between 2006 and 2017 to examine the transmission dynamics of this pathogen in this setting. A key public health challenge on EV-D68 has been its implication in outbreaks of acute flaccid myelitis over the past decade, and past circulation patterns and population immunity to this pathogen are not yet well-understood. Towards this end, the authors develop and compare a suite of catalytic models as fitted to this dataset and incorporate different assumptions on how the force of infection varies over time and age. They find high overall EV-D68 seroprevalence as measured by neutralizing antibodies, and detect increased transmission during this time period as measured by the annual probability of infection and basic reproduction number. Interestingly, their data indicate very high seroprevalence in the youngest children (1 year-olds), and to accommodate this observation, the authors separate the force of infection in this age class from the other groups. They then reconstruct the historical patterns of EV-D68 circulation using their models and conclude that, while the serologic data suggest that transmissibility has increased between serosurvey rounds, additional factors not accounted for here (e.g., changes in pathogenicity) are likely necessary to explain the recent emergence of AFM outbreaks, particularly given the broader age-profile of reported AFM cases. The Discussion mentions important current unknowns on the biological interpretation of EV-D68 neutralizing antibody titers for protection against infection and disease. The analysis is rigorous and the conclusions are well-supported, but a few aspects of the work need to be clarified and extended, detailed below:

      1) Due to the lack of a clear single cut-point for seropositivity on this assay, the authors sensibly present results for two cut-points in the main text (1:16 and 1:64). While some differences that stem from using different cut-points are fully expected (i.e., seroprevalence being higher using the less stringent cut-point), differences that are less expected should be further discussed. For instance, it was not clear in Figure 2 why the annual probability of infection decreased after 2010 using the 1:64 cut-point, while it continued to increase using the 1:16 cut-point. It would also be helpful to explain why overall seroprevalence and R0 continue to increase over this time period using the 1:64 cut-point. Lastly, it would be useful to see the x-axis in Figure 4 extended to the start of the time period that FOI is estimated, with accompanying credible intervals.

      For the discussion on differences between the two cut-offs, please see response to essential comment 1.

      Extending the x-axis before 2006 in Figure 4 is not possible. Estimates of the overall seroprevalence at a year y require FOI estimates up until y-40. This implies the first estimates we can provide are for 2006.

      Credible intervals have been added to Figure 4.

      2) Additional context of EV-D68 in the study setting of England would be useful. While the Introduction does mention AFM cases "in the UK and elsewhere in Europe" (line 53), a summary of reported data on EV-D68/AFM in England prior to this study would provide important context. The Methods refers to "whether transmission had increased over time (before the first reported big outbreak of EV-D68 in the US in 2014)" (lines 133-134), rather than in this setting. It would be useful to summarize the viral genomic data from the region for additional context - particularly since the emergence of a viral clade is highlighted as a co-occurrence with the increased transmissibility detected in this analysis.

      We have added a figure (new Figure 1 – figure supplement 1) showing the annual number of EV-D68 detections reported by Public Health England from 2004 to 2020.

      We have also added the following text to the introduction: “Similarly, in the UK, reported EV-D68 virus detections also show a biennial pattern between 2014 and 2018 (Figure 1 – figure supplement 1).”

      We have also amended the sentence in the Methods.

      Finally, below is a screenshot of the Nextstrain tree for EV-D68 based on the VP1 region and with tips representing sequences from the UK (light blue) and European countries in colour. There is a lot of mixing between sequences from different regions, indicating widespread transmission and small regional clustering. We have added the following text to the Discussion: “Reported EV-D68 outbreaks in 2014 and 2016 were due to clade B viruses, while the 2018 outbreaks were reported to be linked to both B3 and A2 clade viruses in the UK (10), France (32) and elsewhere.”

      Reviewer #3 (Public Review):

      In the proposed manuscript, the authors use cross-sectional seroprevalence data from blood samples that were tested for evidence of antibodies against D68 for the UK. Samples were collected at 3 time points from individuals of all ages. The authors then fit a suite of serocatalytic models to explain the changing level of seropositivity by age. From each model they estimate the force of infection and assess whether there have been changes in transmissibility over the study period. D68 is an important pathogen, especially due to its links with acute flaccid myelitis, and its transmission intensity remains poorly understood.

      Serocatalytic models appear to be appropriate here. I have a few comments.

      The biggest challenge to this project is the difficulty in assigning individuals as seronegative or seropositive. There is no clear bimodal distribution in titers that would allow obvious discrimination and apparently no good validation data with controls with known serostatus. The authors tackle this problem by presenting results to four different cut-points (1:16 to 1:128) - resulting in seropositivity ranging from around 50% to around 80%. They then run the serocatalytic models with two of these (1:16 and 1:64) - leading to a range of FoI values of 0.25-0.90 for the 1 year olds and 0.05-0.25 for older age groups (depending on model and cutpoint). This represents a substantial amount of variability. While I certainly see the benefit of attacking this uncertainty head on, it does ultimately limit the inferences that can be made about the underlying risk of infection in UK communities, except that it's very uncertain and possibly quite high.
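To make the quoted FOI ranges concrete: under a simple constant-hazard catalytic model (a sketch of the model family under discussion, not the authors' actual fitted models), age-specific seroprevalence follows from the cumulative hazard, with a separate hazard for the first year at risk:

```python
import math

def seroprevalence(age, foi_first_year, foi_older):
    """Expected proportion seropositive at a given age under a simple
    catalytic model: infections arrive as a Poisson process with one
    hazard in the first year at risk and another thereafter."""
    if age < 1:
        return 0.0  # <1s excluded (maternal antibodies), as in the study
    cumulative_hazard = foi_first_year + foi_older * (age - 1)
    return 1.0 - math.exp(-cumulative_hazard)
```

For example, a first-year FOI of 0.9 already puts 1-year-olds near 60% seropositive, which shows how the very high seroprevalence in the youngest children forces the separate FOI for that age class.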

      I find the force of infection in 1 year olds very high (with a suggestion that up to 75% get infected within a year) and difficult to believe, especially as the force of infection is assumed much lower for all other ages.

      The authors exclude all <1s due to maternal antibodies, which seems sensible, however, does this mean that it is impossible for <1s to become infected in the model? We know for other pathogens (e.g., dengue virus) with protection from maternal antibodies that the protection from infection is gone after a few months. Maybe allowing for infections in the first year of life too would reduce the very large, and difficult to believe, difference in risk between 1 year olds and older age groups. I suspect you wouldn't need to rely on <1 serodata - just allow for infections in this time period.

      Relatedly, would it be possible to break the age data into months rather than years in these infants to help tease apart what happens in the critical early stages of life.

      Yes. We have added two figures (new Figures 1C and 1D) showing the prevalence of antibodies in children <1 yo. We show these data for the three serosurveys combined, because the number of individuals per month of age is very small.

      One of the major findings of the paper is that there is a steadily increasing R0. This again is difficult to understand. It would suggest there are either year on year increases in inherent transmissibility of the virus through fitness changes, or year on year increases in the mixing of the population. It would be useful for the authors to discuss potential explanations for an inferred gradual increase in R0.

      We have removed the estimates of R0 from the manuscript.

On a similar note, I struggle to reconcile evidence of a stable or even small drop in FoI in the 1:64 models 4 and 5 from 2010/11 (Figure 3) with steadily increasing R0 in this period (Figure 4). Is this due to changes in the susceptibility proportion? It would be good to understand if there are important assumptions in the Farrington approach that may also contribute to this discrepancy.

We have removed the estimates of R0 from the manuscript and only present the reconstruction of the annual number of new infections per age class and year (new Figure 5). We think this measure is better suited to the discussion of the results.

In addition, when using the classical expression R0(t) = 1 / (1 − S(t)), with S(t) the annual proportion seropositive, the high seroprevalence estimates (new Figure 4) result in extremely high estimates of the basic reproduction number (median ranges: 11.6–29.7 for 1:16 and 3.3–7.6 for 1:64 during the period 2006 to 2017).
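For reference, the classical endemic-equilibrium relation quoted above can be sketched as follows (a hypothetical helper for illustration; it assumes homogeneous mixing and lifelong immunity):

```python
def r0_from_seroprevalence(s):
    """Classical endemic-equilibrium relation R0 = 1 / (1 - S), where S is
    the proportion seropositive. Assumes homogeneous mixing and lifelong
    immunity; illustrative helper, not the manuscript's estimator."""
    if not 0.0 <= s < 1.0:
        raise ValueError("S must lie in [0, 1)")
    return 1.0 / (1.0 - s)

# Seroprevalences above 90% push R0 past 10 under this relation, which is
# why the high seroprevalence at the 1:16 cutpoint yields extreme estimates.
r0_mid = r0_from_seroprevalence(0.75)  # 4.0
```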

We had previously used the Farrington approach as it is adapted to cases where the force of infection differs between age classes.

      The R0 estimates (Figure 4) should also be presented with uncertainty.

R0 is no longer presented; estimates of overall seroprevalence are now presented with uncertainty.

      Finally, given the substantial uncertainty in the assay, it seems optimistic to attempt to fit annual force of infections in the 30 year period prior to the start of the sampling periods. I would be tempted to include a constant lambda prior to the dates of the first study across the models considered.

      We thank the reviewers for the suggestion.

We implemented this change (constant FOI before 2006) in the previous models without maternal antibodies and the result for the random-walk-based models was that the variance of the random walk was estimated over a very short period, thus resulting in a rather non-smoothed FOI.

      Implementing this change with the new models with maternal antibodies and random-walk on the FOI was technically a bit complex. We therefore kept the simple random-walk over the whole period and added the following paragraph to the Discussion:

“It is important to interpret the results for the estimates of the FOI over time from our analysis under the assumptions of the models. First, as the best model uses a random walk on the FOI, the change in transmission that we infer happens continuously over several years. In reality, this may have occurred differently (e.g. in a shorter period of time). Our ability to recover more complex changes in transmission is limited by the data available. It would not be surprising if EV-D68 has exhibited biennial (or longer) cycles of transmission in England over the last few years, as has been shown in the US (7) and is common for other enteroviruses (30). However, it is difficult to recover changes at this finer time scale with serology data unless sampling is very frequent (at least annual). Therefore, our study can only reveal broader long-term secular changes. Second, interpretation of the results before 2006 must be avoided for two reasons. On the one hand, as we go backwards in time, there is more uncertainty about the time of seroconversion of the individuals informing the estimates of the FOI. On the other hand, because age and time are confounded in cross-sectional seroprevalence measurements, the random walk on time may account for possible differences in the FOI through age (possibly higher in the youngest age classes, and lowest in the oldest), which are not explicitly accounted for here. This may explain the decline in FOI when going backwards in time before the first cross-sectional study in 2006.”

  9. Mar 2023
    1. However, some students are more accustomed to studying for exams by memorizing information rather than understanding it. (It's not their fault; that's what they were asked to do in the past)

      I always appreciate when professors make us interact with the material more than just memorizing vocabulary words. I’ve definitely noticed with classes that require problem-solving strategies rather than pure memorization on homework and exams, I always finish the term feeling like I actually learned/remembered things. When tests require straight memorization I often forget most of the material shortly after the exam is over.

    1. The word Fascism has now no meaning except in so far as it signifies ‘something not desirable

      This is definitely true. If I'm being honest, I don't fully know what fascism is, I just know that it's bad.

    1. Reviewer #3 (Public Review):

      In the proposed manuscript, the authors use cross-sectional seroprevalence data from blood samples that were tested for evidence of antibodies against D68 for the UK. Samples were collected at 3 time points from individuals of all ages. The authors then fit a suite of serocatalytic models to explain the changing level of seropositivity by age. From each model they estimate the force of infection and assess whether there have been changes in transmissibility over the study period. D68 is an important pathogen, especially due to its links with acute flaccid myelitis, and its transmission intensity remains poorly understood. Serocatalytic models appear to be appropriate here. I have a few comments.

      The biggest challenge to this project is the difficulty in assigning individuals as seronegative or seropositive. There is no clear bimodal distribution in titers that would allow obvious discrimination and apparently no good validation data with controls with known serostatus. The authors tackle this problem by presenting results to four different cut-points (1:16 to 1:128) - resulting in seropositivity ranging from around 50% to around 80%. They then run the serocatalytic models with two of these (1:16 and 1:64) - leading to a range of FoI values of 0.25-0.90 for the 1 year olds and 0.05-0.25 for older age groups (depending on model and cutpoint). This represents a substantial amount of variability. While I certainly see the benefit of attacking this uncertainty head on, it does ultimately limit the inferences that can be made about the underlying risk of infection in UK communities, except that it's very uncertain and possibly quite high.

      I find the force of infection in 1 year olds very high (with a suggestion that up to 75% get infected within a year) and difficult to believe, especially as the force of infection is assumed much lower for all other ages.

      The authors exclude all <1s due to maternal antibodies, which seems sensible, however, does this mean that it is impossible for <1s to become infected in the model? We know for other pathogens (e.g., dengue virus) with protection from maternal antibodies that the protection from infection is gone after a few months. Maybe allowing for infections in the first year of life too would reduce the very large, and difficult to believe, difference in risk between 1 year olds and older age groups. I suspect you wouldn't need to rely on <1 serodata - just allow for infections in this time period.

      Relatedly, would it be possible to break the age data into months rather than years in these infants to help tease apart what happens in the critical early stages of life.

      One of the major findings of the paper is that there is a steadily increasing R0. This again is difficult to understand. It would suggest there are either year on year increases in inherent transmissibility of the virus through fitness changes, or year on year increases in the mixing of the population. It would be useful for the authors to discuss potential explanations for an inferred gradual increase in R0.

On a similar note, I struggle to reconcile evidence of a stable or even small drop in FoI in the 1:64 models 4 and 5 from 2010/11 (Figure 3) with steadily increasing R0 in this period (Figure 4). Is this due to changes in the susceptibility proportion? It would be good to understand if there are important assumptions in the Farrington approach that may also contribute to this discrepancy.

      The R0 estimates (Figure 4) should also be presented with uncertainty.

      Finally, given the substantial uncertainty in the assay, it seems optimistic to attempt to fit annual force of infections in the 30 year period prior to the start of the sampling periods. I would be tempted to include a constant lambda prior to the dates of the first study across the models considered.

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



      Reply to the reviewers

      General comments:

      We thank the reviewers for recognizing the importance of our work and for their supportive and insightful comments.

Our planned revisions focus on addressing all the comments, especially on further elucidating the molecular mechanism underpinning our observations and their consequences for cell phenotypes, and on reproducing our observations in an additional cell line. Our revision plan is backed up in many cases by preliminary data.

      Our submitted manuscript demonstrated that DNMT3B’s recruitment to H3K9me3-marked heterochromatin was mediated by the N-terminal region of DNMT3B. Data generated since submission suggest that DNMT3B binds indirectly to H3K9me3 nucleosomes through an interaction mediated by a putative HP1 motif in its N-terminal region.

Specifically, we have found that DNMT3B can pull down HP1a and H3K9me3 from cell extracts and that this interaction is abrogated when we remove the N-terminal region of DNMT3B (revision plan, figure 1a). Using purified proteins in vitro, we have shown binding of DNMT3B to HP1a that is dependent on the presence of DNMT3B's N-terminus, suggesting that the interaction with HP1a is direct and that this mediates DNMT3B's recruitment to H3K9me3 (revision plan, figure 1b). AlphaFold multimer modelling identified that DNMT3B's N-terminus binds the interface of an HP1 dimeric chromoshadow domain through a putative HP1 motif. Two point mutations in this motif ablate DNMT3B's interaction with HP1a in vitro (revision plan, figure 1b - DNMT3B L166S I168N).

We propose to further characterize DNMT3B's interaction with HP1a in vitro and determine the significance of these observations in cells by microscopy in a revised manuscript. Together with the other proposed experiments and analyses, we believe the extra detail regarding the molecular mechanisms through which DNMT3B is recruited to H3K9me3 heterochromatin will help address the reviewers' comments.

      Point by point response:

      We have reproduced the reviewer’s comments in their entirety and highlighted them in blue italics.

Reviewer #1 (Evidence, reproducibility and clarity (Required)):

Summary:

      This paper by Francesca Taglini, Duncan Sproul, and their coworkers, examines the mechanisms of DNA methylation in a human cancer cell line. They use the human colorectal cancer line HCT116, which has been very widely used to look at epigenetics in cancer, and to dissect the contribution of different proteins and chromatin marks to DNA methylation.

      The authors focus on the role of the de novo methyltransferase DNMT3B. It has been shown in ES cells in 2015 that its PWWP domain directs it to H3K36me3, typically found in gene bodies. More recently, the authors showed similar conclusions in colorectal cancer (Masalmeh Nat Comm 2021). Here they examine, more specifically, the role of the PWWP. The conclusions are described below.

      Major comments:

• 1-I feel that this paper has several messages that are somewhat muddled. The main message, as expressed in the title and in the model, is that the PWWP domain of DNMT3B actively drags the protein to H3K36me3-marked regions. Inactivation of this domain by a point mutation, or removal of the Nter altogether, causes DNMT3B to relocate to other genomic regions that are H3K9me3-rich, and that see their DNA methylation increase in the mutant conditions. This first message is clear.

      We thank the reviewer for their positive comments on our observations. However, we note that our results suggest that removal of the N-terminal region has a different effect to point mutations in the PWWP domain. The data we present suggest that the N-terminus facilitates recruitment to H3K9me3 regions.

      The second message has to do with ICF. A mutant form of DNMT3B bearing a mutation found in ICF, S270P, is actually unstable and, therefore, does not go to H3K9me3 regions. I feel that here the authors go on a tangent that distracts from message #1. This could be moved to the supp data. At any rate, HCT116 are not a good model for ICF. In addition, a previous paper has looked at the S270P mutant, and it did not seem unstable in their hands (Park J Mol Med 2008, PMID: 18762900). So I really feel the authors do not do themselves a favor with this ICF angle.

      While we agree with the reviewer that HCT116 cells as a cancer cell line are not a good model for ICF1 syndrome, our observation that S270P destabilizes DNMT3B is important to consider in the context of this disease. In addition, the S270P mutant was reported to abrogate the interaction between DNMT3B and H3K36me3 (Baubec et al 2015 Nature PMID: 25607372) making it important to compare it to the other mutations we examine. In our revised version of the manuscript, we propose to move these data to the supplementary materials and add a statement to the discussion noting the caveat that HCT116 cells are likely not to model many aspects of ICF1.

      With regard to the differences between our results and that of Park et al, we note that stability of the S270P mutant was not assessed in that study whereas we directly assess stability in vitro and in cells. We propose to add discussion of this previous study to the revised manuscript.

2-I feel that some major confounders exist that endanger the conclusions of the work. The most worrisome one, in my opinion, is the amount of WT or mutant DNMT3B in the cells. It is clear in figure 4C that the WT rescue construct is expressed much more than the W263A mutant (around 3 or 4 times more). Unless I am mistaken, we are never shown how the level of exogenous rescue protein compares to the level of DNMT3B in WT cells. This bothers me a lot. If the level is too low, we may have partial rescue. If it is too high, we might have artifactual effects of all that DNMT3B. I would also like to see the absolute DNA methylation values (determined by WGBS) compared to the value found in WT. From figure S1A, it looks like WT is around 80% methylation, and 3BKO is around 77% or so. I wonder if the rescue lines may actually have more methylation than WT?

      The rescue cell lines do express DNMT3B to a greater level than observed endogenously. In our manuscript we controlled for this effect by generating the knock-in W263A cells and, as reported in the manuscript, we observe similar effects to the rescue cells (manuscript, figure 2d) suggesting that our observations are not driven by the overexpression.

We also expressed ectopic DNMT3B from a weaker promoter (EF1a) in DNMT3B KO cells but did not include these data in the submitted manuscript. We have previously shown that this promoter expresses DNMT3B at lower levels than the CAG promoter used in the submitted manuscript (Masalmeh et al 2021 Nature Communications PMID: 33514701). Bisulfite PCR of representative non-repetitive loci within heterochromatic H3K9me3 domains shows that we observe similar gains of methylation with DNMT3BW263A (revision plan, figure 2).

Revision plan figure 2. Expression of DNMT3BW263A from a weaker promoter leads to increased DNA methylation at selected H3K9me3 loci. Barplot of mean methylation by BS-PCR at H3K9me3 loci alongside the H3K4me3-marked BRCA2 promoter in DNMT3B mutant cells where DNMT3B is expressed from the EF1a promoter. P-values are from two-sided Wilcoxon rank sum tests.

      To reinforce that our conclusions are not solely a result of the level of DNMT3B expression, we propose to include these data in the revised manuscript.

The reviewer is also correct that by WGBS, the rescue cell lines have higher levels of overall DNA methylation than HCT116 cells. We will note this in the revised manuscript and include HCT116 cells in a revised version of Figure S1e.

      3-I guess the unarticulated assumption is that the gain of DNA methylation seen at H3K9me3 region upon expression of a mutant DNMT3B is due to DNMT3B itself. But we do not know this for sure, unless the authors test a double mutant (PWWP inactive, no catalytic activity). I am not necessarily asking that they do it, but minimally they should mention this caveat.

The hypothesis that the gains in DNA methylation at H3K9me3 loci result from the direct catalytic activity of DNMT3B is supported by our observation that a catalytic dead DNMT3B does not remethylate heterochromatin (manuscript, figures 1d and e). However, we acknowledge that we have not formally shown that the additional DNA methylation seen with DNMT3BW263A is a direct result of its catalytic activity. We will conduct an analysis of the effect of catalytically dead DNMT3BW263A on DNA methylation at Satellite II and selected H3K9me3 loci and include this in the revised manuscript.

      4-I am confused as to why the authors look at different genomic regions in different figures. In figure 1 we are looking at a portion of the "left" arm of chr 16. But in figure 2B, we now look at a portion of the "right" arm of the same chromosome, which has a large 8-Mb block of H3K9me3, and is surprisingly lowly methylated in the 3BKO. This seems quite odd, and I wonder if there is a possible artifact, for instance mapping bias, deletion, or amplification in HCT116. Showing the coverage along with the methylation values would eliminate some of these concerns.

      By choosing different regions of the genome for different figures, we intended to reassure the reader that our results were not specific to any one region of the genome. In the revised manuscript, we propose to display a consistent genomic region between these figures.

      With regard to the low levels of DNA methylation in H3K9me3 domains in DNMT3B KO cells, H3K9me3 domains are partially methylated domains which have reduced methylation in HCT116 cells (see page 5 of the manuscript):

      … we found that hidden Markov model defined H3K9me3 domains significantly overlapped with extended domains of overall reduced methylation termed partially methylated domains (PMDs) defined in our HCT116 WGBS (Jaccard=0.575,p=1.07x10-6, Fisher’s test).

      These domains lose further DNA methylation in DNMT3B KO cells leading to the low methylation level noted by the reviewer. The methylation percentages calculated from WGBS are based on the ratio of methylated to total reads. Thus, a lack of coverage generates errors from division by zero rather than the low values observed in this domain in DNMT3B KO cells.
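The read-ratio point can be sketched in a few lines (illustrative only; the function name is invented):

```python
import math

def methylation_pct(methylated_reads, total_reads):
    """WGBS methylation level at a position: methylated / total reads, as a
    percentage. Zero coverage yields NaN (undefined) rather than a low
    value, so uncovered CpGs cannot masquerade as lowly methylated ones.
    Toy sketch, not the manuscript's pipeline."""
    if total_reads == 0:
        return math.nan
    return 100.0 * methylated_reads / total_reads

level = methylation_pct(3, 10)   # 30.0
empty = methylation_pct(0, 0)    # nan; drops out of domain averages
```

Because uncovered positions come out as NaN, they are excluded from domain-level means instead of dragging them toward zero.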

We include a modified version of figure 2b from the manuscript below. This includes coverage for the 3 cell lines (revision plan, figure 3). Although WGBS coverage is slightly reduced in H3K9me3 domains, reads are still present and overall coverage is equal between the different cell lines.

While we could potentially include the coverage tracks in revised versions of figures, we note that doing so for multiple cell lines would make these figures excessively cluttered and it would likely be difficult to observe the differences in DNA methylation in these figure panels due to shrinkage of the other tracks.

      Minor comments:

      1-The WGBS coverage is not very high, around 2.5X on average, occasionally 2X. I don't believe this affects the findings, as the authors look at large H3K9me3 regions. But the info in table S2 was hard to find and it is important. I would make it more accessible.

      In the revised manuscript we will specify the mean coverage in the text to ensure this is clearer.

      2-It would be nice to have a drawing showing exactly what part of the Nter was removed.

      We will add this in the figure in the revised manuscript.

      3-some figures could be clearer. I was not always sure when we were looking at a CRISPR mutant clone (W263A) versus a piggyBac rescue.

      In the revised manuscript we will clarify in the figure labels to ensure it is clear which data were generated using CRISPR clones.

      4-unless I am mistaken, all the ChIP-seq data (H3K9me3, H3K36me3 etc) come from WT cells. It is not 100% certain that they remain the same in the 3BKO, is it? This should be discussed.

      We performed ChIP-seq on both HCT116 and 3BKO cell lines and used ChIP-seq data from the 3BKO cell line for the rescue experiments where DNMT3Bs were expressed in 3BKO cells. We will ensure this is clearer in the revised version.

      Reviewer #1 (Significance (Required)):

      Strengths:

      The experiments are for the most part well done and well interpreted (save for the limitations mentioned above). The techniques are appropriate and well mastered by the team. The paper is well written, the figures are nice. The authors know the field well, which translates into a good intro and a good discussion. The bioinformatics are convincing.

      Limitations:

      All the work is done in a single cancer cell line. One might assume the conclusions will hold in other systems, but there is no certainty at this point.

      We acknowledge this limitation. To demonstrate that our results are applicable beyond HCT116 cells, we will include analysis of experiments on an independent cell line in the revised manuscript.

HCT116 are not the best model system to study ICF, which mostly affects lymphocytes.

      At present, I feel that the biological relevance of the findings is fairly unclear. The authors report what happens when DNMT3B has no functional PWWP domain. I am convinced by their conclusions, but what do they tell us, biologically? Are there, for instance, mutant forms of DNMT3B expressed in disease that have a mutant PWWP? Are there isoforms expressed during development or in certain cell types that do not have a PWWP? In these cell types, does the distribution of DNA methylation agree with what the authors predict?

As stated in response to point 1, although we acknowledge the limitations of HCT116 cells as a model of ICF, we believe our finding that the S270P mutation results in unstable DNMT3B is still important to consider for ICF syndrome.

We are not aware of reports of mutations affecting the residues of DNMT3B's PWWP domain we have studied. Our preliminary analysis suggests that although variants in DNMT3B's PWWP domain are frequent, variants affecting aromatic cage residues such as W263 and D266 are absent from the gnomAD catalogue (Karczewski et al 2020 Nature, PMID: 32461654). This suggests that they are incompatible with healthy human development.

A number of different DNMT3B splice isoforms have been reported. These include ΔDNMT3B4 which lacks the PWWP domain and a portion of the N-terminal region (Wang et al 2006 International Journal of Oncology, PMID: 16773201). ΔDNMT3B4 is proposed to be expressed in non-small cell lung cancer (Wang et al 2006 Cancer Research, PMID: 16951144).

      We will include analysis of gnomAD and discussion of these points in the revised manuscript.

      In its present state, I feel the appeal of the findings is towards a semi-specialized audience, that is interested in aberrant DNA methylation in cancer and other diseases. This is not a small audience, by the way.

      We thank the reviewer for their comments and the suggestion that our findings are of interest to a cross-section of researchers.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      Note, we have added numbers to the comments made by reviewer 2 to aid cross-referencing.

      In this manuscript, Taglini et al., describe an increased activity of DNMT3B at H3K9me3-marked regions in HCT cells. They first identify that DNA methylation at K9me3-marked regions is strongly reduced in absence of DNMT3B. Next, the authors re-express DNMT3B and DNMT3B mutant variants in the DNMT3B-KO HCT cells and assess DNA methylation by WGBS where they identify a strong preference for re-methylation of K9me3 sites. Based on genome-wide binding maps for DNMT3B, including the mutant variants, they address how the localization of DNMT3B relates to the observed changes in methylation.

      Major points:

• The authors show increased reduction of mCG at H3K9me3 (and K27me3) sites in absence of DNMT3B. This is based on correlating delta %mCG with histone modifications in 2kb bins. I find this approach to not fully support the major claim. First, the correlation coefficients are very small (-0.124 for K9me3 and -0.175 for K27me3), and just marginally better compared to, for example, K36me3 that does not seem to have any influence on mCG according to Sup Fig S1b. While I agree that mCG seems more reduced at K9me3 in absence of DNMT3B (e.g. in Fig 1a), is there a better way to visualize the global effect? The delta mCG Boxplots based on bins are not ideal (this applies to many figures using the same approach in the current manuscript).

Our choice to examine the global effects using correlations in windows across the genome was motivated by similar previous analyses in other studies (for example: Baubec et al. 2015 Nature PMID 25607372, Weinberg et al. 2021 Nature PMID: 33986537, Neri et al. 2017 Nature PMID: 28225755). These global analyses result in modest correlation coefficients because the vast majority of genomic windows are negative for a given mark. For this reason, we included specific analyses of H3K36me3, H3K9me3 and H3K27me3 domains in the manuscript (e.g. manuscript figures 1b, c and d) which reinforce the conclusions drawn from our global analyses.
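The point that sparse mark occupancy caps genome-wide correlation coefficients can be illustrated with a toy simulation (all numbers are invented for illustration; this is not the manuscript's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 50_000                       # 2 kb bins tiling a toy genome
marked = rng.random(n_bins) < 0.05    # assume ~5% of bins carry the mark
mark = marked.astype(float)

# Assume a strong, consistent ~10 percentage-point mCG loss in marked bins,
# plus bin-level noise everywhere (noise SD of 15 is an invented value).
delta_mcg = -10.0 * mark + rng.normal(0.0, 15.0, n_bins)

r = np.corrcoef(mark, delta_mcg)[0, 1]
# r stays modest in magnitude (about -0.14 analytically) even though the
# effect in marked bins is large, because 95% of bins lack the mark.
```

So a genome-wide r of -0.1 to -0.2 is consistent with a substantial, mark-specific methylation loss, which is why the domain-level analyses are the more informative view.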

      However, we acknowledge that while our data support a specific activity at H3K9me3 marked heterochromatin, these are not the only changes in DNMT3B KO cells as DNMTs are promiscuous enzymes that are localized to multiple genomic regions. We will add discussion of this point to the revised manuscript.

      2. Second, the calculation based on delta mCpG does not allow to see how much methylation was initially there. For example, S1b shows a median decrease of ~ 10% in K9me3 and ~7-8% in H3K4me3. What does this mean given that the starting methylation for both marks is completely different?

      Following this point, the authors mention that mCG is already low at K9me3 domains in HCT cells (compared to other sites in the genome). I am curious if this may influence the accelerated loss of methylation in absence of DNMT3B? Any comments on this?

The observation that there is a greater loss at H3K9me3 domains than H3K27me3-only domains which also have low DNA methylation levels in HCT116 argues that the losses are not solely driven by the lower initial level of methylation in H3K9me3 domains. Our analyses later in the manuscript also support a specific activity at H3K9me3. In addition, we propose to reinforce this point through further data exploring how DNMT3B interacts with HP1a (see general comments, revision plan figure 1).

      However, we acknowledge the possibility that part of the loss seen at H3K9me3 domains in DNMT3B KO cells could be in part a result of their low initial level of methylation. In the revised manuscript we propose to include discussion of this possibility.

      3. One issue is the lack of correlation in DNMT3B binding to H3K9me3 sites in WT cells (Fig 3). How does this explain the requirement for DNMT3B for maintenance of methylation at H3K9me3? While some of the tested mutants show some weak increase at K9me3 sites, these are not comparable to the strong binding preferences observed at K36me3 for the wt or delta N- term version.

      Using ChIP-seq we cannot say that DNMT3BWT does not bind at H3K9me3, only that it binds here to a lower level than at K36me3-marked loci. The normalized DNMT3BWT signal at H3K9me3 domains is higher than the background signal from DNMT3B KO cells (manuscript figure 3d) supporting the hypothesis that DNMT3BWT localizes to H3K9me3. This hypothesis is also supported by the observation that the correlation between DNMT3BDN and H3K9me3 is reduced compared to that of DNMT3BWT (manuscript figure 6c compared to figure 3c).

      There are several reasons why the apparent enrichment of DNMT3B at H3K9me3 may appear weaker than at H3K36me3 by ChIP-seq. Previous work has also suggested that formaldehyde crosslinking fails to capture transient interactions in cells (Schmiedeberg et al. 2009 PLoS One PMID: 19247482). H3K9me3-marked heterochromatin is also resistant to sonication (Becker et al. 2017 Molecular Cell PMID: 29272703) and this could further affect our ability to detect DNMT3B in these regions using ChIP-seq. Our new data also suggest that DNMT3B binds to H3K9me3 indirectly through HP1a (see general comments, revision plan figure 1) and this may also lead to weaker ChIP-seq enrichment at H3K9me3 compared to the direct interaction with H3K36me3 through DNMT3B’s PWWP domain.

      We propose to add discussion of these issues to the revised manuscript.

4. Following the above comment, what about other methyltransferases in HCT cells? Could DNMT1 or DNMT3A function be altered in absence of DNMT3B, and the observed methylation changes could be indirectly influenced by DNMT3B? The authors could create a DNMT-TKO HCT cell line and re-introduce DNMT3B in this background and measure methylation to exclude that DNMT1 or DNMT3A could have an influence. In this case, only H3K9me3 should gain DNA methylation.

      As discussed in response to reviewer 1 (point 3), we propose to examine the changes in DNA methylation upon expression of catalytically dead DNMT3BW263A to further strengthen the evidence that DNMT3BW263A is directly responsible for the increased DNA methylation at H3K9me3-marked loci.

      5. DNMT3B lacking N-terminal shows reduced K9me3 methylation & some localization by imaging. While the presented experiments show some support for this conclusion, I suggest to re-introduce a W263A mutant lacking the N-terminal part and measure changes in DNA methylation at H3K9. This should help to test the requirement for the N-terminal regions and further indicate which protein part (PWWP or N-term) is more important in regulating the balance between K9me3 and K36me3.

      We have performed this experiment and the data are shown in manuscript figure S6c and d. The results of these experiments show that DNMT3BΔN+W263A cells showed less methylation at H3K9me3 loci than DNMT3BW263A cells, supporting a role for the N-terminus in recruiting DNMT3B to H3K9me3-marked heterochromatin. In the revised version, we will ensure that these data are more clearly indicated.

      In the first paragraph of the discussion, the authors state: "Our results demonstrate that DNMT3B is recruited to and methylates heterochromatin in a PWWP-independent manner that is facilitated by its N-terminal region." The same statement is found in the abstract. This contradicts the ChIP-seq results, which do not indicate recruitment of DNMT3B to heterochromatin, and the N-terminal deletions do not fully support a role in this targeting since there is no localization to K9me3 to begin with. While changes in methylation are observed, it remains to be determined whether this is indeed through direct DNMT3B delocalization or indirectly through influencing the remaining DNMTs.

      As discussed above, there are several potential reasons why DNMT3B ChIP-seq signal at H3K9me3 is weak (reviewer 2, point 3). The additional experiments we propose to include in the revised manuscript could reinforce this statement by clarifying whether DNMT3B is directly responsible for methylating H3K9me3-marked regions (reviewer 1, point 3) and by delineating the role of the putative HP1a motif in DNMT3B’s N-terminal region (general comments, revision plan figure 1).

      Reviewer #2 (Significance (Required)):

      Advance: detailed analysis of DNMT3B mutants in relation to K9me3; builds on previous studies. Audience: specialised audience.

      We thank the reviewer for their insights.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      In this work, Taglini et al. examine how the de novo DNA methyltransferase DNMT3B localizes to constitutive heterochromatin marked by the repressive histone modification H3K9me3. The authors utilize a previously generated DNMT3B KO colorectal carcinoma cell line, HCT116, to study recruitment and activity of DNMT3B at constitutive H3K9me3 heterochromatin. The authors noted a preferential decrease of DNA methylation (DNAme) at regions of the genome marked with H3K9me3 in DNMT3B KOs. The authors then rescued the deficiency through overexpression of WT and catalytic dead DNMT3A/B and confirmed that DNA methylation increased at H3K9me3+ regions with WT DNMT3B, but not with the catalytically inactive mutant or DNMT3A. To examine which protein domains may mediate DNMT3B's recruitment to H3K9me3 regions, the authors designed a series of mutants, primarily focusing on the PWWP domain, which normally recognizes H3K36me3. In the PWWP mutants, DNMT3B binding to the genome is altered, showing depletion at some H3K36me3-marked regions and gain at H3K9me3 heterochromatin, which coincides with DNAme increase at satellites. In contrast, the clinically relevant ICF1 mutation S270P shows DNMT3B protein destabilization and no such loss of DNAme at heterochromatin. Finally, the authors truncate the N-terminal portion of DNMT3B, and find that this region of the protein is necessary for heterochromatin localization and subsequent DNAme of H3K9me3+ regions.

      The experiments are well done with extensive controls, and the results are interesting and convincing. The structure of the manuscript could be improved for clarity and flow - for example, the PWWP mutations and truncations should be mentioned and compared together. I also found the section on ICF1 mutant to be out-of-place.

      As described above (reviewer 1, point 1), we propose to move these data to the supplementary materials in the revised manuscript.


      More emphasis should be placed on the N-terminal mutant, as this region seems to be critical to heterochromatin recruitment, and this may address whether the interaction with H3K9me3 is direct or indirect.

      As described above (general comments), the revised manuscript will include experiments clarifying the nature of DNMT3B’s interaction with H3K9me3. Our preliminary data support that it is an indirect interaction mediated through HP1a (revision plan figure 1).

      Finally, while the epigenetic crosstalk is well-examined in this work, I would strongly urge the authors to add RNA-seq data to determine the transcriptional consequence of such chromatin disruptions (e.g. are repetitive sequences up-regulated in DNMT3B KOs?).

      As suggested by the reviewer, we propose to generate and analyse RNA-seq data in the revised manuscript to understand the impact of DNMT3B on transcriptional programs.

      Comments

      1. A potential caveat to the study is the use of a single cell line - colorectal cancer cell HCT116 - to draw major conclusions on the function of DNMT3B. It is worth noting the Baubec et al. study examining DNMT3B recruitment to H3K36me3 was mainly performed in murine embryonic stem cells (mESCs). It would greatly strengthen the study if the authors could perform similar type of data analysis on an independent DNMT3B KO cell line. For example, does DNMT3B localize to H3K9me3 regions in WT mESCs?

      As described above in response to reviewer 1, we will include analysis in an additional cell line in the revised manuscript to demonstrate that our results are generalizable beyond HCT116 cells.

      2. Did the PWWP mutant W263A show the expected loss of DNAme at H3K36me3-marked regions? In other words, was there evidence of DNAme redistribution in loss at H3K36me3+ regions and inappropriate gain at H3K9me3+ regions? Please perform intersection analysis of DMRs with other epigenomic marks (e.g. H3K27me3, H3K36me3, CpG shores) in the PWWP mutants.

      Our analysis of DNMT3B KO cells (manuscript figure S1d) shows that losses of DNA methylation in these cells are not correlated with H3K36me3 in gene bodies, suggesting that DNMT3A and DNMT1 are sufficient to compensate in maintaining methylation in DNMT3B KO cells. To clarify this point for the DNMT3BW263A knock-in clones, in the revised manuscript we will directly examine whether these cells lose methylation at H3K36me3-marked gene bodies in a similar analysis and add discussion of these results.

      The study would also be strengthened greatly by the addition of biochemical studies to confirm direct loss of binding, and possibly gain of H3K9me3 binding, in the DNMT3B PWWP mutants.

      As detailed above (general comments, revision plan figure 1), our data suggest that DNMT3B interacts indirectly with H3K9me3 through an HP1 motif in its N-terminal region. We will undertake further biochemical studies of this interaction, which will be included in the revised manuscript. Specifically, we will use EMSAs with synthetic nucleosomes to clarify the degree to which the HP1a interaction is responsible for binding of DNMT3B to H3K9me3-modified nucleosomes.

      We also propose to undertake in vitro biochemical characterization of the effect of DNMT3B PWWP mutations on interaction with H3K36me3 using synthetic nucleosomes. However, we note that in the manuscript we have shown similar effects using two independent point mutations that are predicted to affect H3K36me3 binding (W263A and D266A) and deletion of the entire PWWP domain.

      3. Examining the tracks in Figure 3A,B, the PWWP mutants showed almost indiscriminate increase across the genome, and not specifically to H3K9me3-marked regions. Would ask the authors to speculate as to why the ChIP-seq of DNMT3B mutants do not recapitulate the heterochromatin co-localization shown by immunofluorescence.

      As discussed in response to reviewer 2 (point 3) we believe that the weak DNMT3B ChIP-seq signal at H3K9me3 loci is likely due to the nature of the interaction that DNMT3B has with chromatin in these regions. We will add discussion of these points to the revised manuscript.

      4. It's a shame that the ICF1 mutation S270P was not characterized to the same extent as the PWWP mutants. Would consider adding WGBS for this clinically relevant mutation.

      We have shown that this mutant does not produce stable protein in vitro or in our cells and we observe little difference in DNA methylation at selected loci. As WGBS is expensive, we believe that carrying out this experiment is not an efficient use of limited research resources.

      5. Figure 7 - please draw in the ICF1 and the N-terminal mutations in the model figure. Also provide legends.

      We will modify the manuscript to include these details in the revised manuscript.

      Reviewer #3 (Significance (Required)):

      This is an interesting study on a timely subject. It will be of interest to multiple fields, from epigenetics to development and cancer. My expertise is in cancer, epigenetics and development.

      We thank the reviewer for highlighting the broad interest of our study.

    1. This was used as a way to devalue time spent on social media sites, and to dismiss harms that occurred on them.

      Although it can be argued that the online world is different from real life, the people using these platforms are real, so the effects are just as real. There will be times when text is posted as a joke (usually it is not a joke if it has to be claimed as one), but the fact that it was typed and posted suggests the user had strong feelings about the issue. In short, it does matter, both to the poster and to those impacted by those words.

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):


      Summary:

      In this manuscript, Roberts et al. hypothesised that the 5:2 diet (a popular form of intermittent fasting, IF, thought to increase adult hippocampal neurogenesis, AHN) would enhance AHN in a ghrelin-dependent manner. To test this, the Authors used immunohistochemistry to quantify new adult-born neurons and new neural stem cells in the hippocampal dentate gyrus of adolescent and adult wild-type mice and mice lacking the ghrelin receptor, following six weeks on a 5:2 diet. They report an age-related decline in neurogenic processes and identify a novel role for the ghrelin receptor in regulating the formation of new adult-born neural stem cells in an age-dependent manner. However, the 5:2 diet did not affect new neuron or neural stem cell formation in the dentate gyrus, nor did it alter performance on a spatial learning and memory task. They conclude that the 5:2 diet used in their study does not increase AHN or improve associated spatial memory function.

      Major comments:

      One criticism might be that many aspects are addressed at the same time. For instance, the role of ghrelin with respect to testing the DR effects on AHN is not fully clear. Although the link between ghrelin, CR and AHN is explained by citing several previous studies, it is difficult to identify the main focus of the study. Maybe this is because the Authors analyse and comment throughout the paper on the different experimental approaches used by different Authors to study the effect of DR on AHN. This is not bad in principle, since I think the Authors have a deep knowledge of this complex matter, but it all results in a difficulty in following the flow of the rationale in the manuscript.

      We appreciate the reviewer’s critique regarding the rationale of the studies presented in the manuscript.

      The role of ghrelin in the regulation of AHN by dietary interventions such as CR and IF is a major interest of our lab and is the main focus of the study. We, and others, have shown that ghrelin mediates the beneficial effects of CR on AHN. It is often assumed that ghrelin will elicit similar effects in other DR paradigms. We selected the 5:2 diet since it is widely practiced by humans, but it has not been well tested experimentally.

      We sought to empirically test how the neurogenic response to 5:2 differed between mice with functional and impaired ghrelin signaling.

      Given that plasma ghrelin levels and AHN are reduced during ageing, we also wanted to determine if 5:2 diet could slow or even prevent neurogenic decline in ageing mice.

      We will re-write the manuscript to ensure that our primary aim is clearly presented. We will also re-analyse the data, with genotype and 5:2 diet as key variables. To help maintain focus, the variable of age will be analysed separately. This amendment will, we hope, help the reader follow the narrative of our manuscript.

      Another major point: the Discussion is too long. The Authors analyse all the possible reasons why different studies obtained different results concerning the effectiveness of DR in stimulating adult neurogenesis. Thus, the Discussion seems more as a review article dealing with different methods/experimental approaches to evaluate DR effects. We know that sometimes different results are due to different experimental approaches, yet, when an effect is strong and clear, it occurs in different situations. Thus, I think that the Authors must be less shy in expressing their conclusions, also reducing the methodological considerations. It is also well known that sometimes different results can be due to a study not well performed, or to biases from the Authors.

      In our discussion, we felt that it was particularly important to be as rigorous as possible in contextualizing our findings with other published data, whilst highlighting methodological differences. Our aim was to be as precise as possible when comparing findings across studies; however, this resulted in the narrative drifting from the key objectives of our study, namely, to determine the effect of the 5:2 diet on neurogenesis and whether or not ghrelin signalling regulates the process. We will amend the text of the discussion to ensure that the key points of our study are only compared and contrasted with relevant studies in the field. We thank the reviewer for their candid comment.


      Minor comments:

      • This sentence: "There is an age-related decline in adult hippocampal neurogenesis" cannot be included in the HIGHLIGHTS, since it is a well-known aspect of adult hippocampal neurogenesis.

      The reviewer is correct to state this. Our study replicates this interesting age-related phenomenon. However, we will remove it from the ‘Highlights’ section.

      • Images in Figure 5 are not good quality.

      We apologise for this oversight. We will review each figure and panel to ensure that high-resolution images, that are appropriately annotated, are used throughout the manuscript.

      • In general, there are not a lot of images referring to microscopic/confocal photographs across the entire manuscript.

      We structured the manuscript with a limited number of figures and associated microscope captured panels, with the aim of presenting representative images to illustrate the nature and quality of the IHC protocols. However, we will amend the figures for the revised manuscript to provide representative microscopy images, with each group included and clearly annotated.

      • The last sentence of the Discussion, "These findings suggest that distinct DR regimens differentially regulate neurogenesis in the adult hippocampus and that further studies are required to identify optimal protocols to support cognition during ageing", is meaningless in the context of the study and contradicts the main results. Honestly, my impression is that the Authors do not want to contradict the conclusions of the previous studies; an alternative is that other Reviewers asked for this previously.

      We do not believe that this statement is contradictory to our findings, as distinct DR paradigms do appear to regulate AHN in different ways. However, we agree that we can be more explicit with regards to our own study findings and will prioritize the conclusions of our study over those of the entire field during revision.

      Reviewer #1 (Significance (Required)):

      value the significance of publishing studies that will advance the field.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):


      In this manuscript, Roberts et al. investigate the effect of the 5:2 diet on adult hippocampal neurogenesis (AHN) in mice via the ghrelin receptor. Many studies have reported benefits of dietary restriction (DR) on the brain that include increasing neurogenesis and enhancing cognitive function. However, neither the mechanisms underlying the effects of the 5:2 diet, nor potential benefits on the brain, are well understood. The authors hypothesize that the 5:2 diet enhances AHN and cognitive function via ghrelin-receptor signaling. To test this, they placed adolescent and adult ghrelin receptor knockout or wild-type mice on either the 5:2 or ad libitum (AL) diet for 6 weeks, followed by spatial memory testing using an object-in-place (OIP) task. The authors also assessed changes in AHN via IHC using multiple markers for cell proliferation and neural stem cells. The authors observed a decrease in AHN due to age (from adolescent to adult), but not due to diet or ghrelin-receptor signaling. While loss of the ghrelin receptor impaired spatial memory, the 5:2 diet did not affect cognitive function. The authors conclude that the 5:2 diet does not enhance AHN or spatial memory.

      We thank the reviewer for this summary. We note that there was a significant reduction in new neurones (BrdU+/NeuN+ cells) in GHS-R null animals, regardless of age or diet (3-way ANOVA of age, genotype and diet, sexes pooled: genotype P = 0.0290). These data suggest that the loss of ghrelin receptor signalling does impair AHN. However, we will re-analyse our data in light of reviewer 1’s comments to remove ‘age’ as a variable. The new analyses and associated discussion will be presented in our revised manuscript.

      The authors use a 5:2 diet but fail to provide a basic characterization of this dietary intervention. For example, was the food intake assessed? In addition to the time restriction of the feeding, does this intervention also represent an overall caloric restriction or not? According to the provided results, the 5:2 diet does not appear to regulate adult hippocampal neurogenesis contrary to the authors' original hypothesis. Did the authors measure the effects of the 5:2 diet on any other organ system? Do they have any evidence that the intervention itself resulted in any well documented benefits in other cell types? Such data would provide a critical positive control for their intervention.

      This is an important point raised by the reviewer. In the current study, we carefully quantified weight change across its duration. However, we do not know whether the 5:2 diet reduced overall food intake or whether it impacted the timing of feeding events. To overcome this limitation, we will now test what impact the 5:2 dietary regime has on food intake and the timing of feeding. This study will allow us to correlate any changes with the 5:2 diet. In addition, we have collected tibiae to quantify skeletal growth and have collected both liver and plasma (end-point) samples, which will be used to assess changes in the GH-IGF-1 axis. These additional studies will allow us to characterise the effects of the 5:2 paradigm on key indicators of physiological growth. These new data will be incorporated into the revised manuscript.

      Based on the effects of ghrelin in other dietary interventions, the authors speculate that the effect of the 5:2 diet is similarly mediated through ghrelin. However, the authors do not provide any basic characterization of ghrelin signaling to warrant this strong focus on the GHS-R mice. While the GHS-R mice display changes in NSC homeostasis and neurogenesis, none of these effects appear to be modified by the 5:2 diet. Thus, the inclusion of the GHS-R mice does not seem warranted and detracts from the main 5:2 diet focus of the manuscript.

      The role of ghrelin signalling via its receptor, GHS-R, is a central tenet of our hypothesis. The loxTB-GHS-R null mouse is a well-validated model of impaired ghrelin signalling, in which insertion of a transcriptional blocking cassette prevents expression of the ghrelin receptor (Zigman et al. 2005, JCI). We have previously shown that this mouse model is insensitive to calorie restriction (CR)-mediated stimulation of AHN, in contrast to WT mice (Hornsby et al. 2016), justifying its suitability as a model for assessing the role of ghrelin signalling in response to DR interventions, such as the 5:2 paradigm. Whilst our findings do not support a role for ghrelin signalling in the context of the 5:2 diet studied, we did follow the scientific method to empirically test the stated hypothesis. While critiques of experimental design are welcome, the removal of these data may perpetuate publication bias in favour of positive outcomes and is something we wish to avoid.

      Neurogenesis is highly sensitive to stress. The 5:2 diet may be associated with stress which could counteract any benefits on neurogenesis in this experimental paradigm. Did the authors assess any measures of stress in their cohorts? Were the mice group housed or single housed?

      We thank the reviewer for raising this point. We have open-field recordings that will now be analysed to assess general locomotor activity, anxiety and exploration behaviour. Additionally, we will assess levels of the stress hormone, ACTH, in end point plasma samples. These datasets will be incorporated into the revised manuscript.

      The authors state that the 5:2 diet led to a greater reduction in body weight (31%) in adolescent males compared to other groups. However, it appears that the cohorts were not evenly balanced and the adolescent 5:2 male mice started out with a significantly higher starting weight (Supplementary Figure 1). The difference in starting weight at such a young age is significantly confounding the conclusion that the 5:2 diet is more effective at limiting weight gain specifically in this group.

      We thank the reviewer for highlighting this limitation. In the revision we will re-focus our discussion around the Δ Body weight repeated measures data, which compares the daily body weight of each group to its baseline value - thereby normalising any intergroup differences in starting weight. Furthermore, we will restructure figures 1 and S1 so that figure 1 presents only the repeated measure Δ Body weight data, while data for body weight both at baseline and on the final day of the study will be presented in figure S1.

      The authors count NSCs as Sox2+S100b- cells. However, the representative S100b staining does not look very convincing. Instead, it would be more appropriate to count Sox2+GFAP+ cells with a single vertical GFAP+ projection. Alternatively, the authors could also count Nestin-positive cells. Additionally, the authors label BrdU+ Sox2+ S100B- cells as "new NSCs". However, it appears that the BrdU labeling was performed approximately 6 weeks before the tissue was collected (Figure 1A). Thus, these BrdU-positive NSCs most likely represent label retaining/quiescent NSCs that divided during the labeling 6 weeks prior but have not proliferated since. As such, the term "new NSC" is misleading and would suggest an NSC that was actively dividing at the time of tissue collection.

      We apologise for presenting low-resolution images – these will be replaced by high-resolution images in the revised manuscript. In this study we quantified the actively dividing BrdU+/Sox2+/S100B- cells that represent type II NSCs (rather than GFAP+ or Nestin+ type I NSCs) and that incorporated BrdU within the 6-week intervention. We appreciate the reviewer’s comments concerning the "new NSCs" terminology. We agree that we should be more specific in clarifying that the NSCs identified are those labelled during the first week of the 6-week intervention. We will amend this throughout the revised manuscript by re-naming these cells as 6-week-old NSCs.

      Overall, this manuscript lacks a clear focus and narrative. Due to the lack of an effect of the 5:2 diet on hippocampal neurogenesis, the authors mostly highlight already well-known effects of ageing and ghrelin/GHS-R on neurogenesis. Moreover, the authors repeatedly use age-related decline and morbidities as a rationale for their study. However, they assess the effects of the 5:2 diet on neurogenesis only in adolescent and young mature mice, not aged mice.

      To provide greater clarity, and in accordance with reviewer 1’s comments, we will amend the text throughout to provide a focus on the data obtained. The objective of these changes will be to re-enforce the original study narrative. In relation to the use of the terms ‘age-related decline’ and ‘age-related changes’, we think that these are appropriate to our study. Physiological ageing does not begin at a specific point in chronological time but is a process that is continuously ongoing. Indeed, our data are in agreement with previous studies reporting an age-related reduction in AHN at 6 months of age (e.g. Kuhn et al. 1996).

      Minor Points

      The authors combine the data from both male and female mice for most bar graphs. While this does not appear to matter for neurogenesis or behavioral readouts, there are very significant sexually dimorphic differences with respect to body size and weight. As such, male and female mice in Figure 1D,F should not be plotted in the same bar graph.

      We agree that sexual dimorphism exists with respect to body size and weight. We used distinct male and female symbols for each individual animal on these bar graphs, but do agree with the reviewer that sexual dimorphic differences should be emphasized. To achieve this, we will include additional supplementary graphs presenting the sex differences in starting weight, final weight, and weight change versus starting weight.

      The figure legends are very brief and should be expanded to include basic information on the experimental design, statistical analyses, etc.

      We thank the reviewer for this comment. We will provide specific experimental details in the revised figure legends.

      Many figures include a representative image. However, it is often unclear if that is a representative image of a WT or mutant mouse, or a 5:2 or control group (Figure 2A, 3A, 4A, 5A).

      We structured the manuscript with a limited number of figures and associated microscope captured panels, with the aim of presenting representative images to illustrate the nature and quality of the IHC protocols. However, we will amend the figures for the revised manuscript to provide representative microscopy images, with each group included and clearly annotated.

      It would be helpful to provide representative images of DCX-positive cells in Figure 3A-F. Additionally, the authors should include a more extensive description of how this quantification was performed in the method section.

      We will revise the manuscript to provide representative high-resolution Dcx+ images displaying cells of each category. The method will also be revised to include a detailed description of how the quantification and classification was performed.

      The authors state "the hippocampal rostro-caudal axis (also known as the dorsoventral axis)". However, the rostro-caudal and dorso-ventral axes are usually considered perpendicular to one another.

      We agree that the dorso-ventral and rostro-caudal axes are anatomically distinct. The terms are often used interchangeably in the literature, which can lead to misinterpretations (e.g. the caudal portion of the dorsal hippocampus is often mislabelled as ventral hippocampus). To avoid ambiguity, mislabelling or misidentification, we will include a supplementary figure detailing our anatomical definitions of the rostral and caudal poles of the hippocampus, alongside representative images and bregma coordinates.

      Reviewer #2 (Significance (Required)):


      Understanding the mechanisms of a popular form of intermittent fasting (the 5:2 diet) that is not well understood is an interesting topic. Moreover, examining the effect of this form of intermittent fasting on the brain is timely. However, while the authors use multiple markers to validate the effect of the 5:2 diet on adult hippocampal neurogenesis, concerns regarding experimental design, validation, and data analysis weaken the conclusions being drawn.

      We thank reviewer 2 for this significance statement. We will revise the manuscript, as described above, to clarify the experimental design, improve presentation of the data, and re-focus the narrative on the primary aims of the study.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):


      Summary


      In this study, Roberts and colleagues used a specific paradigm of intermittent fasting, the 5:2 diet, meaning 5 days of ad libitum food and 2 non-consecutive days of fasting. They exposed adolescent and adult wild-type mice and ghrelin receptor knockout mice (GHS-R-/-) to this paradigm for 6 weeks, followed by 1 week of ad libitum food. They further used the "object in place" task (OIP) to assess spatial memory performance. At the end of the dietary regime, the authors quantified newborn neurons and neural stem cells (NSCs) by immunohistochemistry. Roberts et al. show that the 5:2 diet does not change the proliferation of cells in the hippocampus, but report an increased number of immature neurons (based on DCX) in all mice exposed to the 5:2 diet. This change, however, did not result in an increased number of mature adult-born neurons, as assessed by a BrdU birthdating paradigm. The authors further show diet-independent effects of the ghrelin receptor knockout, leading to fewer adult-born neurons but more NSCs in adolescent mice, and lower performance in the OIP task.

      Major comments:

      The main conclusion of this study is that a specific type of intermittent fasting (the 5:2 diet) has no effect on NSC proliferation and neurogenesis. As there are several studies showing beneficial effects of intermittent fasting on adult neurogenesis, while other studies found no effects, it is important to better understand the effects of such a dietary paradigm.

      The experimental approaches used in this manuscript are mostly well explained, but overall it is rather difficult to follow the results, as the authors always show the 4 experimental groups together (adolescent vs adult and wt vs GHS-R-/-). They highlight the main effects comparing all the groups, which most of the time is the factor "age". Age is a well-known and thus unsurprising negative influence on adult neurogenesis. Instead of focusing on the main tested factor, namely the difference in diet, the authors show example images of the two age classes (adolescent vs adult), which does not support the major point they are making. Most of the time, they do not provide a post hoc analysis, so it is difficult to judge whether results with a significant main effect would be significant in a direct 1-to-1 comparison of the corresponding groups. The authors point out themselves that previous rodent studies did not use such a 5:2 feeding pattern, so having diet, age and genotype as factors at the same time makes assessment of the diet effect more difficult.

      The manuscript would improve if the authors restructure their data to compare first the diet groups (adolescent wt AL vs 5:2 and in a separate comparison adult wt AL vs 5:2) and only in a later part of the results check if the Ghrelin receptor plays a role or not in this paradigm.

      We thank the reviewer for these comments. In line with comments from the other reviewers, we will re-formulate the presentation of our datasets. We will remove ‘age’ as a key variable, as age-related changes are to be expected. For the revision, we will separate the adolescent and adult mouse datasets, plotting individual graphs for both. This should provide a clearer focus on 5:2 responses in both assessed genotypes.

This re-configuration will impact the data being analysed and, therefore, the statistical analysis presented. In our original manuscript, post hoc analyses were performed; however, only significant post hoc comparisons were highlighted (e.g. Figure 5). Non-significant post hoc comparisons were not presented. In the methods section of the revised manuscript, we will clarify that we will report post hoc differences when they are observed.

      During our study design, we decided to assess diet and genotype in parallel - as part of the same analysis. This seemed to us to be the most appropriate statistical method, so that we assessed dietary responses in both WT and GHS-R null mice.

As this 5:2 is a very specific paradigm, it is furthermore difficult to compare these results to other studies, and the conclusions are only valid for this specific pattern and timing of the intervention (6 weeks). It remains unclear why the authors have not first tried to establish a study with wildtype mice and a similar duration as in previous studies observing beneficial effects of intermittent fasting on neurogenesis. That way, it would have been possible to make a statement as to whether the 5:2 per se does not increase neurogenesis or whether the 6 weeks of exposure were just too short.

The reviewer raises this relevant point, which we considered during the study design period. Given that we had previously reported significant modulation of AHN with a relatively short period of 30% CR (14 days followed by 14 days of AL refeeding; Hornsby et al., 2016), we predicted that a 6-week course on the 5:2 paradigm (totalling 12 days of complete food restriction over the 6-week period) would provide a similar dietary challenge. The fact that we did not observe similar changes in AHN with this 5:2 paradigm is notable.

The graphical representation of the data could also be improved. Below are a few examples listed:

1.) In Figure 1B and C, the same symbols and colours are used for the adolescent and adult animals, which makes the graphs hard to read. One colour and symbol per group throughout the manuscript would be better.

      We thank the reviewer for this comment. We will amend the presentation of the graphs throughout the manuscript to ensure that they are easier to interpret.

2.) The authors found no differences in the total number of Ki67-positive cells in the DG. However, Ki67 staining does not allow one to determine which cell type is proliferating. It would thus strengthen the findings if this analysis were combined with different markers, such as Sox2, GFAP and DCX.

      Double labelling of Ki67 positive cells would allow for further insight into the identity of distinct proliferating cell populations. However, quantifying Ki67 immunopositive cells within the sub-granular zone of the GCL, as a single marker, is commonly used in studies of AHN. Given that studies of intermittent fasting, calorie restriction and treatment with exogenous acyl-ghrelin report no effect on NPC cell division, we decided not to pursue this line of inquiry.

3.) In Figure 3, the authors say that the diet increases the number of DCX+ cells in adolescent and adult mice, which is not clear when looking at the graph in 3B. Are there any significant differences when directly comparing the corresponding groups, for instance the WT AL vs the WT 5:2? It is further not clear how the authors distinguished the different types of DCX+ cells based on morphology. The quantification in C and D would need to be illustrated by example images. Furthermore, the colour code used in these graphs is not explained and remains unclear.

While the 3-way ANOVA does yield a significant overall effect for diet, we agree that it is indeed difficult to see a difference on the graph, although the mean values of the adolescent 5:2 animals are higher than those of their AL counterparts. Mean +/- SEM will be provided in the supplementary section of the revised manuscript. Furthermore, we will clarify the method used to identify distinct DCX+ morphologies, include representative high-resolution images of each DCX+ cell category, and amend the colour coding to avoid misinterpretation.

4) In Figure 5, the authors show that the number of new NSCs is significantly increased in the adolescent GHS-R-/- mice, independent of the diet, but this increase does not persist in the adult mice. They conclude that "the removal of GHS-R has a detrimental effect on the regulation of new NSC number..."; this claim is not substantiated and needs to be reformulated. As the GHS-R-/- mice have a transcriptional blockade of Ghsr from the start of its expression, would such an effect on NSC regulation not result in an overall difference in brain development, as ghrelin is also important during embryonic development?

      This is an interesting point. However, we disagree that the statement "the removal of GHS-R has a detrimental effect on the regulation of new NSC number..." is unsubstantiated, since it does not exclude any developmental deficits in these mice that may account for the differences observed. Nonetheless, we will rephrase the sentence to clarify our intended point and remove any ambiguity.

5.) In Figure 6, the authors assess spatial memory performance with a single behavioral test, the OIP. As these kinds of tests are influenced by the animal's motivation to explore, its anxiety levels, physical parameters (movement) etc., the interpretation of such a test without any additional measured parameters can be problematic. The authors claim that the loss of GHS-R expression impairs spatial memory performance. As the discrimination ratio was calculated, it is not possible to see if there is an overall difference in exploration time between genotypes. This would be good additional information to display.

      We thank the reviewer for this insight. We have open-field recordings that will now be analysed to assess general locomotor activity, anxiety and exploration behaviour. These data, alongside exploratory time of the mice during the OIP task will be incorporated into the revised manuscript.
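For context, the discrimination ratio in object-in-place tasks is conventionally computed from raw exploration times as the difference between time spent at displaced and non-displaced objects, normalised by total exploration time. A minimal sketch (the function name and sample values are illustrative assumptions, not the authors' analysis code):

```python
def discrimination_ratio(t_displaced, t_stationary):
    """Discrimination ratio for one object-in-place trial.

    t_displaced:  seconds spent exploring the objects that swapped places
    t_stationary: seconds spent exploring the unmoved objects

    Returns a value in [-1, 1]; 0 indicates no preference (chance-level
    performance), positive values indicate intact spatial memory.
    """
    total = t_displaced + t_stationary
    if total == 0:
        raise ValueError("no exploration recorded for this trial")
    return (t_displaced - t_stationary) / total

# Reporting raw exploration time alongside the ratio (as the reviewer
# suggests) helps separate low motivation from poor discrimination.
trial = {"t_displaced": 22.4, "t_stationary": 14.1}
print(round(discrimination_ratio(**trial), 3),
      "with", round(sum(trial.values()), 1), "s total exploration")
```

Because the ratio divides out total exploration, two genotypes can show identical ratios while differing greatly in absolute exploration time, which is exactly why the raw times are worth displaying.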

      Besides these points listed above, the methods are presented in such a way that they can be reproduced. The experiments contained 10-15 mice per group, which is a large enough group to perform statistical analyses. As mentioned above, the statistical analysis over all 4 groups with p-values for the main effects should be followed by post hoc multiple comparison tests to allow the direct comparison of the corresponding groups.

      Reviewer #3 (Significance (Required)):

In recent years, growing evidence has suggested that IF might have positive effects on health in general and also on neurogenesis. However, a few recent studies report no effects on neurogenesis, using different IF paradigms. This study adds further proof that not all IF paradigms influence neurogenesis and shows that more work needs to be done to better understand when and how IF can have beneficial effects. This is an important finding for the neurogenesis field, but the results are only valid for the specific paradigm used here, which limits its significance. The reporting of such negative findings is however still important, as it shows that IF is not just a universal way to increase neurogenesis. In the end, such findings might have the potential to bring the field together to come up with a more standardized dietary intervention paradigm, which would be robust enough to give similar results across laboratories and mouse strains, and would allow testing the effect of genetic mutations on dietary influences on neurogenesis.

      We thank the reviewer for their insightful and thorough feedback.

      1. Description of the revisions that have already been incorporated in the transferred manuscript

      Please insert a point-by-point reply describing the revisions that were already carried out and included in the transferred manuscript. If no revisions have been carried out yet, please leave this section empty.

      The manuscript has not been revised at this stage.

      2. Description of analyses that authors prefer not to carry out

      Please include a point-by-point response explaining why some of the requested data or additional analyses might not be necessary or cannot be provided within the scope of a revision. This can be due to time or resource limitations or in case of disagreement about the necessity of such additional data given the scope of the study. Please leave empty if not applicable.


      We have included in our replies to the reviewers a description of the amendments that we will make to our manuscript. Two requested revisions stand out as being unnecessary or cannot be provided within the scope of a revision.

The first was the request to perform the 5:2 study in older mice. This is an interesting suggestion; however, the expense and time needed to maintain mice into old age (e.g. >18 months) cannot be provided within the scope of our revision. In addition, given that we report no effect of the 5:2 paradigm on AHN in adolescent (7-week-old) and adult (7-month-old) mice, there is less justification for such a study in older mice.

      The second request, that we disagree with, was to remove data relating to the GHS-R null mice (see reviewer 2, point 2). The role of ghrelin signalling via its receptor, GHS-R, is a central tenet of our hypothesis. Whilst our findings do not support a role for ghrelin signalling in the context of the 5:2 diet studied, we followed the scientific method to empirically test the stated hypothesis. While critiques of experimental design are welcome, the removal of such data may perpetuate publication bias in favour of positive outcomes and is something we wish to avoid.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

In this study, Roberts and colleagues used a specific paradigm of intermittent fasting, the 5:2 diet, meaning 5 days of ad libitum food and 2 non-consecutive days of fasting. They exposed adolescent and adult wildtype mice and ghrelin receptor knockout mice (GHS-R-/-) for 6 weeks to this paradigm, followed by 1 week of ad libitum food. They further used the "object in place task" (OIP) to assess spatial memory performance. At the end of the dietary regime, the authors quantified newborn neurons and neural stem cells (NSCs) by immunohistochemistry. Roberts et al. show that the 5:2 diet does not change the proliferation of cells in the hippocampus, but report an increased number of immature neurons (based on DCX) in all the mice exposed to the 5:2 diet. This change however did not result in an increased number of mature adult-born neurons, as assessed by a BrdU-birthdating paradigm. The authors further show diet-independent effects of the ghrelin receptor knockout, leading to fewer adult-born neurons, but more NSCs in the adolescent mice and a lower performance in the OIP task.

      Major comments:

The main conclusion of this study is that a specific type of intermittent fasting (5:2 diet) has no effects on NSC proliferation and neurogenesis. As there are several studies showing beneficial effects of intermittent fasting on adult neurogenesis, while other studies found no effects, it is important to better understand the effects of such a dietary paradigm.

The experimental approaches used in this manuscript are mostly well explained, but it is overall rather difficult to follow the results part, as the authors always show the 4 experimental groups together (adolescent vs adult and wt vs GHS-R-/-). They highlight the main effects comparing all the groups, which most of the time is the factor "age". Age is a well-known and thus not surprising negative influence on adult neurogenesis. Instead of focusing on the main tested factor, namely the difference in diet, the authors show example images of the two age classes (adolescent vs adult), which does not support the major point they are making. Most of the time, they do not provide a post hoc analysis, so it is difficult to judge if the results with a significant main effect would be significant in a direct 1-to-1 comparison of the corresponding groups. The authors point out themselves that previous rodent studies did not use such a 5:2 feeding pattern, so having diet, age and genotype as factors at the same time makes the assessment of the diet effect more difficult. The manuscript would improve if the authors restructure their data to compare first the diet groups (adolescent wt AL vs 5:2 and in a separate comparison adult wt AL vs 5:2) and only in a later part of the results check if the Ghrelin receptor plays a role or not in this paradigm.

As this 5:2 is a very specific paradigm, it is furthermore difficult to compare these results to other studies, and the conclusions are only valid for this specific pattern and timing of the intervention (6 weeks). It remains unclear why the authors have not first tried to establish a study with wildtype mice and a similar duration as in previous studies observing beneficial effects of intermittent fasting on neurogenesis. That way, it would have been possible to make a statement as to whether the 5:2 per se does not increase neurogenesis or whether the 6 weeks of exposure were just too short.

      The graphical representation of the data could also be improved. Below are a few examples listed:

1. In Figure 1B and C, the same symbols and colours are used for the adolescent and adult animals, which makes the graphs hard to read. One colour and symbol per group throughout the manuscript would be better.
2. The authors found no differences in the total number of Ki67-positive cells in the DG. However, Ki67 staining does not allow one to determine which cell type is proliferating. It would thus strengthen the findings if this analysis were combined with different markers, such as Sox2, GFAP and DCX.
3. In Figure 3, the authors say that the diet increases the number of DCX+ cells in adolescent and adult mice, which is not clear when looking at the graph in 3B. Are there any significant differences when directly comparing the corresponding groups, for instance the WT AL vs the WT 5:2? It is further not clear how the authors distinguished the different types of DCX+ cells based on morphology. The quantification in C and D would need to be illustrated by example images. Furthermore, the colour code used in these graphs is not explained and remains unclear.
4. In Figure 5, the authors show that the number of new NSCs is significantly increased in the adolescent GHS-R-/- mice, independent of the diet, but this increase does not persist in the adult mice. They conclude that "the removal of GHS-R has a detrimental effect on the regulation of new NSC number..."; this claim is not substantiated and needs to be reformulated. As the GHS-R-/- mice have a transcriptional blockade of Ghsr from the start of its expression, would such an effect on NSC regulation not result in an overall difference in brain development, as ghrelin is also important during embryonic development?
5. In Figure 6, the authors assess spatial memory performance with a single behavioral test, the OIP. As these kinds of tests are influenced by the animal's motivation to explore, its anxiety levels, physical parameters (movement) etc., the interpretation of such a test without any additional measured parameters can be problematic. The authors claim that the loss of GHS-R expression impairs spatial memory performance. As the discrimination ratio was calculated, it is not possible to see if there is an overall difference in exploration time between genotypes. This would be good additional information to display.

      Besides these points listed above, the methods are presented in such a way that they can be reproduced. The experiments contained 10-15 mice per group, which is a large enough group to perform statistical analyses. As mentioned above, the statistical analysis over all 4 groups with p-values for the main effects should be followed by post hoc multiple comparison tests to allow the direct comparison of the corresponding groups.

      Minor comments:

The authors should provide more information in the figure legends and always show representative images of the parameters analyzed. Some of the images are also of low resolution and should be replaced with higher-resolution images (for instance Fig. 5A). The significant P values of the multiple comparisons between groups should be added to the figures.

      Significance

In recent years, growing evidence has suggested that IF might have positive effects on health in general and also on neurogenesis. However, a few recent studies report no effects on neurogenesis, using different IF paradigms. This study adds further proof that not all IF paradigms influence neurogenesis and shows that more work needs to be done to better understand when and how IF can have beneficial effects. This is an important finding for the neurogenesis field, but the results are only valid for the specific paradigm used here, which limits its significance. The reporting of such negative findings is however still important, as it shows that IF is not just a universal way to increase neurogenesis. In the end, such findings might have the potential to bring the field together to come up with a more standardized dietary intervention paradigm, which would be robust enough to give similar results across laboratories and mouse strains, and would allow testing the effect of genetic mutations on dietary influences on neurogenesis.

    1. describing it as more personalised and diverse than TV

Though I am not a TikTok user and also don't get news from YouTube, I would say that a big part of the appeal of online information sources in general is that they can be personalized to your interests and what is relevant to you. You're able to curate a lot more of what you consume and what you avoid rather than just accepting what news sources decided was relevant to present to everyone.

    2. On TV we always see the same things, but on YouTube, Spotify, TikTok, we have a range of diversity. … We can get all this and see that there is diversity, society far beyond just what we live.

It's difficult to wrap your head around this, but it's true: there are more sources on YouTube and other media, and possibly more variety of people to agree with. But interestingly, this kind of source comes with a loss of credibility and journalistic rigor. There are some social-media-based journalists who do a good job, but almost three times as many online who spread misinformation.

    3. What makes these networks so appealing to some younger audiences? Qualitative interviews reveal that they are drawn to the informal, entertaining style of visual media (and particularly online video) platforms – describing it as more personalised and diverse than TV, as a resource for rapidly changing events such as the Russia–Ukraine conflict, and as a venue for niche interests, from pop culture to travel to health and well-being.

I really love how it's described in this excerpt: "..informal, entertaining style of visual media..". Visual social media has been on the rise for quite some time now, especially as TikTok grew all around the world. This gave a new platform for many new things, new populations, and new content. News reports can now be fitted to your phone's screen right as you're scrolling through social media. Genius, right? It is nice, don't get me wrong, but people can twist and turn "news" just exactly how they can in a post, through an article, or on TV. It has brought a lot of awareness to lots of different issues, which is a very great thing, but again, it comes back to practicing effective media literacy!

    4. Yet many young people are not necessarily avoiding all news. In fact, many of them are selectively avoiding topics like politics and the Coronavirus specifically.

When it comes to avoiding news, politics and the Coronavirus are the two topics that stand out. I cannot say I disagree, because I am in the same arena of avoidance. The phrase "beating a dead horse" comes to mind. With politics and media sources tending to hold a bias, you cannot be certain what is true and what has been taken out of context. It wasn't until I was in my 20s that I noticed that in my area, the Democrats I know tend to get their news from CNN while the Republicans get theirs from FOX. I feel like there might be some confirmation bias taking place. With Coronavirus, I just feel like there's only so much a person can take. Many people, including myself, lost someone to covid, and it's hard to watch the numbers and statistics while simultaneously having people say it's a hoax and has to do with the election.

    5. Use of TikTok for news has increased fivefold among 18–24s across all markets over just three years, from 3% in 2020 to 15% in 2022

With a potential ban on TikTok constantly looming and trials for said ban well underway in Washington already, I'm curious to know, if TikTok does in fact get banned in the United States, where young people will go for their news. TikTok has already been subject to a lot of scrutiny for deep fakes, false information, etc., so it's a bit concerning that 15% of the 18-24 demographic is getting their news from the platform. I know there are a lot of really good, informed creators on the app, but the algorithm doesn't always push them to the forefront. Honestly, it might be better for our society if TikTok does go away for good.

    1. Moral Relativism (saying that what is good or bad is just totally subjective, and depends on who you ask.)

There is typically no right or wrong answer on certain things when it comes to moral judgement. People who hold different ethical frameworks believe different things and use different scales to determine value. Even though some of the ethical frameworks overlap, one thing that might be acceptable in Confucianism may have another explanation in Taoism, and it's unfair to rate them on the same scale.

    2. Something is right or wrong because God(s) said so. Euthyphro Dilemma: “Is the pious [action] loved by the gods because it is pious, or is it pious because it is loved by the gods?” (Socrates, 400s BCE Greece) If the gods love an action because it is morally good, then it is good because it follows some other ethics framework. If we can figure out which ethics framework the gods are using, then we can just apply that one ourselves without the gods. If, on the other hand, an action is morally good because it is loved by the gods, then it doesn’t matter whether it makes sense under any ethics framework, and it is pointless to use ethics frameworks.1

There's an argument for the Christian God's existence that the universe is too perfect (containing all the laws that make the universe function as it should) to have not been made by a creator. Though I think that looking at the Divine Command Theory from the perspective of "is it good because it's loved by the gods, or is it loved by the gods because it's good" is a bit counter-intuitive. If we take this from a Christian perspective, in which God made the universe according to his will and the laws within it (including the Ten Commandments), then it is set on the standards of what God believes to be pious in action. Thus, the said action is only pious according to God because he said so. Still, even as someone who was raised in the Christian faith, I'm not entirely sold on the Divine Command Theory because it kind of undermines the belief that God also gave us free will to do what we want. Speaking in the extreme sense here, even if someone was doing what was considered morally wrong in the eyes of others (say, killing someone for a ritual sacrifice because God demanded it), most people would hesitate on whether or not a being like God would actually permit such violence (unless in the case of self-defense. Furthermore, ritual sacrifice was done away with in Christian belief because Christians believe that Jesus became the sacrificial lamb so that there wouldn't have to be any more physical sacrifices. Not to mention that if the being being worshipped was something like the "Spaghetti god", we're not exactly sure such claims can be taken seriously even if the following is a cult recognized by law). And considering that some words can be twisted to suit the needs of selfish people, following the Christian perspective this would violate the 2nd Commandment against taking the Lord's name in vain, as it does not permit people twisting the Word of God for selfish benefit (e.g. political movements, etc.).

    1. But it’s not just haphazardly formatted messages and borderline digital harassment (one Mothership client emailed me upwards of eleven times a day in the lead-up to Election Day 2020) that distinguish the Mothership formula—their work occasionally drifts into outright deceit. Their emails often use the “From” to dupe the recipient: one message from Stop Republicans PAC, an organization I’d never even heard of, sent an email with a “From” line labeled as “⚑ Flight Confirmed,” while the subject line included my email address followed by “Your flight confirmation-ZWCLXT 20NOV.” Of course, the email had nothing to do with a flight I was taking; it was a reference to Mike Pence flying to Atlanta to rally for Republicans in the 2020 Georgia Senate run-offs.

      !

    1. Some people argue that the Welch’s t-test should be the default choice for comparing the means of two independent groups, since it performs better than the Student’s t-test when sample sizes and variances are unequal between groups, and it gives identical results when sample sizes and variances are equal. In practice, when you are comparing the means of two groups it’s unlikely that the standard deviations for each group will be identical. This makes it a good idea to just always use Welch’s t-test, so that you don’t have to make any assumptions about equal variances.
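The point above can be made concrete with a minimal pure-Python sketch of the Welch statistic and the Welch–Satterthwaite degrees of freedom (the helper name and sample values here are illustrative; in practice one would typically call a library routine such as scipy.stats.ttest_ind(a, b, equal_var=False)):

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom.

    Unlike Student's t-test, no pooled variance is computed, so no
    equal-variance assumption is needed.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n - 1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# With equal sample sizes and equal variances, Welch's df collapses to the
# Student value n_a + n_b - 2, illustrating the "identical results" claim.
t, df = welch_t([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(t, df)  # t ≈ -3.674, df = 4.0
```

When variances or sample sizes differ, the Welch degrees of freedom drop below n_a + n_b − 2, which is exactly what makes the test more reliable in the situations where Student's test breaks down.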
    1. Author Response

      Reviewer #1 (Public Review):

Sorkac et al. devised a genetically encoded retrograde synaptic tracing method they call retro-Tango, based on their previously developed anterograde synaptic tracing method trans-Tango. The development of genetically encoded trans-synaptic tracers has long been a difficult stumbling block in the field, and the development of trans-Tango a few years back was a breakthrough that was immediately, widely, and successfully applied. The recent development of the retrograde tracer method BAcTrace was also exciting for the field, but it requires lexA driver lines and, by design, requires testing candidate presynaptic neurons rather than offering an unbiased test for connectivity.

Retro-Tango now provides an unbiased retrograde tracer. They cleverly used the same reporter system as for trans-Tango by reversing the signaling modules to be placed in pre-synaptic neurons instead of post-synaptic neurons. Therefore, synaptic tracing leads to the labeling of pre-synaptic neurons under the regulation of the QUAS system. Using visual and olfactory as well as sexually dimorphic circuits, the authors went about providing examples of the specificity, efficiency, and usefulness of the retro-Tango method. The authors successfully demonstrated that many of the known pre-synaptic neurons can be successfully and specifically labelled using the retro-Tango method.

      Most importantly, because it is based on the most used, very well tested and widely adopted trans-Tango method, retro-Tango promises to not just be a clever development, but a really widely and well-used technique as well. This is an outstanding contribution.

      We would like to thank Dr. Hiesinger for his very kind words and for the overall appreciation of the contribution of the development of retro-Tango to the field. We are also grateful for the suggestions below aimed at improving the clarity of our manuscript. We individually address the points raised by Dr. Hiesinger below.

      Reviewer #2 (Public Review):

Tools that enable labeling and genetic manipulations of synaptic partners are important to reveal the structure and function of neural circuits. In a previous study, Barnea and colleagues developed an anterograde tracing method in Drosophila, trans-TANGO, which targets a synthetic ligand to presynaptic terminals to activate a postsynaptic receptor and trigger nuclear translocation of a transcription factor. This allows the labeling and genetic manipulation of cells postsynaptic to the ligand-expressing starter cells. Here, the same group modified trans-TANGO by targeting the ligand to the dendrites of starter cells to genetically access pre-synaptic partners of the starter cells; they call this method retro-TANGO. The authors applied retro-TANGO to various neural circuits, including those involved in escape response, navigation, and sensory circuits for sex peptides and odorants. They also compared their retro-TANGO data with synaptic connectivity derived from serial electron microscopy (EM) reconstruction and concluded that retro-TANGO can allow trans-synaptic labeling of presynaptic neurons that make ~17 synapses or more with the starter cells.

      Overall, this study has generated and characterized a valuable retrograde transsynaptic tracing tool in Drosophila. It's simpler to use than the recently described BAcTrace (Cachero et al., 2020) and can also be adapted to other species. However, the manuscript can be substantially strengthened by providing more quantitative data and more evidence supporting retrograde specificity.

      We thank Dr. Luo for his kind words and his assessment of the value of retro-Tango as a new tool in the transsynaptic labeling toolkit in Drosophila. We followed the suggestions of Dr. Luo for providing more quantitative data and addressing the specificity and directionality of retro-Tango. We strongly believe that the implementation of his suggestions did enhance the quality of our manuscript.

      Reviewer #3 (Public Review):

      This is a valuable addition to the currently available arsenal of methods to study the Drosophila brain.

      There are many positives to the present manuscript as it is:

      (i) The introduction makes a clear and fair comparison with other available tracing methods.

      (ii) The authors do a systematic analysis of the factors that influence the labeling by retro-tango (age, temperature, male versus female, etc...)

(iii) The authors acknowledge that there are some limitations to retro-Tango. For example, the fact that retro-Tango does not label all the expected neurons as indicated by the EM connectome. This is fine because no technique is perfect, and it is very laudable that the authors did a serious study of what one should expect from retro-Tango (for example, a threshold determined by the number of synapses between the connected neurons).

      We would like to thank the reviewer for the kind words and the positive assessment of our manuscript. In addition, we would like to acknowledge the reviewer for the recommendations below, which we followed and we think made our manuscript stronger.

    1. find it comforting to read the acronyms but also we know that we send these acronyms often when we are not laughing, too!

This is something that just came up with my friends and me. We were talking about how, when we send each other TikToks we think are funny but don't actually laugh out loud at, we still send "lol" because it's so normalized.