10,000 Matching Annotations
  1. Aug 2022
    1. Author Response

      Reviewer #1 (Public Review):

      McLachlan and colleagues find surprisingly widespread transcriptional changes occurring in C. elegans neurons when worms are prevented from smelling food for 3 hours. Focusing most of the paper on the transcription of a single olfactory receptor, the authors demonstrate many molecular pathways across a variety of neurons that can cause many-fold changes in this receptor. There is some evidence that the levels of this single receptor can adjust behavior. I believe that the wealth of mostly very convincing data in this paper will be of interest to researchers who think about sensory habituation, but I think the authors' framing of the paper in terms of hunger is misleading.

      There is a lot to like about this paper, but I just cannot get over how off the framing is. Unless I am severely misunderstanding, the paper is about sensory habituation, but the word habituation is not used in the paper. Instead, we hear very often about hunger (6x), state (92x), and sensorimotor things (23x). This makes little sense to me. The worms are "fasted" (111x) for 3 hours, but most of the expression changes are reversed if the worms can smell, but not eat, the food. And I have doubts about this "fasted" state, noting that worms don't eat more food after this type of "fasting". So what is with all of this hunger/state discussion?

      We think that the most straightforward interpretation of our data is that both sensory experience and internal nutritional state modulate str-44 expression. However, we agree that in the previous manuscript draft there was a disproportionate emphasis on state (as compared to sensory experience). The revised manuscript corrects this. However, several results in the manuscript do suggest that state is important, so we have not removed this from the manuscript. The lines of evidence that suggest this are:

      (1) Animals exposed to inedible aztreonam-treated food show an increase in str-44 expression compared to animals exposed to untreated, ingestible food. Thus, food ingestion acts to suppress str-44 expression (Figure 1E).

      (2) Animals exposed to food odor in the absence of food show an intermediate level of str-44 expression between “on bacteria” and “off bacteria” controls (Figure 1E). This incomplete suppression suggests that food odors alone cannot explain the suppression of str-44 expression in well-fed animals.

      (3) Animals that lack intestinal rict-1, a component of the TOR2 nutrient-sensing complex, show an increase in str-44 expression, which suggests that nutrient sensing in the intestine impacts str-44 expression (Figure 5).

      (4) When animals are off food, osmotic stress inhibits the upregulation of str-44 (Figure 1G), reduces the enhanced behavioral sensitivity to butyl acetate (Figure 2G), and reduces the enhanced AWA activity in response to food (Figure 3). This physiological stressor provides a competing state that also impacts str-44 expression.

      We apologize for not adequately describing how three hours of fasting impacts C. elegans behavior in the initial submission. This is obviously a key piece of information and we have corrected this in the revised manuscript. [lines 68-70; 123-126] Regarding pharyngeal pumping rates, C. elegans typically exhibits pharyngeal pumping at a near-maximal rate on the OP50 laboratory diet even when well-fed.

      Consequently, even much longer starvation times will fail to induce more feeding under these conditions. However, many other feeding-related behaviors do change with three hours of fasting, such as velocity on and off food, turning rates, roaming/dwelling behavior on OP50 food, and sensitivity to odorants. Thus, three hours of fasting is sufficient to impact several food search behaviors.

      To more directly address whether sensory habituation in AWA alters str-44 expression, we performed an additional experiment. We exposed wild-type animals to the str-44 odorants butyl acetate or propyl acetate and measured str-44 expression. If habituation explains this effect (i.e., repeated exposure to an odorant reduces transcription/translation of the receptor), we would expect that exposure to these odorants would reduce str-44 expression in “off bacteria” animals. However, we observed no differences between odor-exposed animals and controls. [Figure 4-figure supplement 2B; lines 414-421]

      And the discussion of internal states is often naïve. In the second paragraph of the introduction, we are told that "Recent work has identified specific cell populations that can induce internal states", beginning with AgRP neurons, which have been known to control the hunger state in mammals for nearly 40 years (Clark, J. T., Kalra, P. S., Crowley, W. R., and Kalra, S. P. (1984). Neuropeptide Y and human pancreatic polypeptide stimulate feeding behavior in rats. Endocrinology 115, 427-429; Hahn, T. M., Breininger, J. F., Baskin, D. G., and Schwartz, M. W. (1998). Coexpression of Agrp and NPY in fasting-activated hypothalamic neurons. Nat. Neurosci. 1, 271-272). Instead, the authors cite three papers from 2015, whose major contribution was to show that AgRP activity surprisingly decreases when animals encounter food. These papers absolutely did not identify AgRP neurons as inducing internal states or driving behavioral changes typical of hunger (Aponte, Y., Atasoy, D., and Sternson, S. M. (2011). AGRP neurons are sufficient to orchestrate feeding behavior rapidly and without training. Nat. Neurosci. 14, 351-355. doi: 10.1038/nn.2739; Krashes, M. J., Koda, S., Ye, C., Rogan, S. C., Adams, A. C., Cusher, D. S., et al. (2011). Rapid, reversible activation of AgRP neurons drives feeding behavior in mice. J. Clin. Invest. 121, 1424-1428. doi: 10.1172/JCI46229). Nor did Will Allen's work in Karl Deisseroth's lab discover neurons that drive thirst behaviors.

      We agree that this introductory paragraph did not do justice to the literature and improperly cited only relatively recent work. We have addressed this oversight. [lines 48-53]

      Later in the same paragraph, we hear that: "However, animals can exhibit more than one state at a time, like hunger, stress, or aggression. Therefore, the sensorimotor pathways that implement specific motivated behaviors, such as approach or avoidance of a sensory cue, must integrate information about multiple states to adaptively control behavior." This is undoubtedly true, but it's not clear what it has to do with any of the data in this paper - I don't even think this is really about hunger, much less the interaction between hunger and other drives.

      To summarize: I think the authors could give the writing of the paper a serious rethink. I want to stay far away from telling people how to write their papers, so if the authors insist on framing this obviously sensory paper as being about hunger and sensorimotor circuitry I think they should at least explain to their readers why they are doing that in light of the evidence against it (and I think they should state clearly that worms don't actually eat more in this fasted state).

      Please see the comments above that address these concerns.

      I was also surprised by how unsurprised the authors seemed by the incredibly widespread changes they observed after 3 hours away from food. Over 1400 genes change at least 4-fold? That seems like a lot to me. But the authors, maybe for narrative reasons, only comment on how many of them are GPCRs (16.5%, which isn't that much of an overrepresentation compared to 8.5% in the whole genome). For me, these widespread and strong changes are much of the takeaway from this paper. But it does make you wonder how important the activity of one particular GPCR (selected more or less randomly) could be to the changes the worm undergoes when it can't smell food.

      We agree with the reviewer that given the widespread gene expression changes in fasted animals, the changes in AWA are only a small part of the picture. We have added a discussion of this to the revised manuscript. In addition, we provide some discussion of how our gene expression profiling results relate to others in the field. For example, animals that lack the fasting-responsive transcription factor DAF-16 have been shown to have >3,000 genes differentially expressed relative to controls (Kaletsky, Lakhina et al., 2016). Given the large number of genes changing in those data and in our data, it is possible that transcriptional changes are extremely widespread during fasting. [lines 588-593]

      str-44 is very convincingly upregulated when worms can't smell food, but it's clear from the data that this upregulation has very little to do with the actual lack of eating, and more with the lack of being able to sense bacteria for 3 hours. In Figure 1E, when worms are fasted, but in the presence of bacteria, receptor levels are largely unchanged (there are 5 outliers, out of ~50 samples). Since receptor expression doesn't change in this case even though the worms are in the fasted state, it cannot be "state-dependent" - unless the state is not having smelled food for the last 3 hours. And, in my opinion, that would divorce the word "state" from its ordinary meaning.

      We have more closely examined that dataset, but we don’t feel that it would be accurate to say that the aztreonam (inedible) condition matches the fed condition. The highest points in the aztreonam-treated condition are the most visible on the plot, but the effect is driven by the bulk of the data. Even if we remove the top 5 datapoints from the aztreonam condition, the effect is still statistically significant. Moreover, we performed this experiment over multiple days and the effect was present on each day. However, the reviewer’s point is well taken that sensory experience is equally (if not more) important for str-44 regulation and the text of the initial manuscript did not properly reflect this. As described above, we have modified the revised manuscript so that it is more balanced.

      The authors argue that str-44 expression modulates food-seeking behavior in fasted worms by causing them to preferentially seek out butyl and propyl acetate. However, the behavioral data to back this up has me a little worried. For example, take Figures 2F and 2G. They are the exact same experiment: comparing how many worms choose 1:10,000 butyl acetate compared to ethanol when the worms are either fasted or fed. In the first experiment (2F), ~70% chose butyl acetate for fasted worms and ~60% for fed worms. But in the replicate, ~60% choose butyl acetate for fasted worms and ~50% for fed worms. A 10% variability in baseline behavior is fine (but not what I would call a huge state change), but when the difference between conditions is the same size as baseline variability I start to disbelieve. Can the authors explain this variability? Or am I misunderstanding?

      We and others often observe large variance in C. elegans chemotaxis behavior over time because of small changes in environmental variables such as temperature, humidity, and pressure, so it is standard to always run wild-type controls together with all experimental groups and compare within day. The experiment in Figure 2F was conducted before the others in Figure 2G and Figure 4F. However, we remain highly confident in this result – we observed a difference in fed vs starved every time that we ran this experiment, which (in sum total for wild-type) was on 6 different days, with at least 3 plates per day (40-200 worms per plate).

      And I'll say it just one last time, I think the authors are overselling their results...or at least the str-44 and AWA results (they are dramatically underselling the results that show the widespread changes in the expression level of 10% of the genome in response to not smelling food for 3 hours):

      "Our results reveal how diverse external and internal cues... converge at a single node in the C. elegans nervous system to allow for an adaptive sensorimotor response that reflects a complete integration of the animal's states."

      This implies that str-44 expression in AWA is the determinant of whether a worm will act fasted or fed. I have already expressed why I don't believe this is the case (inedible bacteria experiment, Figure 1E), but just because things like osmotic stress suppress the upregulation of str-44, that doesn't mean that it is the site of convergence. It could be any of the other 1400 genes that changed 4+ fold with bacterial deprivation. And even in terms of the actual AWA neuron, it was chosen because it showed modest upregulation of chemoreceptors (1.8 fold compared to ~1.5 fold in ASE and ASG), even though chemoreceptors were highly upregulated in other neurons as well.

      We agree that AWA chemoreceptors alone are unlikely to explain all of the behavioral changes observed in an animal that has been removed from food, and we certainly did not intend to imply that str-44 expression in AWA is the central determinant of whether the animal acts as though it is fasted or fed. Rather, we have shown that str-44 expression can explain some of these behavioral changes. We have added language throughout the manuscript to indicate that we expect other fasting-regulated genes to be of importance. See also: response to Essential Revision #1.

      Overall, and despite my critiques (and possibly tone), I really like this paper and think there really is a lot of interesting data in there.

    1. Maybe it’s just me noticing this a bit more lately, but it feels as if people are a lot less engaged with the world around them and they don’t really care about their surroundings. And you can see it everywhere when you start paying attention.

      Well, yes – this is the kind of thing that can be 100% a product of starting to pay attention.

  2. scattered-thoughts.net
    1. I like to organize all the state in a program so that it's reachable from some root pointer. This makes it easy for the reader to understand all the resources allocated by the program by just following the tree of types. It also makes it easy to find any piece of state from within a debugger, or to write little debugging helpers that eg check global invariants or print queue lengths.
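      A minimal sketch of this "single root" pattern, in Rust; all type and field names here are illustrative, not from the post. Every resource the program allocates hangs off one `Program` value, so a reader (or a debugger watching that value) can reach all state by following the tree of types, and little helpers that check global invariants or print queue lengths are easy to write against the root.

```rust
// Illustrative sub-trees of program state. In a real program these
// would hold live resources (sockets, buffers, timers, ...).
struct ConnectionPool {
    max_connections: usize,
    open: Vec<String>, // stand-in for live connections
}

struct JobQueue {
    pending: Vec<String>,
}

/// The root: every piece of state is reachable from here.
struct Program {
    pool: ConnectionPool,
    queue: JobQueue,
}

impl Program {
    fn new() -> Self {
        Program {
            pool: ConnectionPool { max_connections: 8, open: Vec::new() },
            queue: JobQueue { pending: Vec::new() },
        }
    }

    // Debugging helpers are trivial because the root sees everything.
    fn queue_length(&self) -> usize {
        self.queue.pending.len()
    }

    fn check_invariants(&self) {
        assert!(self.pool.open.len() <= self.pool.max_connections);
    }
}

fn main() {
    let mut program = Program::new();
    program.queue.pending.push("job-1".to_string());
    program.check_invariants();
    println!("queue length: {}", program.queue_length());
}
```

      With no global statics, pointing a debugger at the single `program` value (or passing `&Program` to any diagnostic function) is enough to inspect or validate the whole program's state.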
    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      In recent years, the field has investigated crosstalk between cGMP and cAMP signaling (PMID: 29030485), lipid and cGMP signaling (PMID: 30742070), and calcium and cGMP signaling (PMID: 26933036, 26933037). In contrast to the Plasmodium field, which has benefited from proteomic experiments (ex: PMID 24594931, 26149123, 31075098, 30794532), second messenger crosstalk in T. gondii has been probed predominantly through genetic and pharmacological perturbations. The present manuscript compares the features of A23187- and BIPPO-stimulated phosphoproteomes at a snapshot in time. This is similar to a dataset generated by two of the authors in 2014 (PMID: 24945436), except that it now includes one BIPPO timepoint. The sub-minute phosphoproteomic timecourse following A23187 treatment in WT and ∆cdpk3 parasites is novel and would seem like a useful resource.

      CDPK3-dependent sites were detected on adenylate cyclase, PI-PLC, guanylate cyclase, PDE1, and DGK1. This motivated study of lipid and cNMP levels following A23187 treatment. The four PDEs determined to have A23187-dependent phosphosites were characterized, including the two PDEs with CDPK3-dependent phosphorylation, which were found to be cGMP-specific. However, cGMP levels do not seem to differ in a CDPK3- or A23187-dependent manner. Instead, cAMP levels are elevated in ∆cdpk3 parasites. This would seem to implicate a feedback loop between CDPK3, the adenylyl cyclase, and PKA/PKG: CDPK3 activity reduces adenylyl cyclase activity, which reduces PKA activity, which increases PKG activity. The authors don't pursue this direction, and instead characterize PDE2, which does not have CDPK3-dependent phosphosites, and seems out of place in the study.

      Response:

      We agree with reviewer 1 that a feedback loop between CDPK3, the adenylyl cyclase and PKA/PKG is certainly one of several possibilities (and we acknowledge this in the manuscript).

      We felt, however, that given the observation that A23187 and BIPPO treatment leads to phosphorylation of numerous PDEs (hinting at the presence of a Ca2+-regulated feedback loop), it was entirely relevant to study these in greater detail. Coupled with the A23187 egress assay on ΔPDE2 parasites, our findings suggest that PDE2 plays an important role in this signalling loop (an entirely novel finding). While PDE2 appears to exert its effects in a CDPK3-independent manner (indeed suggesting that CDPK3 might exert its effects on cAMP levels in a different fashion), this does not detract from the important finding that PDE2 is one of the (likely numerous) components that is regulated in a Ca2+-dependent feedback loop to regulate egress.

      We have modified our writing to better reflect the fact that our decision to pursue study of the PDEs was not solely CDPK3-centric.

      While we feel that our reasoning for studying the PDEs is solid, we appreciate that further clarification on the putative CDPK3-Adenylate cyclase link would make it easier for the reader to follow the rationale.

      We have not studied the direct link between CDPK3 and the Adenylate Cyclase β in more detail, as ACβ alone was shown to not play a major role in regulating lytic growth (Jia et al., 2017).

      **MAJOR COMMENTS**

      1. Some of the key conclusions are not convincing.

      The data presented in Figure 6E, F, and G and discussed in lines 647-679 are incongruent. In Figure 6E, the plaques in the PDE2+RAP image are hardly visible; how can it be that the plaques were accurately counted and determined not to differ from vehicle-treated parasites?

      Are the images in 6E truly representative? Was the order of PDE1 and PDE2 switched? The cited publication by Moss et al. 2021 (preprint) is not in agreement with this study, as stated. That preprint determined that parasites depleted of PDE2 had significantly reduced plaque number and plaque size (>95% reduction); and parasites depleted of PDE1 had a substantially reduced plaque size but a less substantial reduction in plaque number.

      Response:

      The plaques for PDE2+RAP were counted using a microscope since they are difficult to see by eye. We thank the reviewer for detecting our incorrect reference to Moss et al. (2021). This has been corrected in the text. We confirm, however, that the images in 6E are representative of what we observed and do indeed differ from what was seen by Moss et al. We have acknowledged this clearly in the text.

      The differences cannot easily be explained other than by the different genetic systems used. Further studies of the individual PDEs will likely illuminate their role in invasion/ growth, but we feel this would be beyond the scope of this study.

      Unfortunately, the length of time required for PDE depletion (72h) is incompatible with most T. gondii cellular assays (typically performed within one lytic cycle, 40-48h). Although the authors performed the assays 3 days after initial RAP treatment, is there evidence that non-excised parasites don't grow out of the population? This should be straightforward to test: treat, wait 3 days, infect onto monolayers, wait 24-48h, fix, and stain with anti-YFP and an anti-Toxoplasma counterstain. The proportion of the parasite population that had excised the PDE at the time of the cellular assays will then be known, and the reader will have a sense of how complete the observed phenotypes are. As a reader, I will regard the phenotypes with some level of skepticism due to the long depletion time, especially since a panel of PDE rapid knockdown strains (depletion in

      Response:

      1. Cellular assays using KO parasites are commonly performed at the point at which protein depletion is detected. Both our western blots and plaque assay results demonstrate that, at the point of assay, there is no substantial outgrowth of non-excised parasites. The original manuscript also includes PCRs performed at the 72 hr time point (See Fig. 6B) to support this.
      2. We appreciate the reviewer’s comment regarding the panel of PDE KD strains. The reviewer notes that there are substantial limitations to conditional KO systems, which similarly apply to KD systems - there are notable pros and cons to each approach. When designing our strategy (prior to publication of Moss et al., 2022), we made a deliberate decision to use conditional KO strains in light of the fact that residual protein levels in KD systems can cause significant problems, particularly for membrane proteins (all of the investigated PDEs have a transmembrane domain). Tagging of proteins with the degradation domain can cause further issues, leading to protein mis-localisation, which we have experienced with several unrelated proteins in the lab.

        The authors should qualify some of their claims as preliminary or speculative, or remove them altogether.

      The claims in lines 240-260 are confusing. It seems likely that the two drug treatments have at least topological distinctions in the signaling modules, given that cGMP-triggered calcium release is thought to occur at internal stores, whereas A23187-mediated calcium influx likely occurs first at the parasite plasma membrane. The authors' proposed alternative, that treatment-specific phosphosite behavior arises from experimental limitations and "mis-alignment", is unsatisfying for the following reasons: (1) From the outset, the authors chose different time frames to compare the two treatments (15s for BIPPO vs. 50s for A23187); (2) the experiment comprises a single time point, so it does not seem appropriate to compare the kinetics of phosphoregulation. There is still value in pointing out which phosphosites appear treatment-specific under the chosen thresholds, but further claims on the basis of this single-timepoint experiment are too speculative. Lines 264-267 and 281-284 should also be tempered.

      Relatedly, graphing of the data in Figure 1G (accompanying the main text mentioned above) was confusing. Why is one axis a ratio, and the other log10 intensity? What does log10 intensity tell you without reference to the DMSO intensity? Wouldn't you want the L2FC(A23187) vs. L2FC(BIPPO) comparisons? Could you use different point colors to highlight these cases on plot 1E? Additionally, could you use a pseudocount to include peptides only identified in one treatment condition on the plot in 1E? (Especially since these sites are mentioned in lines 272-278 but are not on the plot)

      Response:

      1. The kinetics of the responses to A23187 and BIPPO are very different. This is why treatment timings are purposely different: they were selected to align pathways to a point where calcium levels peak just prior to calcium re-uptake. We make no mention of kinetic comparisons, and merely demonstrate that at the chosen timepoints, overall signalling correlation is very high. The observation that most of the sites that behave differently between conditions sit remarkably close to the threshold for differential regulation (in the treatment condition where they are not DR - see Fig. 1G) led us to speculate that many of these sites are likely on the cusp of differential regulation. While it is entirely possible that some of these differences are, in fact, treatment specific (and we clearly acknowledge this in the text), we simply state that we cannot confidently discern clear signalling features that allow us to distinguish between the two treatments. We feel that this is an entirely relevant observation given the observed preponderance of both A23187- and BIPPO-dependent DR phosphosites on proteins in the PKG signalling pathway (as current models place this upstream of Ca2+ release).
      2. Log10 intensity only serves to spread the data for easier visualisation. The only comparison being made relates to the LFCs. Fig. 1Gi shows the LFC scores (x axis) for all sites regulated following A23187 treatment (for which peptides were also identified in BIPPO treatment). On this plot we have highlighted the sites that are differentially regulated following BIPPO but not A23187 treatment (with red showing the DRup and blue showing the DRdown sites). This demonstrates that many of the sites that are regulated following BIPPO but not A23187 treatment cluster close to the threshold for differential regulation in the A23187 dataset - suggesting that many of these sites are likely on the cusp of differential regulation. Fig. 1Gii shows the reverse. While we could highlight the above-mentioned sites on the plot in Fig. 1E, we do not feel that it would demonstrate our point as clearly.

      We feel that including a pseudocount on Fig. 1E for peptides lacking quantification in one treatment condition would be visually misleading, as the direct correlation being made in Fig. 1E is BIPPO vs A23187 treatment. The sites mentioned in lines 272-278 of the original manuscript (now lines 268-276) are available in the supplementary tables.

      3. Additional experiments would be essential to support the main claims of the paper.

      Genetic validation is necessary for the experiments performed with the PKA inhibitor H89. H89 is nonspecific even in mammalian systems (PMID: 18523239) and in this manuscript it was used at a high concentration (50 µM). The heterodimeric architecture of PKA in apicomplexans dramatically differs from the heterotetrameric enzymes characterized in metazoans (PMID: 29263246), so we don't know what the IC50 of the inhibitor is, or whether it inhibits competitively. Two inducible knockdown strains exist for PKA C1 (PMID: 29030485, 30208022). The authors could request one of these strains and construct a ∆cdpk3 in that genetic background, as was done for the PDE2 cKO strain. Estimated time: 3-4 weeks to generate strain, 2 weeks to repeat assays.

      Response:

      1. While we appreciate that H89 is not 100% specific for PKA, this is not our only line of evidence that cAMP levels are altered. We demonstrate that cAMP levels are elevated in CDPK3 KO parasites – further substantiating our finding.

      The H89 concentration used in our experiment is in keeping with, or lower than, the concentrations used in other Toxoplasma publications (Jia et al., 2017), and both the Toxoplasma and Plasmodium fields have shown convincingly that H89 treatment phenocopies cKD/cKO of PKA (see Jia et al., 2017; Flueck et al., 2019).

      While we agree that the genetic validation suggested by reviewer 1 would serve to further support our findings (though it would not provide further novel insights), the suggested time frame for experimental execution was not realistic. Line shipment, strain generation, subcloning and genetic validation would take substantially longer than 3-4 weeks.

      cGMP levels are found to not increase with A23187 treatment, which is at odds with a previous study (lines 524-560). The text proposes that the differences could arise from the choice of buffer: this study used an intracellular-like Endo buffer (no added calcium, high potassium), whereas Stewart et al. 2017 used an extracellular-like buffer (DMEM, which also contains mM calcium and low potassium). An alternative explanation is that 60 s of A23187 treatment does not achieve a comparable amount of calcium flux as 15 s of BIPPO treatment, and a calcium-dependent effect on cGMP levels, were it to exist, could not be observed at the final timepoint in the assay. The experiments used to determine the kinetics of calcium flux following BIPPO and A23187 treatments (Fig. 1B, C) were calibrated using Ringer's buffer, which is more similar to an extracellular buffer (mM calcium, low potassium). In this buffer, A23187 treatment would likely stimulate calcium entry from across the parasite plasma membrane, as well as across the membranes of parasite intracellular calcium stores. By contrast, A23187 treatment in Endo buffer (low calcium) would likely only stimulate calcium release from intracellular stores, not calcium entry, since the calcium concentration outside of the parasite is low. Because calcium entry no longer contributes to calcium flux arising from A23187 treatment, it is possible that the calcium fluxes of A23187-treated parasites at 60 s are "behind" BIPPO-treated parasites at 15 s. The researchers could control these experiments by *either* (i) performing the cNMP measurements on parasites resuspended in the same buffer used in Figure 1B, C (Ringer's) or (ii) measuring calcium flux of extracellular parasites in Endo buffer with BIPPO and A23187 to determine the "alignment" of calcium levels, as was done with intracellular parasites in Figure 1C. No new strains would have to be generated and the assays have already been established in the manuscript. 
Estimated time to perform control experiments with replicates: 2 weeks. This seems like an important control, because the interpretation of this experiment shifts the focus of the paper from feedback between calcium and cGMP signaling, which had motivated the initial phosphoproteomics comparisons, to calcium and cAMP signaling. Further, the lipidomics experiments were performed in an extracellular-like buffer, DMEM, so it's unclear why dramatically different buffers were used for the lipidomics and cNMP measurements.

      Response:

      While the initial calibration experiments to measure calcium flux were indeed performed in Ringer’s buffer, the parasites were intracellular. We therefore chose to measure cNMP concentrations of extracellular parasites syringe lysed in Endo buffer, which is better at mimicking intracellular conditions than any other described buffer.

      As the reviewer suggested, we measured the calcium flux of extracellular parasites in Endo buffer upon stimulation with either A23187 or BIPPO.

      We found that the peak calcium response to BIPPO in Endo buffer was similar to that of intracellular parasites (~15 seconds post treatment) (See Supp Fig. 6A). Upon treatment with A23187, extracellular parasites in Endo buffer had a much faster response compared to their intracellular counterparts, with peak flux measured at ~25 seconds post treatment (see Supp Fig. 6B). This does suggest that extracellular parasites in Endo buffer respond differently to A23187 compared to their intracellular counterparts. However, the peak calcium response still occurs within the experimental time course and is not being missed, as the reviewer worries. Moreover, since we are able to detect increased cAMP levels in A23187-treated parasites, Ca2+ flux appears sufficient to alter cNMP signalling.

      We did notice, however, that the intensity of the calcium flux was much weaker in Endo buffer compared to intracellular parasites (see Supp Fig. 6B). We found that this was due to the lack of host-derived Ca2+, since supplementation of Endo buffer with 1 µM CaCl2 restored the intensity of the calcium response to match that of intracellular parasites (see Supp Fig. 6C). We therefore decided to repeat our cGMP measurements, this time using extracellular parasites in Endo buffer supplemented with 1 µM CaCl2. However, we found no differences in cGMP levels in response to ionophore under these conditions (now Supp Fig. 6D) compared to the previous experiments, so the conclusions from the previous data do not change.

      As for the lipidomics experiments, we chose to use DMEM so that our dataset could be compared with other published lipidomic datasets (Katris et al., 2020; Dass et al., 2021) where DMEM was also used as a buffer when measuring global lipid profiles of parasites.

      We now acknowledge in the paper that Endo buffer has its shortcomings, and that this could be the reason why we do not detect changes in cGMP concentrations. We do, however, believe that Endo buffer is the best alternative to intracellular parasites and is supported by its consistent use in numerous publications studying Toxoplasma signalling (McCoy et al., 2012; Stewart et al., 2017).

      Additional information is required to support the claim that PDE2 has a moderate egress defect (lines 681-687). T. gondii egress is MOI-dependent (PMID: 29030485). Although the parasite strains were used at the same MOI, there is no guarantee that the parasites successfully invaded and replicated. If parasites lacking PDE2 are defective in invasion or replication, the MOI is effectively decreased, which could explain the egress delay. Could the authors compare the MOIs (number of vacuoles per host cell nuclei) of the vehicle and RAP-treated parasites at t = 0 treatment duration to give the reader a sense of whether the MOIs are comparable?

      Response:

      Since PDE2 KO parasites have a substantial growth defect, we did notice that starting MOIs were consistently lower for the RAP-treated samples compared to the DMSO-treated samples. However, this was also the case for PDE1 KO parasites where we did not see an egress delay. We also found that the egress delay was still evident for ∆CDPK3 parasites, despite having higher starting MOIs than WT parasites in our experiments. Therefore there does not appear to be a link between starting MOIs and the egress delay.

      To be sure of our results, we also performed egress assays in which we co-infected HFFs with mCherry-expressing WT parasites (WT ∆UPRT) and either GFP-expressing PDE2 cKO parasites treated with DMSO or RAP, or ∆CDPK3 parasites. This recapitulated our previous findings, confirming that deletion of PDE2 leads to a delay in A23187-mediated egress.

      4. A few references are missing to ensure reproducibility.

      The manuscript states that the kinetic lipidomics experiments were performed with established methods, but the cited publication (line 497) is a preprint. These are therefore not peer reviewed and should be described in greater detail in this manuscript, including any relevant validation.

      Response:

      We thank the reviewer for pointing this out. We have included a greater description of the methods used in the materials and methods section such that the experiment is reproducible, as per the reviewer’s suggestion. We decided to still make mention of the BioRxiv preprint since we thought it was appropriate for the reader to be informed of ongoing developments in the field.

      Please cite the release of the T. gondii proteomes used for spectrum matching (lines 972-973).

      Response:

      We have included this as per the reviewer’s suggestion.

      Please include the TMT labeling scheme so the analysis may be reproduced from the raw files.

      Response:

      We have included this as per the reviewer’s suggestion in Supp Fig. 3A.

      5. Statistical analyses should be reviewed as follows:

      Have the authors examined the possibility that some changes in phosphopeptide abundance reflect changes in protein abundance? This may be particularly relevant for comparisons involving the ∆cdpk3 strain. Did the authors collect paired unenriched proteomes from the experiments performed? Alternatively, there may be enriched peptides that did not change in abundance for many of the proteins that appear dynamically phosphorylated.

      Response:

      We did not collect unenriched proteomes from the experiments performed (although we did perform unenriched mixing checks to ensure equal loading between samples), and believe that this wasn’t a necessity for the following reasons:

      1. For within-line treatment analyses, treatment timings are so short (a maximum of 15-50s in the single timepoint experiment) that it would be unlikely to detect substantial changes in protein abundance. Moreover, these unlikely events would affect all phosphosites across a protein, and therefore be detectable.

      2. In our CDPK3 dependency timecourse experiments, we normalise both the WT and ∆CDPK3 strains to 0s and measure signalling progression over time. Therefore, any differences at timepoints other than "0" do not originate from basal differences. We also see a consistent increase/decrease in phosphosite detection across the sub-minute timecourse, further confirming that the observed changes are truly down to dynamic changes in phosphorylation and not protein levels.

      3. In the single timepoint CDPK3 dependency analyses (44 regulated sites identified, Data S2), we acknowledge that there could be some risk of altered starting protein abundance between lines. However, if protein abundance were responsible for the changes in phosphosite detection, we would expect all phosphosites across the protein to shift, and we do not observe this. Moreover, when we look at these CDPK3-dependent proteins and compare their phosphosite abundance in untreated WT and ∆CDPK3 lines, we find that for each protein either all or the majority of phosphosites detected are unchanged (highlighting that there is no substantial difference in that protein's abundance between lines). Where there are phosphosite differences between lines, these are only ever on single sites of a protein while most other sites are unchanged, implying that these are changes to basal phosphorylation states and not protein levels.

      It seems like for Figs. 3B and S5 the maximum number of clusters modeled was selected. Could the authors provide a rationale for the number of clusters selected, since it appears many of the clusters have similar profiles.

      Response:

      The number of clusters is chosen automatically by the Mclust algorithm as the value that maximizes the Bayes Information Criterion (BIC). BIC in effect balances gains in model fit (increasing log-likelihood) against increases in the number of parameters (i.e. the number of clusters).
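As an aside for readers unfamiliar with this selection criterion, the same idea can be sketched with a Gaussian mixture model in Python (note that scikit-learn's BIC convention is sign-flipped relative to Mclust's, so it is minimised rather than maximised; the data below are synthetic):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy phosphosite profiles: two well-separated temporal response groups
data = np.vstack([
    rng.normal(loc=[0, 1, 2], scale=0.1, size=(50, 3)),
    rng.normal(loc=[2, 1, 0], scale=0.1, size=(50, 3)),
])

# Fit mixtures with an increasing number of clusters and keep the one
# with the best (lowest, in sklearn's convention) BIC
bics = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bics[k] = gm.bic(data)

best_k = min(bics, key=bics.get)
```

With well-separated profiles like these, the criterion recovers the two underlying response groups; on real phosphoproteomics data, adjacent BIC values can yield clusters with similar-looking profiles, as the reviewer notes.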

      Please include figure panel(s) relating to gene ontology. Relevant information for readers to make conclusions includes p-value, fold-enrichment or gene ratio, and some sort of metric of the frequency of the GO term in the surveyed data set. See PMID: 33053376 Fig. 7 and PMID: 29724925 Fig. 6 for examples or enrichment summaries. Additionally, in the methods, specify (i) the background set, (ii) the method used for multiple test correction, (iii) the criteria constituting "enrichment", (iv) how the T. gondii genome was integrated into the analysis, (v) the class of GO terms (molecular function, biological process, or cellular component), (vi) any additional information required to reproduce the results (for example, settings modified from default).

      Response:

      We have included the additional information requested in the materials and methods.

      We purposely did not include GO figure panels as our analyses are being done across many clusters, making it very difficult to display this information cohesively. We have included all data in Tables S2-S5. These tables included all the relevant information on p-value, enrichment status, ratio in study/ratio in population, class of GO terms etc.
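For orientation, GO-term enrichment p-values of the kind reported in Tables S2-S5 are typically computed with a one-sided hypergeometric (Fisher) test; a minimal sketch with purely hypothetical counts:

```python
from scipy.stats import hypergeom

# Hypothetical numbers for one GO term:
M = 4000   # background: all proteins in the surveyed set
n = 120    # background proteins annotated with the term
N = 200    # proteins in the cluster of interest
k = 18     # cluster proteins annotated with the term

# P(X >= k): probability of drawing at least k annotated proteins
# when sampling N proteins from the background without replacement
p = hypergeom.sf(k - 1, M, n, N)
fold_enrichment = (k / N) / (n / M)
```

Here 18 annotated proteins against an expected 6 gives a 3-fold enrichment with a small p-value; multiple-test correction is then applied across all terms tested.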

      The presentation of the lipidomics experiments in Figure 4A-C is confusing. First, the ∆cdpk3/WT ratio removes information about the process in WT parasites, and it's unclear why the scale centers on 100 and not 1. Second, the data in Figure S6 suggests a more modest effect than that represented in Fig. 4; is this due to day to day variability? How do the authors justify pairing WT and mutant samples as they did to generate the ratios?

      Response:

      This is a common strategy used by many metabolomics experts (Bailey et al., 2015; Dass et al., 2021; Lunghi et al., 2022). We had originally chosen to represent the data as a ratio since this form of representation removes much of the variability that arises between experiments and reveals clear patterns that would otherwise go unnoticed. This variability stems from the amount of lipid in each sample, which varies between parasites in a dish, from the batch of FBS and DMEM used, and from the solutions, and even the room temperature, used to extract lipids on a given day.

      However, we agree with the reviewer that depicting the data in Figure 4A-C as a ratio of ∆CDPK3/WT parasites can be confusing, so we have now changed the graphs, plotting WT and ∆CDPK3 levels instead, and have moved the ratio of ∆CDPK3/WT to the Supplementary Figure 5.

      The significance test seems to be performed on the difference between the WT and ∆cdpk3 strains, but not relative to the DMSO treatment? Wouldn't you want to perform a repeated measures ANOVA to determine (i) if lipid levels change over time and (ii) if this trend differs in WT vs. mutant strain?

      Response:

      The reviewer correctly points out that ANOVA is often used for time courses, but we must point out that it is not always strictly appropriate, since it can overlook the purpose of the individual experimental design, which in this case was 1) to investigate the role of CDPK3 compared to the WT parental strain, and 2) to find the exact point at which DAG begins to change after stimulation, to match the proteomics time course.

      Our data are deliberately weighted towards earlier time points (0, 5, 10, 30 and 45 seconds), where DAG levels are mostly unchanged, with only the single 60-second time point showing a significant difference in DAG by our method of statistical comparison, a paired two-tailed t-test. It would therefore be unwise to use ANOVA when we really want to identify when the A23187 stimulus takes effect, which appears to be after the 45-second mark: ANOVA would likely yield a false negative (a non-significant result) even though there is clearly more DAG in WT than in ∆CDPK3 parasites after 60 seconds. T-tests are commonly used when comparing the same cell lines grown in the same conditions with a test/treatment, and in this case the treatment is the presence or absence of CDPK3 (Lentini et al., 2020).
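The per-timepoint comparison described above can be sketched as follows (the %mol values are invented for illustration; each replicate pairs a WT and a ∆CDPK3 sample processed on the same day):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical %mol DAG values from three paired replicates at 60 s
wt    = np.array([4.1, 3.8, 4.4])   # WT
cdpk3 = np.array([3.2, 2.9, 3.3])   # ∆CDPK3, same experiment days

# Paired two-tailed t-test: pairing by extraction day controls
# the day-to-day variability described above
t, p = ttest_rel(wt, cdpk3)
```

Pairing by extraction day is what motivates a paired, rather than unpaired, t-test here.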

      In the main text, it would be preferable to see the data presented as the proteomics experiments were in Figure 4B and 4C, with fold changes relative to the DMSO (t = 0) treatment, separately for WT and ∆cdpk3 parasites.

      Response:

      We have now changed the way that we represent the data, plotting %mol instead of the ratio.

      Signaling lipids constitute small percentages of the overall pool (e.g. PMID: 26962945), so one might not necessarily expect to observe large changes in lipid abundance when signaling pathways are modulated. Is there any positive control that the authors could include to give readers a sense of the dynamic range? Maybe the DGK1 mutant (PMID: 26962945)?

      Response:

      DGK1 is perhaps not a good example because DGK1 KO parasites effectively "melt" owing to a loss of plasma membrane integrity (Bullen et al., 2016), so this would likely be technically challenging. We do not see the added value of including an additional mutant control, since we can already see the dynamic change over time from no difference (0 seconds) to a significant difference (60 seconds) between WT and ∆CDPK3 parasites for DAG and most other lipids. In the sub-minute timecourses we can clearly see the windows in which A23187 is added and the parasites acclimatise (0-5 seconds), in which A23187 takes effect (10-30 seconds), and in which the parasite lipid response becomes visible by lipidomics (45-60+ seconds).

      Figure 4E: are the differences in [cAMP] with DMSO treatment and A23187 treatment different at any of the timepoints in the WT strain? The comparison seems to be WT/∆cdpk3 at each timepoint. Does the text (lines 562-568) need to be modified accordingly?

      Response:

      In WT (and ∆CDPK3) parasites, [cAMP] is significantly changed at 5s of A23187 treatment (relative to DMSO). We have modified our figures to include this analysis. The existing text accurately reflects this.

      Figure 6I: is the difference between PDE2 cKO/∆cdpk3 + DMSO or RAP significant?

      Response

      In our original manuscript, there was no statistical difference in [cAMP] between DMSO- and RAP-treated PDE2cKO/∆CDPK3 parasites, likely due to variation between biological replicates. To overcome this variability, we have now included more biological replicates (n=7). This has revealed a significant difference in [cAMP] between DMSO- and RAP-treated PDE2cKO/∆CDPK3 parasites and between DMSO- and RAP-treated PDE2cKO parasites (now Fig. 6I).

      **MINOR COMMENTS**

      1. The following references should be added or amended:

      Lines 83-85: in the cited publication, relative phosphopeptide abundances of an overexpressed dominant-negative, constitutively inactive PKA mutant were compared to an overexpressed wild-type mutant. In this experimental setup, one would hypothesize that targets of PKA should be down-regulated (inactive/WT ratios). However, the mentioned phosphopeptide of PDE2 was found to be up-regulated, suggesting that it is not a direct target of PKA.

      Response:

      We thank the reviewer for spotting this error, we have now modified our wording.

      Cite TGGT1_305050, referenced as calmodulin in line 458, as TgELC2 (PMID: 26374117).

      Response:

      We have included this as per the reviewer’s suggestion.

      Cite TGGT1_295850 as apical annuli protein 2 (AAP2, PMID: 31470470).

      Response:

      We have included this as per the reviewer’s suggestion.

      Cite TGGT1_270865 (adenylyl cyclase beta, Acβ) as PMID: 29030485, 30449726.

      Response:

      We have included this as per the reviewer’s suggestion.

      Cite TGGT1_254370 (guanylyl cyclase, GC) as PMID: 30449726, 30742070.

      Response:

      We have included this as per the reviewer’s suggestion.

      Note that Lourido, Tang and David Sibley, 2012 observed that treatment with zaprinast (a PDE inhibitor) could overcome CDPK3 inhibition. The target(s) of zaprinast have not been determined and may differ from those of BIPPO (in identity and IC50). The cited study also used modified CDPK3 and CDPK1 alleles, rather than ∆cdpk3 and intact cdpk1 as used in this manuscript. That is to say, the signaling backgrounds of the parasite strains deviate in ways that are not controlled.

      Response:

      While it is true that zaprinast targets have not been unequivocally identified, zaprinast-induced egress is widely thought to be the result of PKG activation, a conclusion that is further supported by the finding that Compound 1 completely blocks zaprinast-induced egress (Lourido, Tang and David Sibley, 2012). Similarly, BIPPO-induced egress is inhibited by chemical inhibition of PKG by Compound 1 and Compound 2 (Jia et al., 2017). Moreover, like zaprinast, BIPPO has been clearly shown to partially overcome the ∆CDPK3 egress delay (Stewart et al., 2017).

      2. The following comments refer to the figures and legends:

      Part of the legend text for 1G is included under 1H.

      Response:

      This has been corrected

      Figure 1H: The legend mentions that some dots are blue, but they appear green. Please ensure that color choices conform to journal accessibility guidelines. See the following article about visualization for colorblind readers: https://www.ascb.org/science-news/how-to-make-scientific-figures-accessible-to-readers-with-color-blindness/ . Avoid using red and green false-colored images; replace red with a magenta lookup table. Multi-colored images are only helpful for the merged image; otherwise, we discern grayscale better. Applies to Figures 1B, 5C, 6D. (Aside: anti-CAP seems an odd choice of counterstain; the variation in the staining, esp. at the apical cap, is distracting.)

      Response:

      We thank reviewer #1 for bringing this to our attention, and have modified our colour usage for all IFAs and Figures 1H and 3E.

      We chose CAP staining as the antibody is available in the laboratory and stains both the apical end (which has been shown to contain several proteins important for signalling as well as PDE9) and the parasite periphery, the location of CDPK3.

      Figure 1B: When showing a single fluorophore, please use grayscale and include an intensity scale bar, since relative values are being compared.

      Response:

      We have modified this as per the reviewer’s suggestion

      Figure 1C: it is difficult to compare the kinetics of the calcium response when the curves are plotted separately. Since the scales are the same, could the two treatments be plotted on the same axes, with different colors? Additionally, according to the legend, a red line seems to be missing in this panel.

      Response:

      Fig1C is not intended to compare kinetics, merely to show peak calcium release in each separate treatment condition. We have removed mention of a red line in the figure legend.

      Figure 2A: Either Figure S4 can be moved to accompany Figure 2A, or Figure 2A could be moved to the supplemental.

      Response:

      Figure S4 has now been incorporated into Figure 2.

      Reviewer #1 (Significance (Required)):

      This manuscript would interest researchers studying signaling pathways in protozoan parasites, especially apicomplexans, as CDPK3 and PKG orthologs exist across the phylum. To my knowledge, it is the first study that has proposed a mechanism by which a calcium effector regulates cAMP levels in T. gondii. Unfortunately, the experiments fall short of testing this mechanism.

      Response:

      We thank reviewer #1 for their comments, but disagree with their assessment that the key points of the manuscript “fall short of experimental testing”.

      1. We demonstrate that, following both BIPPO and A23187 treatment, there is differential phosphorylation of numerous components traditionally believed to sit upstream of PKG activation (as well as several components within the PKG signalling pathway itself).
      2. We show that some of these sites are CDPK3 dependent, and that deletion of CDPK3 leads to changes in lipid signalling and an elevation in levels of cAMP (dysregulation of which is known to alter PKG signalling).
      3. We show that pre-treatment with a PKA inhibitor is able to largely rescue this phenotype.
      4. We demonstrate that a cAMP-specific PDE is phosphorylated following A23187 treatment (i.e. Ca2+ flux)
      5. We show that this cAMP specific PDE plays a role in A23187-mediated egress.
      6. While the latter PDE may not be directly regulated by CDPK3, these findings suggest that there are likely several Ca2+-dependent kinases that contribute to this feedback loop.

        Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      **Summary:**

      Provide a short summary of the findings and key conclusions (including methodology and model system(s) where appropriate).

      In this manuscript, Dominicus et al investigate the elusive role of calcium-dependent kinase 3 during the egress of Toxoplasma gondii. Multiple functions have already been proposed for this kinase by this group including the regulation of basal calcium levels (24945436) or of a tyrosine transporter (30402958). However, one of the most puzzling phenotypes of CDPK3 deficient tachyzoites is a marked delay in egress when parasites are stimulated with a calcium ionophore that is rescued with phosphodiesterase (PDE) inhibitors. Crosstalk between, cAMP, cGMP, lipid and calcium signalling has been previously described to be important in regulating egress (26933036, 23149386, 29030485) but the role of CDPK3 in Toxoplasma is still poorly understood.

      Here the authors first take an elegant phosphoproteomic approach to identify pathways differentially regulated upon treatment with either a PDE inhibitor (BIPPO) and a calcium ionophore (A23187) in WT and CDPK3-KO parasites. Not much difference is observed between BIPPO or A23187 stimulation which is interpreted by the authors as a regulation through a feed-back loop.

      The authors then investigate the effect of CDPK3 deletion on lipid, cGMP and cAMP levels. They identify major changes in DAG, phospholipid, FFA, and TAG levels as well as differences in cAMP levels but not in cGMP. Chemical inhibition of PKA leads to a similar egress timing in CDPK3-KO and WT parasites upon A23187 stimulation.

      As four PDEs appeared differentially regulated in the CDPK3-KO line upon A23187, the authors investigate the requirement of the 4 PDEs in cAMP levels. They show diverse localisation of the PDEs with specificities of PDE1, 7 and 9 for cGMP and of PDE2 for cAMP. They further show that PDE1, 7 and 9 are sensitive to BIPPO. Finally, using a conditional deletion system, they show that PDE1 and 2 are important for the lytic cycle of Toxoplasma and that PDE2 shows a slightly delayed egress following A23187 stimulation.

      **Major comments:**

      -Are the key conclusions convincing?

      The title is supported by the findings presented in this study. However I am not sure to understand why the authors imply a positive feed back loop. This should be clarified in the discussion of the results.

      Response:

      We believe in a positive feedback loop because, upon A23187 treatment (resulting in a calcium flux), ∆CDPK3 parasites are able to egress, albeit in a delayed manner. This egress delay is substantially, but not completely, alleviated upon treatment with BIPPO (a PDE inhibitor known to activate the PKG signalling pathway). In conjunction with our phosphoproteomic data (where we see phosphorylation of numerous pathway components upstream of PKG upon BIPPO and A23187 treatment, both in a CDPK3-dependent and -independent manner), these observations suggest that calcium-regulated proteins (CDPK3 among them) feed into the PKG pathway. As deletion of CDPK3 delays egress, it is reasonable to postulate that this feedback amplifies egress signalling (i.e. is positive).

      The phosphoproteome analysis seems very strong and will be of interest for many groups working on egress. However, the key conclusion, i.e. that a substrate overlaps between PKG and CDPK3 is unlikely to explain the CDPK3 phenotype, seems premature to me in the absence of robustly identified substrates for both kinases.

      Response:

      We certainly do not fully exclude the possibility of a substrate overlap but do lean more heavily towards a feedback loop given (a) the inability to clearly detect treatment-specific signalling profiles and (b) the phospho targets observed in the A23187 and BIPPO phosphoproteomes. We have further clarified our reasoning, and overall tempered our language in the manuscript as per the reviewer’s suggestion.

      I am not sure there is a clear key conclusion from the lipidomic analysis and how it is used by the authors to build their model up. Major changes are observed but how could this be linked with CDPK3, particularly if cGMP levels are not affected?

      Response:

      Our phosphoproteomic analyses identify several CDPK3-dependent phospho sites on phospholipid signalling components (DGK1 & PI-PLC), suggesting that there is indeed altered signalling downstream of PKG. To test whether these lead to a measurable phenotype, we performed the lipidomics analysis. We did not pursue this arm of the signalling pathway any further as we postulated that the changes in the lipid signalling pathway were less likely to play a role in the feedback loop. Nevertheless, we felt that it was worthwhile to include these findings in our manuscript as they support the conclusions drawn from the phosphoproteomics - namely that lipid signalling is perturbed in CDPK3 mutants. We, or others, may follow up on this in future.

      We agree with the reviewer that it is surprising that cGMP levels remain unchanged in our experiments when we treat with A23187. Given the measurable difference in cAMP levels between WT and ∆CDPK3 parasites, we postulate that CDPK3 directly or indirectly downregulates levels of cAMP. This would, in turn, alter activity of the cAMP-dependent protein kinase PKAc. Jia et al. (2017) have shown a clear dependency on PKG for parasites to egress upon PKAc depletion, but were also unable to reliably demonstrate cGMP accumulation in intracellular parasites. Similarly, their hypothesis that dysregulated cGMP-specific PDE activity results in altered cGMP levels has not been proven (the PDE hypothesised to be involved has since been shown to be cAMP-specific).

      While it is possible that our collective inability to observe elevated cGMP levels is explained by the sensitivity limits of the assay, it is similarly possible that cAMP-mediated signalling is exerting its effects on the PKG signalling pathway in a cGMP-independent manner.

      The evidence that CDPK3 is involved in cAMP homeostasis seems strong. However, the analysis of PKA inhibition is a bit less clear. The way the data is presented makes it difficult to see whether the treatment is accelerating egress of CDPK3-KO parasites or affecting both WT and CDPK3-KO lines, including both the speed and extent of egress. This is important for the interpretation of the experiment.

      Response:

      Fig. 4F shows that there is a significant amount of premature egress in both WT and ∆CDPK3 parasites following 2 hrs of H89 pre-treatment (consistent with previous reports that downregulation of cAMP signalling stimulates premature egress). When we subsequently investigated A23187-induced egress rates of the remaining intracellular H89 pre-treated parasites (Fig. 4Gi-ii) we found that the ∆CDPK3 egress delay was largely rescued. We have moved Fig. 4F to the supplement (now Supp Fig. 5E) in order to avoid confusion between the distinct analyses shown in 4F (pre-treatment analyses) and 4G (egress experiment). These experiments provided a hint that cAMP signalling is affected, which we then validate by measuring elevated cAMP levels in CDPK3 mutant parasites.

      The biochemical characterisation of the four PDEs is interesting and seems well performed. However, PDE1 was previously shown to hydrolyse both cAMP and cGMP (https://doi.org/10.1101/2021.09.21.461320), which raises some questions about the experimental set up. Could the authors possibly discuss why they do not observe similar selectivity? Could other PDEs in the immunoprecipitate mask PDE activity? In line with this question, it is not clear what % of "hydrolytic activity (%)" means and how it was calculated.

      The experiments describing the selectivity of BIPPO for PDE1, 7 and 9 as well as the biological requirement of the four tested PDEs are convincing.

      Response:

      We believe that the disagreement between our findings and those published by Moss and colleagues are due to the differences in experimental conditions. We performed our assays at room temperature for 1 hour with higher starting cAMP concentrations (1 uM) compared to them. They performed their assays at 37ºC for 2 hours with 10-fold lower starting cAMP concentrations (0.1 uM). We have now repeated this set of experiments using the Moss et al. conditions, and find that PDEs 1, 7 and 9 can be dual specific, while PDE2 is cAMP-specific, thereby recapitulating their findings (Now included in the revised manuscript under Supp Fig. 7B). However, we also now performed a timecourse PDE assay using our original conditions and show that the cAMP hydrolytic activity for PDE1 can only be detected following 4 hours of incubation, compared to cGMP activity that can be detected as early as 30 minutes, suggesting that it possesses predominantly cGMP activity (See Supp Fig. 7C). We therefore believe that our experimental setup is more stringent, because if one starts with a lower level of substrate and incubates for longer and at a higher temperature, even minor dual activity could make a substantial difference in cAMP levels. Our data suggests that the cAMP hydrolytic activity of PDEs 1, 7 and 9 is substantially lower than the cGMP hydrolytic activity that they display.

      We have also included a clear description of how % hydrolytic activity was calculated in the methods section.
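As a sketch of one common formulation (the exact formula we used is given in the methods section), percent hydrolytic activity can be expressed as the fraction of cyclic nucleotide consumed relative to a no-enzyme control:

```python
def percent_hydrolytic_activity(substrate_remaining_nM, control_nM):
    """Fraction of cyclic nucleotide consumed relative to a
    no-enzyme (mock IP) control, expressed as a percentage."""
    consumed = control_nM - substrate_remaining_nM
    return 100.0 * consumed / control_nM

# Hypothetical example: 1 uM (1000 nM) cAMP at the start;
# 250 nM remains after 1 h at room temperature
activity = percent_hydrolytic_activity(250.0, 1000.0)  # → 75.0
```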

      -Should the authors qualify some of their claims as preliminary or speculative, or remove them altogether?

      The claim that CDPK3 affects cAMP levels seems strong however the exact links between CDPK3 activity, lipid, cGMP and cAMP signalling remain unclear and it may be important to clearly state this.

      Response:

      We have modified our wording in the text to more clearly describe our current hypothesis and reasoning.

      -Would additional experiments be essential to support the claims of the paper? Request additional experiments only where necessary for the paper as it is, and do not ask authors to open new lines of experimentation.

      I think that the manuscript contains a significant amount of experiments that are of interest to scientists working on Toxoplasma egress. Requesting experiments to identify the functional link between above-mentioned pathways would be out of the scope for this work although it would considerably increase the impact of this manuscript. For example, would it be possible to test whether the CDPK3-KO line is more or less sensitive to PKG specific inhibition upon A23187 induced?

      -Are the suggested experiments realistic in terms of time and resources? It would help if you could add an estimated cost and time investment for substantial experiments.

      The above-mentioned experiment is not trivial as no specific inhibitors of PKG are available. Ensuring for specificity of the investigated phenotype would require the generation of a resistant line which would require significant work.

      Response:

      We agree that this would be an interesting experiment to further substantiate our findings. As indicated by the reviewer, however, the lack of specific inhibitors of PKG means a resistant line would likely be required to ensure specificity.

      -Are the data and the methods presented in such a way that they can be reproduced?

      It is not clear how the % of hydrolytic activity of the PDE has been calculated.

      Response:

      We have included a clearer description of how % hydrolytic activity was calculated in the methods section.

      -Are the experiments adequately replicated and statistical analysis adequate?

      This seems to be performed to high standards.

      **Minor comments:**

      -Specific experimental issues that are easily addressable.

      I do not have any comments related to minor experimental issues.

      -Are prior studies referenced appropriately?

      Most of the studies relevant for this work are cited. It is however not clear to me why some important players of the "PKG pathway" are not indicated in Fig 1H and Fig 3E, including for example UGO or SPARK.

      Response:

      We have modified Fig 1H and 3E to include all key players involved in the PKG pathway.

      -Are the text and figures clear and accurate?

      While all the data shown here is impressive and well analysed, I find it difficult to read the manuscript and establish links between sections of the papers. The phosphoproteome analysis is interesting and is used to orientate the reader towards a feedback mechanism rather than a substrate overlap. But why do the authors later focus on PDEs and not on AC or CNBD, as in the end, if I understand well, there is no evidence showing a link between CDPK3-dependent phosphorylation and PDE activity upon A23187 stimulation?

      Response:

      We thank reviewer#2 and appreciate their constructive feedback re the flow of the manuscript.

      Our key findings from the phosphoproteomics study were that 1) BIPPO and A23187 treatment trigger near-identical signalling pathways, 2) both A23187 and BIPPO treatment lead to phosphorylation of numerous components both upstream and downstream of PKG signalling (hinting at the presence of a Ca2+-regulated feedback loop) and 3) several of the abovementioned components are phosphorylated in a CDPK3-dependent manner.

      While several avenues of study could have been pursued from this point onwards, we chose to focus on the feedback loop in a broader sense as its existence has important implications for our general understanding of the signalling pathways that govern egress.

      We reasoned that, given the differential phosphorylation of 4 PDEs following A23187 and BIPPO treatment (none of which had been studied in detail previously), it was relevant to study these in greater detail.

      Coupled with the A23187 egress assay on PDE2 knockout parasites, our findings suggest that PDE2 plays a role in the abovementioned Ca2+ signalling loop. While PDE2 may not exert its effects in a CDPK3-dependent manner (and CDPK3 may, therefore, alter cAMP levels in a different fashion), this does not detract from the important finding that PDE2 is one of the (likely numerous) components regulated in a Ca2+-dependent feedback loop to facilitate rapid egress.

      We have modified our wording to better reflect our rationale for studying the PDEs irrespective of their CDPK3 phosphorylation status.

      While we feel that our reasoning for studying the PDEs is solid, we do appreciate that further clarification of the putative CDPK3-adenylate cyclase link would elevate the manuscript substantially. However, given our data showing that ACβ does not play a sole role in the control of egress, this is likely a non-trivial task requiring substantial work.

      It is also unclear how the authors link CDPK3-dependent elevated cAMP levels with the elevated basal calcium levels they previously described. This is particularly difficult to reconcile in a PKG-independent manner.

      Response:

      We previously postulated that elevated Ca2+ levels allowed ΔCDPK3 mutants to overcome a complete egress defect, potentially by activating other CDPKs (e.g. CDPK1). It is similarly plausible that elevated Ca2+ levels in ΔCDPK3 parasites may lead to elevated cAMP levels in order to prevent premature egress.

      As noted in our previous responses, we acknowledge that our inability to detect cGMP is surprising. However, given the clarity of our cAMP findings, and the phosphoproteomic evidence to suggest that various components in the PKG signalling pathway are affected, we postulate that we are either unable to reliably detect cGMP due to sensitivity issues, or that cAMP is exerting its regulation on the PKG pathway in a cGMP-independent manner. As noted previously, while the link between cAMP and PKG signalling has been demonstrated by Jia et al., it is not entirely clear how this is mediated.

      The presentation of the lipidomic analysis is also not really clear to me. Why do the authors show the global changes in phospholipids and not a more detailed analysis?

      Response:

      We performed a detailed phospholipid profile of WT and ∆CDPK3 parasites under normal culture conditions. However, due to the sheer quantity of parasites required for this detailed analysis, we were unable to measure individual phospholipid species in our A23187 timecourse. We therefore opted to measure global changes following A23187 stimulation.

      As the authors focus on the PI-PLC pathway, could they detail the dynamics of phosphoinositides? I understand that lipid levels are affected in the mutant, but I am not sure I understand how the authors interpret these massive changes in relation to the function of CDPK3 and the observed phenotypes.

      Response:

      Our phosphoproteomic analyses identified several CDPK3-dependent phosphosites on phospholipid signalling components (DGK1 & PI-PLC), suggesting that (in keeping with all of our other data) there is altered signalling downstream of PKG. To test whether these changes lead to a measurable phenotype, we performed the lipidomics analysis. Following stimulation with A23187, we found a delayed production of DAG in ∆CDPK3 parasites compared to WT parasites. Since DAG is required for the production of PA, which in turn is required for microneme secretion, our finding can explain why microneme secretion is delayed in ∆CDPK3 parasites, as previously reported (Lourido, Tang and David Sibley, 2012; McCoy et al., 2012).

      We did not follow this arm of the signalling pathway any further as we postulated that the changes in the lipid signalling pathway were less likely to play a role in the feedback loop. Nevertheless, we felt that it was worthwhile to include these findings in our manuscript as they support the conclusions drawn from the phosphoproteomics - namely that lipid signalling is perturbed in CDPK3 mutants. We, or others, may follow up on this in future.

      Finally, the characterisation of the PDEs is an impressive piece of work but the functional link with CDPK3 is relatively unclear. It would also be important to clearly discuss the differences with previous results presented in this preprint: https://doi.org/10.1101/2021.09.21.461320.

      My understanding is that while the authors aim at investigating the role of CDPK3 in A23187-induced egress, the main finding related to CDPK3 is a defect in cAMP homeostasis that is not linked to A23187. Similarly, the requirement of PDE2 in cAMP homeostasis and egress is indirectly linked to CDPK3. Altogether I think that important results are presented here but divided into three main and distinct sections: the phosphoproteomic survey, the lipidomic and cAMP level investigation, and the characterisation of the four PDEs. However, the link between each section is relatively weak and the way the results are presented is somewhat misleading or confusing.

      Response:

      As mentioned in a previous response, we chose to study PDEs in greater detail because of our observation that both A23187 and BIPPO treatments lead to their phosphorylation (hinting at the presence of a Ca2+-regulated feedback loop). We were particularly intrigued to study the cAMP-specific PDE, as data from CDPK3 KO parasites suggested that cAMP may play a role in the Ca2+ feedback mechanism. As PDE2 may not be directly regulated by CDPK3, Ca2+ appears to exert its feedback effects in numerous ways. We have modified our wording to better reflect our rationale for studying the PDEs irrespective of their CDPK3 phosphorylation status.

      -Do you have suggestions that would help the authors improve the presentation of their data and conclusions?

      This is a very long manuscript written for specialists of this signalling pathway, and I would suggest that the authors emphasise the important results more and also clearly state where links are still missing. This is obviously a complex pathway and one cannot elucidate it easily in a single manuscript.

      Response:

      We have included an additional summary in our conclusions to better illustrate our findings and clarify any missing links.

      Reviewer #2 (Significance (Required)):

      -Describe the nature and significance of the advance (e.g. conceptual, technical, clinical) for the field.

      This is a technically remarkable paper using a broad range of analyses performed to a high standard.

      -Place the work in the context of the existing literature (provide references, where appropriate).

      The cross-talk between cAMP, cGMP and calcium signalling is well described in Toxoplasma and related parasites. Here the authors show that, in Toxoplasma, CDPK3 is part of this complex signalling network. One of the most important findings within this context is the role of CDPK3 in cAMP homeostasis. With this in mind, I would change the last sentence of the abstract to "In summary we uncover a feedback loop that enhances signalling during egress and links CDPK3 with several signalling pathways together."

      Response:

      In light of feedback received from several reviewers, we have made our wording less CDPK3-centric, as our findings relate in part to CDPK3 and, in a broader sense, to a Ca2+-driven feedback loop.

      The genetic and biochemical analyses of the four PDEs are remarkable and highlight consistencies and inconsistencies with recently published work that would be important to discuss and will be of interest for the field.

      __Response:__ We thank reviewer #2 and agree that the PDE findings are of significant importance to the field.

      While I understand the studied signalling pathway is complex, I think it would be important to better describe the current model of the authors. In the discussion, the authors indicate that "the published data is not currently supported by a model that fits most experimental results." I would suggest to clarify this statement and discuss whether their work helps to reunite, correct or improve previous models.

      __Response:__ We have expanded on the abovementioned statement to clarify that the presence of a feedback loop is a major pillar of knowledge required for the complete interpretation of existing signalling data.

      Could the authors also speculate about a potential role of PDE/CDPK3 in host cell invasion, as cAMP signalling has been shown to be important for this process (30208022 and 29030485)?

      __Response:__ Existing literature (Jia et al., 2017) suggests that perturbations to cAMP signalling play a very minor role in invasion, since parasites where either ACα or ACβ are deleted show no impairment in invasion levels. We currently do not have substantial data on invasion, and are not sure that pursuing this is valuable given the minor phenotypes observed in other studies.

      -State what audience might be interested in and influenced by the reported findings.

      This paper is of great interest to groups working on the regulation of egress in Toxoplasma gondii and other related apicomplexan pathogens.

      -Define your field of expertise with a few keywords to help the authors contextualize your point of view. Indicate if there are any parts of the paper that you do not have sufficient expertise to evaluate.

      I am working on the cell biology of apicomplexan parasites.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      **Summary:**

      Dominicus et al aimed to identify the intersecting components of calcium, cyclic nucleotide (cAMP, cGMP) and lipid signaling through phosphoproteomic, knockout and biochemical assays in an intracellular parasite, Toxoplasma gondii, particularly when its acutely-infectious tachyzoite stage exits the host cells. A series of experimental strategies were applied to identify potential substrates of calcium-dependent protein kinase 3 (CDPK3), which has previously been reported to control tachyzoite egress. According to earlier studies (PMID: 23226109, 24945436, 5418062, 26544049, 30402958), CDPK3 regulates the parasite exit through multiple phosphorylation events. Here, authors identified differentially-regulated (DR) phosphorylation sites by comparing the parasite samples after treatment with a calcium ionophore (A23187) and a PDE inhibitor (BIPPO), both of which are known to induce artificial egress (induced egress as opposed to natural egress). When the ΔCDPK3 mutant was treated with A23187, its delayed egress phenotype did not change, whereas BIPPO restored the egress to the level of the parental (termed WT) strain, probably by activating PKG.

      The gene ontology enrichment of the up-regulated clusters revealed many probable CDPK3-dependent DR sites involved in cyclic nucleotide signaling (PDE1, PDE2, PDE7, PDE9, guanylate and adenylate cyclases, cyclic nucleotide-binding protein or CNBP) as well as lipid signaling (PI-PLC, DGK1). Authors suggest lipid signaling as one of the factors altered in the CDPK3 mutant, albeit lipidomics (PC, PI, PS, PT, PA, PE, SM) showed no significant change in phospholipids. To reveal how the four PDEs indicated above contribute to cAMP- and cGMP-mediated egress, they examined their biological significance by knockout/knockdown and enzyme activity assays. Authors claim that PDE1, 7 and 9 proteins are cGMP-specific while PDE2 is cAMP-specific, and that BIPPO treatment can inhibit PDE1-cGMP and PDE7-cGMP, but not PDE9-cGMP. Given the complexity, the manuscript is well structured, and most experiments were carefully designed. Undoubtedly, there is a significant amount of work that underlies this manuscript; however, from a conceptual viewpoint, the manuscript does not offer significant advancement over the current knowledge without functional validation of the phosphoproteomics data (see below). A large body of work preceding this manuscript has indicated the crosstalk of cAMP, cGMP, calcium and lipid signaling cascades. This work provides a further refinement of the existing model. In a methodical sense, the work uses established assays, some of which require revisiting to reach robust conclusions and avoid misinterpretation. The article is quite interesting from a throughput screening point of view, but it clearly lacks the appropriate endorsement of the hits. The authors accept that identifying the phosphorylation of a protein does not imply a functional role, which is a major drawback as there is no experimental support for any phosphorylation site of the proteins identified through phosphoproteomics. In terms of the mechanism, it is not clear whether and how lipid turnover and cAMP-PKA signaling control the egress phenotype (lack of a validated model at the end of this study).

      Response:

      We thank reviewer #3 for their comments, but respectfully disagree with their assessment that the work presented does not advance current knowledge.

      1. We demonstrate that, following both BIPPO and A23187 treatment, there is differential phosphorylation of numerous components traditionally believed to sit upstream of PKG activation (as well as numerous components within the PKG signalling pathway itself). While it may have been inferred from previous studies that A23187 and BIPPO signalling intersect, this has never been unequivocally demonstrated - nor has a feedback loop ever been shown.

      2. We provide a novel A23187-driven phosphoproteome timecourse that further bolsters the model of a Ca2+-driven feedback loop.

      3. We show that deletion of CDPK3 leads to a delay in DAG production upon stimulation with A23187.

      4. We show that some of the abovementioned sites are CDPK3-dependent, and that deletion of CDPK3 leads to elevated levels of cAMP (dysregulation of which is known to alter PKG signalling).

      5. We show that pre-treatment with a PKA inhibitor is able to largely rescue this phenotype.

      6. We demonstrate that a cAMP-specific PDE is phosphorylated following A23187 treatment (i.e. Ca2+ flux).

      7. We show that this cAMP-specific PDE plays a role in egress.

      While the latter PDE may not be directly regulated by CDPK3, these findings suggest that there are likely several Ca2+-dependent kinases that contribute to this feedback loop.

      We also firmly disagree with the reviewer’s assertion that without phosphosite characterisation, we have no support for our model. Following treatment with A23187 (and BIPPO), we clearly show broad, systemic changes (both CDPK3 dependent and independent) across signalling pathways previously deemed to sit upstream of calcium flux. Given the vast number of proteins involved in these signalling pathways, and the multitude of differentially regulated phosphosites identified on each of them, it is highly likely that the signalling effects we observe are combinatorial. Accordingly, we believe that mutating individual sites on individual proteins would be a very costly endeavour which is unlikely to substantially advance our understanding of signalling during egress. Moreover, introducing multiple point mutations in a given protein to ablate phosphorylation may lead to protein misfolding and would therefore not be informative. One of the key aims of this study was to assess how egress signalling pathways are interconnected, and we believe we have been able to show strong support for a Ca2+-driven feedback mechanism in which both CDPK3 and PDE2 play a role through the regulation of cAMP.

      While we agree with the reviewer’s statement that a large body of work preceding this manuscript has indicated the crosstalk of cAMP, cGMP, calcium and lipid signalling cascades, a feedback loop has not previously been shown. We believe that this finding is absolutely central to facilitate the complete interpretation of existing signalling data. Furthermore, no previous studies have gone to this level of detail in either proteomics or lipidomics to analyse the calcium signalling pathway in any apicomplexan parasite. We argue that the novelty in our manuscript is that it is a carefully orchestrated study that advances our understanding of the signalling network over time with subcellular precision. The kinetics of signalling is not well understood and we believe that our study is likely the first to include both proteomic and lipidomic analyses over a timecourse during the acute lytic cycle stage of the disease. In doing so, we found evidence for a feedback loop that controls the signalling network spatiotemporally, and we characterise elements of this feedback in the same study.

      **Major Comments:**

      Based on the findings reported here there is little doubt that BIPPO- and A23187-induced signaling intersect with each other, as very much expected from previous studies. The authors selected the 50 s and 15 s post-treatment timepoints of A23187 and BIPPO, respectively, for collecting phosphoproteomics samples. At these time points, which were shown to peak cytosolic Ca2+, parasites were still intracellular (Line #171). How did the authors make sure to stimulate the entire signaling cascade adequately, particularly when parasites do not egress within the selected time window? There is significant variability between phosphosite intensities of replicates (Line #186), which may also be attributed to insufficient triggers for the egress across independent experiments. This work must be supported by in vitro egress assays with the chosen incubation periods of BIPPO and ionophore treatment (show the induced % egress of tachyzoites at 50 s and 15 s).

      Response:

      1. We appreciate that the reviewer acknowledges that our data clearly show that BIPPO- and A23187-induced signalling intersect. While this may have been expected from previous studies, it has not previously been shown, and is therefore valuable to the field. Specifically, the fact that A23187 treatment leads to phosphorylation of targets normally deemed to sit upstream of calcium release is entirely novel and adds a substantial layer of information to our understanding of how these signalling pathways work together.

      2. Treatments were purposely selected to align pathways to a point where calcium levels peak just prior to calcium reuptake. At these chosen timepoints, we clearly show that overall signalling correlation is very high. We know from our egress assays using identical treatment concentrations (Fig. 2C) that the stimulations used are sufficient to result in complete egress. We are simply comparing signalling pathways at points prior to egress.

      3. As mentioned in point 2, we show convincingly that the treatments used are sufficient to trigger complete egress. As detailed clearly in the text, we believe that these variations in intensities between replicates are due to slight differences in timing between experiments (this is inevitable given the very rapid progression of signalling, and the difficulty of replicating exact sub-minute treatment timings). We demonstrate that the reporter intensities associated with DR sites correlate well across replicates (Supp Fig. 3C), suggesting that despite some replicate variability, the overall trends across replicates are very much consistent. This allows us to confidently average scores to provide values that are representative of a site’s phosphorylation state at the timepoint of interest.

      4. The reviewer’s suggestion that we should demonstrate % egress at the 50 s and 15 s treatment timepoints is unnecessary: we state clearly in the text that parasites have not egressed at these timepoints. Our egress assays (Fig. 2C) further support this.

      The authors discuss that CDPK3 controls the cAMP level and PKA through activation of one or more yet-to-be-identified PDE(s). cAMP could probably also be regulated by an adenylate cyclase, ACbeta, which was found to have CDPK3-dependent phosphorylation sites. If CDPK3 is indeed a regulator of cAMP through the activation of PDEs or ACbeta, it would be expected that the deletion of CDPK3 would perturb the cAMP level, resulting in dysregulation of the PKAc1 subunit, which in turn would dysregulate cGMP-specific PDEs (PMID: 29030485) and thereby PKG. All these connections need to be explained more clearly, with experimental support (what is positively and what is negatively regulated by CDPK3).

      Response:

      1. We do not firmly state that CDPK3 regulates cAMP by phosphorylation of a PDE - this is one of the possibilities addressed. We acknowledge the possibility that this could also be via the adenylate cyclase (see line 792).

      2. PMID: 29030485 demonstrates clearly a link between cAMP signalling and PKG signalling, but does not demonstrate how this is mediated. The authors postulate that a cGMP-specific PDE is dysregulated given their observation that PDE2 is differentially phosphorylated in a constitutively inactive PKA mutant, but this was not validated experimentally. We and others (Moss et al., 2022), however, demonstrate that PDE2 is cAMP-specific. This suggests that the model built by PMID: 29030485 requires revisiting. We acknowledge clearly in the text that Jia et al. have shown a link between cAMP and PKG signalling, and hypothesise that CDPK3’s modulation of cAMP levels may affect this (this is in keeping with our phosphoproteomic data).

      Moreover, the egress defect is not due to a low influx of calcium in the cytosol because when the ionophore A23187 was added to the CDPK3 mutant, its phenotype was not recovered. Rather, the defect may be due to the low or null activity of PKG that would activate PI4K to generate IP3 and DAG. The latter would be used as a substrate by DGK to generate PA that is involved in the secretion of micronemes and Toxoplasma egress. In this context, authors should evaluate the role of CDPK3 in the secretion of micronemes that is directly related to the egress of the parasite.

      1. We agree with the reviewer on their point about calcium influx, and have already acknowledged in the text that the feedback loop does not control release of Ca2+ from internal stores, as disruption of CDPK3 does not lead to a delay in Ca2+ release.

      2. We agree, and clearly address in the text, that the egress defect could be due to altered PKG/phospholipid pathway signalling.

      3. Lourido, Tang and Sibley (2012) and McCoy et al. (2012) have both previously shown that microneme secretion is regulated by CDPK3. We therefore do not deem it necessary to repeat this experiment, but have made clearer mention of their findings in our writing.

      When the Dcdpk3 mutant with BIPPO treatment was evaluated, it was observed that the parasite recovered the egress phenotype. It is concluded that CDPK3 could probably regulate the activity of cGMP-specific PDEs. CDPK3 could (in)activate them, or it could act on other proteins indirectly regulating the activity of these PDEs. Upon inactivation of PDEs, an increase in the cGMP level would activate PKG, which will, in turn, promote egress. From the data, it is not clear whether any phosphorylation by CDPK3 would activate or inactivate PDEs, and if so, then how (directly or indirectly). To reach unambiguous interpretation, authors should perform additional assays.

      Response:

      As mentioned previously, given the abundance of differentially regulated phosphosites, we do not believe that mutating individual sites on individual proteins is a worthwhile or realistic pursuit.

      We clearly show systematic A23187-mediated phosphorylation of key signalling components in the PKA/PKG/PI-PLC/phospholipid signalling cascade, and demonstrate that several of these are CDPK3-dependent. We demonstrate that CDPK3 alters cAMP levels (and that the ∆CDPK3 egress delay in A23187 treated parasites is largely rescued following pre-treatment with a PKA inhibitor). We similarly demonstrate that A23187 treatment leads to phosphorylation of numerous PDEs, including the cAMP specific PDE2, and show that PDE2 knockout parasites show an egress delay following A23187 treatment. While PDE2 may not be directly regulated by CDPK3 (suggesting other Ca2+ kinases are also involved), these findings collectively demonstrate the existence of a calcium-regulated feedback loop, in which CDPK3 and PDE2 play a role (by regulating cAMP).

      We acknowledge that we have not untangled every element of this feedback loop, and do not believe that it would be realistic to do so in a single study given the number of sites phosphorylated and pathways involved. We do believe, however, that we have clearly shown that the feedback loop exists; this in itself is entirely novel, and of significant importance to the field.

      On a similar note, a possible experiment that can be done to improve the work would be to treat the CDPK3 mutant with BIPPO in conjunction with a calcium chelator (BAPTA-AM) to reveal, which proteins are phosphorylated prior to activation of the calcium-mediated cascades?

      Response:

      We agree that this would be an interesting experiment to carry out but would involve significant work. This could be pursued in another paper or project but is beyond the scope of this work.

      The manuscript claims that PDE1, PDE7 and PDE9 are cGMP-specific, and that BIPPO inhibits only cGMP-specific PDEs. All assays are performed with 1-10 micromolar cAMP and cGMP for 1 h. There is no data showing the time, protein and substrate dependence. Given the suboptimal enzyme assays, authors should re-do them as suggested here. (1) Repeat the pulldown assay with a higher number of parasites (50-100 million) and measure the protein concentration. (2) Set up the PDE assay with a saturating amount of cAMP and cGMP, which is critical if PDE1, 7 and 9 have a higher Km value for cAMP (meaning lower affinity) compared to cGMP. An adequate amount of substrate and protein allows the reaction to reach the Vmax. Once you have re-determined the substrate specificity (revise Fig 5D), you should retest BIPPO (Fig 5E) in the presence of cAMP and cGMP. It is very likely that you would find the same result as for PDE9 and PfPDEβ (BIPPO can inhibit both cAMP- and cGMP-specific PDEs), as described previously.

      Response:

      We have repeated our assay using the exact same conditions outlined by Moss et al. This involved using a similar number of parasites, a longer incubation time of 2 hours at a higher temperature (37ºC), and a lower starting concentration of cAMP (0.1 uM). We demonstrate that we are able to recapitulate both the Moss et al. and Vo et al. findings (see Supp Fig. 7B). However, we noticed that these reactions were not carried out with saturating cAMP/cGMP concentrations, since all reactions had reached 100% completion at the end of the assay, whereby all substrate was hydrolysed. We therefore believe, based on our original assay as well as the new PDE1 timecourse that we have performed (Supp Fig. 7C), that PDEs 1, 7 and 9 display predominantly cGMP-hydrolysing activity, with moderate cAMP-hydrolysing activity.

      We also repeated the BIPPO inhibition assay using the Moss et al. conditions, and still observe that the cGMP activity of PDE1 is the most potently inhibited of all 4 PDEs. We also see moderate inhibition of the cAMP activities of PDE1 and PDE9, suggesting that cAMP hydrolytic activity can also be inhibited. Interestingly, the cGMP hydrolytic activities of PDEs 7 & 9, which were previously inhibited using our original assay conditions, no longer appear to be inhibited. This is likely due to the longer incubation time, which masks the reduced activities of these two PDEs following treatment with BIPPO.

      The authors did not identify any PKG substrate, which is quite surprising as cAMP signaling itself could impact cGMP. Authors should show whether they were able to observe enhanced cGMP levels in the BIPPO-treated sample (which is expected to stimulate cGMP-specific PDEs). The authors mention their inability to measure the cGMP level, but have they analyzed cGMP in the positive control (BIPPO-treated parasite line)? Why have they focused only on the CDPK3 mutant, whereas in their phosphoproteomic data they could see other CDPKs too? It could be that other CDPK-mediated signaling differs and needs PKA/PKG for activation.

      In the title, the authors have mentioned that there is a positive feedback loop between calcium release, cyclic nucleotide and lipid signaling, which is quite an extrapolation as there is no clear experimental data supporting such a positive feedback loop, so the authors should change the title of the paper.

      Response:

      1. As addressed in our previous response to the reviewer, PMID: 29030485 demonstrates clearly a link between cAMP signalling and PKG signalling, but does not confirm how this is mediated. The authors surmise that a cGMP-specific PDE is dysregulated (although the PDE hypothesised to be involved has since been shown to be cAMP-specific), but are similarly unable to detect changes in cGMP levels. This suggests that their model may be incomplete.

      2. The BIPPO treatment experiment suggested by the reviewer was already included in the original manuscript (see Fig. 4D in original manuscript, now Fig. 4E). With BIPPO treatment we are able to detect changes in cGMP levels.

      3. We did not deem it within the scope of this study to investigate every other CDPK. We chose to study CDPK3, as its egress phenotype was of particular interest given its partial rescue following BIPPO treatment. We reasoned that its study might lead us to identify the signalling pathway that links BIPPO- and A23187-induced signalling.

      4. As addressed in greater detail in our response to reviewer #2, the fact that the feedback loop appears to stimulate egress implies that it is positive.

      **Minor Comments:**

      Materials & Methods

      Explanation of parameters is not clear (Line #360-367). Phosphoproteomics with A23187 (8 micromolar) treatment in CDPK3-KO and WT, for 15, 30 and 60 s incubation at 37°C, with DMSO control. Simultaneously passing the DR and CDPK3-dependency thresholds: CDPK3-dependent phosphorylation.

      __Response:__ We have modified the wording to make this clearer as per the reviewer’s suggestion.

      Line #368: At which WT-A23187 timepoint did the authors identify 2408 DR-up phosphosites (15 s, 30 s or 60 s)? Or consistently in all? This should be clarified.

      __Response:__ As already stated in the manuscript (see line 366 in original manuscript, now line 1047), phosphorylation sites were considered differentially regulated if at any given timepoint their log2FC surpassed the DR threshold.
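      As a minimal sketch of that rule (the threshold value and function name here are hypothetical placeholders, not the manuscript's actual cutoff):

```python
def is_differentially_regulated(log2fc_by_timepoint, dr_threshold=1.0):
    """A site is DR if its |log2FC| surpasses the threshold at ANY timepoint.

    dr_threshold is an illustrative placeholder, not the study's cutoff.
    """
    return any(abs(fc) >= dr_threshold for fc in log2fc_by_timepoint)

print(is_differentially_regulated([0.2, 1.4, 0.3]))  # True: exceeds at one timepoint
print(is_differentially_regulated([0.2, 0.5, 0.3]))  # False: never exceeds
```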

      A23187 treatment of the CDPK3-KO mutant significantly increased the cAMP levels at 5 sec post-treatment, but BIPPO did not show any change. The authors concluded that BIPPO presumably does not inhibit cAMP-specific PDEs. However, the dual-specific PDEs are known to be inhibited by BIPPO, as shown recently (https://www.biorxiv.org/content/10.1101/2021.09.21.461320v1). Authors do confirm that BIPPO treatment can inhibit the hydrolytic activity of PfPDEbeta for cAMP as well as cGMP (Line #612). Besides, it was shown in Fig 5E that BIPPO can partially, though not significantly, block cAMP-specific PDE2. The statements and data conflict with each other under different subtitles and need to be reconciled. Elevation of the basal cAMP level in the CDPK3 mutant indicates the perturbation of cAMP signaling; however, the BIPPO data requires additional supportive experiments to conclude its relation with cAMP or dual-specific PDEs.

      Response:

      1. The manuscript to which the reviewer refers does not use BIPPO in any of their experiments. They show that continuous treatment with zaprinast blocks parasite growth in a plaque assay, but do not test whether zaprinast specifically blocks the activity of any of the PDEs.

Having repeated the PDE assay using the Moss et al. conditions (as outlined above), we are now able to recapitulate their findings, showing that PDEs 1, 7 and 9 can display dual hydrolytic activity while PDE2 is cAMP-specific. As explained further above, we believe that our original set of experiments is more stringent than the Moss et al. conditions. To confirm this, we also performed an additional experiment, incubating PDE1 for varying amounts of time using our original conditions (1 µM cAMP or 10 µM cGMP, at room temperature). This revealed that PDE1 is much more efficient at hydrolysing cGMP, and only begins to display cAMP-hydrolysing capacity after 4 hours of incubation.

We also measured the inhibitory capacity of BIPPO on the PDEs using the Moss et al. conditions. During the longer incubation time, it seems that BIPPO is unable to inhibit PDEs 7 and 9, while with the more stringent conditions it was able to inhibit both PDEs. We reasoned that since BIPPO is unable to inhibit these PDEs fully, the residual activity over the longer incubation period would compensate for the inhibition, eventually leading to 100% hydrolysis of the cNMPs. We also see that while the cGMP-hydrolysing capacity of PDE1 is completely inhibited, its cAMP-hydrolysing capacity is only partially inhibited. These findings, and the fact that PDE2 is not inhibited by BIPPO, are in line with our experiments in which we measured [cAMP] and showed that treatment with BIPPO did not lead to alterations in [cAMP].

      The method used to determine the substrate specificity of PDE 1,2,7 and 9 resulted in the hydrolytic activity of PDE2 towards cAMP, while the remaining 3 were determined as cGMP-specific. However, PDE1 and PDE9 have been reported as being dual-specific (Moss et al, 2021; Vo et al, 2020), which questions the reliability of the preferred method to characterize substrate specificity by the authors. It is also suggested to use another ELISA-based kit to double check the results.

      Response:

As outlined above, we have repeated the assay using the conditions described by Moss et al. (lower starting concentrations of cAMP, 2-hour incubation period at 37ºC) and find that we are able to recapitulate the results of both Moss et al. and Vo et al. However, under the Moss et al. conditions the PDEs hydrolysed 100% of the cyclic nucleotide, suggesting that these conditions are less stringent than our original ones, which used higher starting concentrations of cAMP and a 1-hour incubation at room temperature. With enzymatic assays it is always important to perform them at saturating conditions (as already suggested by the reviewer), and therefore we believe that our original conditions are more stringent than those of Moss et al.

Line #607-608: Authors found PDE9 less sensitive to BIPPO treatment and concluded that PDE2 is refractory to BIPPO inhibition; however, the reduction in activity seems similar to that seen in the BIPPO-treated PDE9 sample? This strong statement should be replaced with a milder explanation.

__Response: __We have tempered our wording as per the reviewer’s suggestion.

      Figures and legends:

      The introductory model in Fig S1 is difficult to understand and ambiguous despite having it discussed in the text. For example, CDPK1 is placed, but only mentioned at the beginning, and the role of other CDPKs is not clear. In addition, the arrows in IP3 and PKG are confusing. The location of guanylate and adenylate cyclase is wrong, and so on... The figure should include only the egress-related signaling components to curate it. The illustration of host cell in orange color must be at the right side of the figure in connection with the apical pole of the parasite (not on the top). Figure legend should also be rearranged accordingly and citations of the underlying components should be included (see below).

      __Response: __We have modified Supp Fig. 1 as per the suggestions of reviewer#2 and #3. We have now modified the localisations of the proteins and have also removed the lines showing the cross talk between pathways. We have also highlighted to the reader that this is only a model and may not represent the true localisations of the proteins, despite our best efforts.

      In Figure 5D, would you please provide the western blot analysis of samples before and after pulling down to demonstrate the success of your immunoprecipitation assay. Mention the protein concentration in your PDE enzyme assay. Please refer to the M&M comments above to re-do the enzyme assays.

      Response:

      We have now included western blots for the pull downs of PDEs 1, 2, 7 and 9 (Supp Fig. 7A). We chose not to measure protein concentrations of samples since all experiments were performed using the same starting parasite numbers, and we do not see large differences in activities between biological replicates of the PDEs.

      Figure legend 1C: Line #194: There is no red-dotted line shown in graph! Correct it!

      __Response: __We have modified this.

      Figure 4Gi-ii: Shouldn't it be labelled i: H89-treatment and ii: A23178, respectively instead of DMSO and H89? (based on the text Line #579).

      __Response: __Our labelling of Fig. 4Gi-ii is correct as panel i parasites were pre-treated with DMSO, while panel ii parasites were pre-treated with H89. Subsequent egress assays on both parasites were then performed using A23187.

      We have modified the figures to include mention of A23187 on the X axis, and modified the figure legend to clarify pre-treatment was performed with DMSO and H89 respectively.

      Bibliography:

Line #57 and 58: Citations must be selected properly! Carruthers and Sibley 1999 revealed the impact of Ca2+ on microneme secretion within the context of host cell attachment and invasion, not egress as indicated in the manuscript! A similar case also holds for the reference Wiersma et al 2004, since the roles of cyclic nucleotides were suggested for motility and invasion. Also notable is the fact that several citations describing the localization, regulation and physiological importance of cAMP and cGMP signaling mediators (PMID: 30449726, 31235476, 30992368, 32191852, 25555060, 29030485) are either completely omitted or not appropriately cited in the introduction and discussion sections.

      Response:

      We have modified the citations as per the reviewer’s suggestions. We now cite Endo et al., 1987 for the first use of A23187 as an egress trigger, and Lourido, Tang and David Sibley, 2012 for the role of cGMP signalling in egress. We also cite all the GC papers when we make first mention of the GC. We have also removed the Howard et al., 2015 citation (PMID: 25555060) when referring to the fact that BIPPO/zaprinast can rescue the egress delay of ∆CDPK3 parasites.

      Grammar/Language

      Line #31: After "cAMP levels" use comma

      Response:

      We have modified this.

      36: Sentence is not clear. Does conditional deletion of all four PDEs support their important roles? If so, the role in egress of the parasite?

      Response:

We have clarified our wording as per the reviewer’s suggestion. We state that PDEs 1 and 2 play an important role in growth, since deletion of either of these PDEs leads to reduced plaque growth. We have not investigated exactly which stage of the lytic cycle is affected.

      40: "is a group involving" instead of "are"

      Response:

      We found no mention of “a group involving” in our original manuscript at line 40 or anywhere else in the manuscript, so we are unsure what the reviewer is referring to.

      108: isn't it "discharge of Ca++ from organelle stores to cytosol"?

      __Response: __We thank the reviewer for spotting this error. We have now modified this sentence.

      120: "was" instead of "were"

__Response: __Since the situation we are referencing is hypothetical, ‘were’ is the correct form (the subjunctive mood).

      Reviewer #3 (Significance (Required)):

There is a significant amount of work that underlies this manuscript; however, from a conceptual viewpoint, the manuscript does not offer significant advancement over the current knowledge without functional validation of phosphoproteomics data. In terms of the mechanism, it is not clear whether and how lipid turnover and cAMP-PKA signaling control the egress phenotype (lack of a validated model at the end of this study). In a methodological sense, the work uses established assays, some of which require revisiting to reach robust conclusions and avoid misinterpretation.

      Compare to existing published knowledge

      A large body of work preceding this manuscript has indicated the crosstalk of cAMP, cGMP, calcium and lipid signaling cascades. This work provides a further refinement of the existing model. The article is quite interesting from a throughput screening point of view, but it clearly lacks the appropriate endorsement of the hits.

      Response:

      Please refer to our first response to reviewer #3 for our full rebuttal to these points. We respectfully disagree with the assessment that the work presented does not advance current knowledge.

      Audience

      Field specific (Apicomplexan Parasitology)

      Expertise

      Molecular Parasitology

      References

      Bailey, A. P. et al. (2015) ‘Antioxidant Role for Lipid Droplets in a Stem Cell Niche of Drosophila’, Cell. The Authors, 163(2), pp. 340–353. doi: 10.1016/j.cell.2015.09.020.

      Bullen, H. E. et al. (2016) ‘Phosphatidic Acid-Mediated Signaling Regulates Microneme Secretion in Toxoplasma Article Phosphatidic Acid-Mediated Signaling Regulates Microneme Secretion in Toxoplasma’, Cell Host & Microbe, pp. 349–360. doi: 10.1016/j.chom.2016.02.006.

      Dass, S. et al. (2021) ‘Toxoplasma LIPIN is essential in channeling host lipid fluxes through membrane biogenesis and lipid storage’, Nature Communications. Springer US, 12(1). doi: 10.1038/s41467-021-22956-w.

      Endo, T. et al. (1987) ‘Effects of Extracellular Potassium on Acid Release and Motility Initiation in Toxoplasma gondii’, The Journal of Protozoology, 34(3), pp. 291–295. doi: 10.1111/j.1550-7408.1987.tb03177.x.

Flueck, C. et al. (2019) ‘Phosphodiesterase beta is the master regulator of cAMP signalling during malaria parasite invasion’, PLoS Biology. doi: 10.1371/journal.pbio.3000154.

      Howard, B. L. et al. (2015) ‘Identification of potent phosphodiesterase inhibitors that demonstrate cyclic nucleotide-dependent functions in apicomplexan parasites’, ACS Chemical Biology, 10(4), pp. 1145–1154. doi: 10.1021/cb501004q.

      Jia, Y. et al. (2017) ‘ Crosstalk between PKA and PKG controls pH ‐dependent host cell egress of Toxoplasma gondii ’, The EMBO Journal, 36(21), pp. 3250–3267. doi: 10.15252/embj.201796794.

      Katris, N. J. et al. (2020) ‘Rapid kinetics of lipid second messengers controlled by a cGMP signalling network coordinates apical complex functions in Toxoplasma tachyzoites’, bioRxiv. doi: 10.1101/2020.06.19.160341.

      Lentini, J. M. et al. (2020) ‘DALRD3 encodes a protein mutated in epileptic encephalopathy that targets arginine tRNAs for 3-methylcytosine modification’, Nature Communications. Springer US, 11(1). doi: 10.1038/s41467-020-16321-6.

      Lourido, S., Tang, K. and David Sibley, L. (2012) ‘Distinct signalling pathways control Toxoplasma egress and host-cell invasion’, EMBO Journal. Nature Publishing Group, 31(24), pp. 4524–4534. doi: 10.1038/emboj.2012.299.

      Lunghi, M. et al. (2022) ‘Pantothenate biosynthesis is critical for chronic infection by the neurotropic parasite Toxoplasma gondii’, Nature Communications. Springer US, 13(1). doi: 10.1038/s41467-022-27996-4.

      McCoy, J. M. et al. (2012) ‘TgCDPK3 Regulates Calcium-Dependent Egress of Toxoplasma gondii from Host Cells’, PLoS Pathogens, 8(12). doi: 10.1371/journal.ppat.1003066.

      Moss, W. J. et al. (2022) ‘Functional Analysis of the Expanded Phosphodiesterase Gene Family in Toxoplasma gondii Tachyzoites’, mSphere. American Society for Microbiology, 7(1). doi: 10.1128/msphere.00793-21.

      Stewart, R. J. et al. (2017) ‘Analysis of Ca2+ mediated signaling regulating Toxoplasma infectivity reveals complex relationships between key molecules’, Cellular Microbiology, 19(4). doi: 10.1111/cmi.12685.

      Vo, K. C. et al. (2020) ‘The protozoan parasite Toxoplasma gondii encodes a gamut of phosphodiesterases during its lytic cycle in human cells’, Computational and Structural Biotechnology Journal. The Author(s), 18, pp. 3861–3876. doi: 10.1016/j.csbj.2020.11.024.

    1. a cryptree node for each chunk of your file or directory, and that can have links to the encrypted file fragments. The keys in this champ are basically random; subsequent keys in a file are not random, but they're still not deducible by the server, so the storage server (your home server) can't figure out, or can't link, the different chunks of the same file. We use that to hide the sizes of the file, among other ways. The read tree is pretty simple; it's been discussed earlier, but yeah, it's a tree of symmetric keys: if you have one key you can follow the arrows, follow the links. It also gives everything a well-defined path, so if I just give you access to this file, you can follow the parent links to get the names. You have a path, but you still can't see if there are any other files in that directory, any siblings or anything like that. The write tree is even simpler: there's just one key for each file or directory. These are all symmetric keys, by the way; in the previous slide the top ones are also symmetric keys, and the ones at the bottom are obviously key pairs. The metadata that we protect: file names, and file sizes if you care about that. The file sizes I've mentioned, so there's a chunking part that gets you down to modulo 5 MB.
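The read-tree idea in the transcript (holding one symmetric key lets you follow the links and unlock everything below it, while the store sees only unlinkable blobs) can be sketched in a few lines. This is a toy illustration only, with invented names; the SHA-256 XOR stream stands in for a real cipher and is not secure.

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 -- illustration only, NOT secure."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; encryption and decryption are the same operation."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Read tree: one root key; each chunk has its own random key, wrapped under the root
root_key = os.urandom(32)
chunks = [b"chunk-0-data", b"chunk-1-data", b"chunk-2-data"]
chunk_keys = [os.urandom(32) for _ in chunks]

# What the untrusted store holds: wrapped keys plus ciphertext chunks
wrapped = [xor_crypt(root_key, k) for k in chunk_keys]
stored = [xor_crypt(k, c) for k, c in zip(chunk_keys, chunks)]

# A reader holding only root_key follows the links: unwrap each key, decrypt each chunk
recovered = [xor_crypt(xor_crypt(root_key, w), s) for w, s in zip(wrapped, stored)]
```

Handing out a single chunk key instead of `root_key` would grant access to just that chunk, which is the granularity the tree structure is after.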
      • about : cryptree !- main contribution : IndyWeb, IndyNet, PeerKeep, MindDrive, Web3.storage

      !- design comment : analogous intents IndyWeb

      instead of relying on cryptography, rely on interpersonal trusted communications and full trusted audit trails

      uniform data access methods

      instead of getting files, get an associative complex neighbourhood with context-specific inline articulation of illative/interpretative flows and logics

      instead of random keys, human-readable context/intent-bearing attributed names, resolvable by the creator on request, responding with an access complex as an attributed page with ephemeral encryption, etc.

    1. I've gotten to where "if I can't make sense of it and be productive in it with just vim, grep, and find, your code is too complex".

      See Too DRY - The Grep Test.

      But even allowing for grep is too lax, in my view. It's too high a cost. If I've got some file open and am looking at some Identifier X, I should be able to both deterministically and easily figure out exactly which file X lives in.

      Violating this is what sucks about C's text-pasting anti-module system, and it's annoying that Go's "package" system ends up causing similar problems. I shouldn't have to have a whole-project view. I should be able to just follow my nose.
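A concrete version of that test, using a throwaway directory and a hypothetical identifier `parse_config`: if plain grep over the tree pins the definition to exactly one file, the identifier passes the "follow your nose" rule.

```shell
# Set up a toy project with one definition site (hypothetical layout)
mkdir -p /tmp/greptest/src
cat > /tmp/greptest/src/config.py <<'EOF'
def parse_config(path):
    return path
EOF

# Deterministic lookup: -r recurse, -l list matching files only
grep -rl "def parse_config" /tmp/greptest
```

If that command prints more than one file, or zero, the codebase fails the test for that identifier.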

    1. Author Response

      Reviewer #1 (Public Review):

Jones et al. investigated the relationship between scale free neural dynamics and scale free behavioral dynamics in mice. An extensive prior literature has documented scale free events in both cortical activity and animal behavior, but the possibility of a direct correspondence between the two has not been established. To test this link, the authors took advantage of previously published recordings of calcium events in thousands of neurons in mouse visual cortex and simultaneous behavioral data. They find that scale free-ness in spontaneous behavior co-occurs with scale free neuronal dynamics. The authors show that scale free neural activity emerges from subsets of the larger population - the larger population contains anticorrelated subsets that cancel out one another's contribution to population-level events. The authors propose an updated model of the critical brain hypothesis that accounts for the obscuring impact of large populations on nested subsets that generate scale free activity. The possibility that scale free activity, and specifically criticality, may serve as a unifying theory of brain organization has suffered from a lack of high-resolution connection between observations of neuronal statistics and brain function. By bridging theory, neural data, and behavioral dynamics, these data add a valuable contribution to fields interested in cortical dynamics and spontaneous behavior, and specifically to the intersection of statistical physics and neuroscience.

      Strengths:

      This paper is notably well written and thorough.

      The authors have taken a cutting-edge, high-density dataset and propose a data-driven revision to the status-quo theory of criticality. More specifically, due to the observed anticorrelated dynamics of large populations of neurons (which doesn't fit with traditional theories of criticality), the authors present a clever new model that reveals critical dynamics nested within the summary population behavior.

      The conclusions are supported by the data.

      Avalanching in subsets of neurons makes a lot of sense - this observation supports the idea that multiple, independent, ongoing processes coexist in intertwined subsets of larger networks. Even if this is wrong, it's supported well by the current data and offers a plausible framework on which scale free dynamics might emerge when considered at the levels of millions or billions of neurons.

      The authors present a new algorithm for power law fitting that circumvents issues in the KS test that is the basis of most work in the field.

      Weaknesses:

      This paper is technically sound and does not have major flaws, in my opinion. However, I would like to see a detailed and thoughtful reflection on the role that 3 Hz Ca imaging might play in the conclusions that the authors derive. While the dataset in question offers many neurons, this approach is, from other perspectives, impoverished - calcium intrinsically misses spikes, a 3 Hz sampling rate is two orders of magnitude slower than an action potential, and the recordings are relatively short for amassing substantial observations of low probability (large) avalanches. The authors carefully point out that other studies fail to account for some of the novel observations that are central to their conclusions. My speculative concern is that some of this disconnect may reflect optophysiological constraints. One argument against this is that a truly scale free system should be observable at any temporal or spatial scale and still give rise to the same sets of power laws. This quickly falls apart when applied to biological systems which are neither infinite in time nor space. As a result, the severe mismatch between the spatial resolution (single cell) and the temporal resolution (3 Hz) of the dataset, combined with filtering intrinsic to calcium imaging, raises the possibility that the conclusions are influenced by the methods. Ultimately, I'm pointing to an observer effect, and I do not think this disqualifies or undermines the novelty or potential value of this work. I would simply encourage the authors to consider this carefully in the discussion.

      R1a: We quite agree with the reviewer that reconciling different scales of measurement is an important and interesting question. One clue comes from Stringer et al’s original paper (2019 Science). They analyzed time-resolved spike data (from Neuropixel recordings) alongside the Ca imaging data we analyzed here. They showed that if the ephys spike data was analyzed with coarse time resolution (300 ms time bins, analogous to the Ca imaging data), then the anticorrelated activity became apparent (50/50 positive/negative loadings of PC1). When analyzed at faster time scales, anticorrelations were not apparent (mostly positive loadings of PC1). This interesting point was shown in their Supplementary Fig 12.

      This finding suggests that our findings about anticorrelated neural groups may be relevant only at coarse time scales. Moreover, this point suggests that avalanche statistics may differ when analyzed at very different time scales, because the cancelation of anticorrelated groups may not be an important factor at faster timescales.

In our revised manuscript, we explored this point further by analyzing spike data from Stringer et al 2019. We focused on the spikes recorded from one local population (one Neuropixel probe). We first took the spike times of ~300 neurons and convolved them with a fast rise/slow fall, like a typical Ca transient. Then we downsampled to a 3 Hz sample rate. Next, we deconvolved using the same methods as those used by Stringer et al (OASIS nonnegative deconvolution). And finally, we z-scored the resulting activity, as we did with the Ca imaging data. With this Ca-like signal in hand, we analyzed avalanches in four ways and compared the results. The four ways were: 1) the original time-resolved spikes (5 ms resolution), 2) the original spikes binned at 330 ms time resolution, 3) the full population of the slow Ca-like signal, and 4) a correlated subset of neurons from the slow Ca-like signal. Based on the results of this new analysis (now in Figs S3 and S4), we found several interesting points that help reconcile potential differences between fast ephys and slow Ca signals:

      1. In agreement with Sup Fig 12 from Stringer et al, anticorrelations are minimal in the fast, time-resolved spike data, but can be dominant in the slow, Ca-like signal.

      2. Avalanche size distributions of spikes at fast timescales can exhibit a nice power law, consistent with previous results with exponents near -2 (e.g. Ma et al Neuron 2019, Fontenele et al PRL 2019). But, the same data at slow time scales exhibited poor power-laws when the entire population was considered together.

      3. The slow time scale data could exhibit a better power law if subsets of neurons were considered, just like our main findings based on Ca imaging. This point was the same using coarse time-binned spike data and the slow Ca-like signals, which gives us some confidence that deconvolution does not miss too many spikes.

      In our opinion, a more thorough understanding of how scale-free dynamics differs across timescales will require a whole other paper, but we think these new results in our Figs S3 and S4 provide some reassurance that our results can be reconciled with previous work on scale free neural activity at faster timescales.
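The preprocessing pipeline described in this response (convolve spikes with a fast-rise/slow-decay kernel, downsample to ~3 Hz, deconvolve, z-score) can be sketched roughly as below. This is a hedged sketch, not the authors' code: the sample rate, kernel time constants, and spike rates are invented, and OASIS deconvolution is approximated here by a nonnegative first difference.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 200                                                   # assumed ephys rate (Hz)
spikes = rng.poisson(0.02, size=(50, fs * 60)).astype(float)  # 50 neurons, 60 s

# 1) Convolve each spike train with a fast-rise / slow-decay Ca-like kernel
t = np.arange(0, 2, 1 / fs)
kernel = (1 - np.exp(-t / 0.02)) * np.exp(-t / 0.5)
ca = np.apply_along_axis(lambda x: np.convolve(x, kernel)[: x.size], 1, spikes)

# 2) Downsample to ~3 Hz by averaging within bins
bin_len = fs // 3
n_bins = ca.shape[1] // bin_len
ca_slow = ca[:, : n_bins * bin_len].reshape(50, n_bins, bin_len).mean(axis=2)

# 3) "Deconvolve" (OASIS in the paper; a clipped first difference stands in here)
deconv = np.clip(np.diff(ca_slow, axis=1, prepend=0), 0, None)

# 4) z-score each neuron's trace
z = (deconv - deconv.mean(axis=1, keepdims=True)) / deconv.std(axis=1, keepdims=True)
```

Avalanche detection would then be run on `z` (or a correlated subset of its rows), exactly as on the real Ca imaging data.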

      Reviewer #2 (Public Review):

The overall goal of the paper is to link spontaneous neural activity and certain aspects of spontaneous behavior using a publicly available dataset in which 10,000 neurons in mouse visual cortex were imaged at 3 Hz with single-cell resolution. Through careful analysis of the degree to which bouts of behavior and bouts of neural activity are described (or not) by power-law distributions, the authors largely achieve these goals. More specifically, the key findings are that (a) the size of bouts of whisking, running, eye movements, and pupil dilation are often well-fit by a power-law distribution over several decades, (b) subsets of neurons that are highly correlated with one of these behavioral metrics will also exhibit power-law distributed event sizes, (c) neuron clusters that are uncorrelated with behavior tend to not be scale-free, (d) crackling relationships are generally not found (i.e. size with duration exponent (if there is scaling) was not predicted by size power-law and duration power-law), (e) bouts of behavior could be linked to bouts of neural activity. In the second portion of the paper, the authors develop a computational model with sets of correlated and anti-correlated neurons, which can be accomplished under a relatively small subset of connection architectures: out of the hundreds of thousands of networks simulated, only 31 generated scale-free subsets/non-scale-free population/anti-correlated e-cells/anti-correlated i-cells in agreement with the experimental recordings.

      The data analysis is careful and rigorous, especially in the attention to fitting power laws, determining how many decades of scaling are observed, and acknowledging when a power-law fit is not justified. In my view, there are two weaknesses of the paper, related to how the results connect to past work and to the set-up and conclusions drawn from the computational modeling, and I discuss those in detail below. While my comments are extensive, this is due to high interest. I do think that the authors make an important connection between scale-free distributions of neural activity and behavior, and that their use of computational modeling generates some interesting mechanistic hypotheses to explore in future work.

      My first general reservation is in the relationship to past work and the overall novelty. The authors state in the introduction, "according to the prevailing view, scale-free ongoing neural activity is interpreted as 'background' activity, not directly linked to behavior." It would be helpful to have some specific references here, as several recent papers (including the Stringer et al. 2019 paper from which these data were taken, but also papers from McCormick lab and (Anne) Churchland lab) showed a correlation between spontaneous activity and spontaneous facial behaviors. To my knowledge, the sorts of fidgety behavior analyzed in this paper have not been shown to be scale-free, and so (a) is a new result, but once we know this, it seems that (e) follows because we fully expect some neurons to correlate with some behavior.

      R2a: We agree with the reviewer that our original introductory, motivating arguments needed improvement. We have now rewritten the last 2 paragraphs of the introduction. We hope we have now laid out our argument more clearly, with more appropriate supporting citations. In brief, the logic is this:

      1. Previous theory, modeling, and experiments on the topic of scale-free neural activity suggest that this phenomenon is an autonomous, internally generated thing, independent of anything the body is doing.

2. Relatively new experiments (including those by Churchland’s lab and McCormick’s lab: Stringer 2019; Salkoff 2020; Clancy 2019; Musall 2019) suggest a different picture with a link between spontaneous behaviors and ongoing cortical activity, but these studies did not address any questions about scale-free-ness.

      3. Moreover, these new experiments show that behavioral variables only manage to explain about 10-30% of ongoing activity.

4. Is this behaviorally-explainable 10-30% scale-free, or do the scale-free aspects of cortical dynamics fall within the other 70-90%? Our goal is to find out.

      Digging a bit more on this issue, I would argue that results (b) and (c) also follow. By selecting subsets of neurons with very high cross-correlation, an effective latent variable has emerged. For example, the activity rasters of these subsets are similar to a population in which each neuron fires with the same time-varying rate (i.e., a heterogeneous Poisson process). Such models have been previously shown to be able to generate power-law distributed event sizes (see, eg., Touboul and Destexhe, 2017; also work by Priesemann). With this in mind, if you select from the entire population a set of neurons whose activity is effectively determined by a latent variable, do you not expect power laws in size distributions?

Our understanding is that not all Poisson processes with a time-varying rate will result in a power law. It is quite essential that the fluctuations in rate must themselves be power-law distributed. As a clear example of how this breaks down, consider a Poisson rate that varies according to a sine wave with fixed period and amplitude. In this case, the avalanche size distribution is definitely not scale-free; it would have a clear typical scale. Another point of view on this comes from some of the simplest models used to study criticality – e.g. all-to-all connected probabilistic binary neurons (as in Shew et al 2009 J Neurosci). These models generate spiking with a time-varying Poisson rate whether they are at criticality or away from it. But only when the synaptic strength is tuned to criticality will the time-varying rate generate power-law distributed avalanches. I think the Priesemann & Shriki paper made this point as well.
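The sine-rate counterexample above can be checked numerically. This is a toy simulation with all parameters invented: a Poisson process whose rate oscillates with fixed period and amplitude yields avalanche sizes pinned near one characteristic scale, not a power law spanning decades.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins = 200_000

# Poisson rate varying as a sine wave with a fixed period and amplitude
rate = (2.0 + 1.5 * np.sin(2 * np.pi * np.arange(n_bins) / 100)) / 10
counts = rng.poisson(rate)

# Avalanche = contiguous run of nonzero bins; size = total spikes in the run
sizes, cur = [], 0
for c in counts:
    if c > 0:
        cur += c
    elif cur > 0:
        sizes.append(cur)
        cur = 0
sizes = np.array(sizes)

# The tail dies off fast: the largest avalanche stays within a couple of
# orders of magnitude of the mean, i.e. the distribution has a typical scale
typical = sizes.mean()
largest = sizes.max()
```

A rate process whose fluctuations were themselves power-law distributed would be needed for `sizes` to span many decades.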

      My second reservation has to do with the generality of the conclusions drawn from the mechanistic model. One of the connectivity motifs identified appears to be i+ to e- and i- to e+, where potentially i+/i- are SOM and VIP (or really any specific inhibitory type) cells. The specific connections to subsets of excitatory cells appear to be important (based on the solid lines in Figure 8). This seems surprising: is there any experimental support for excitatory cells to preferentially receive inhibition from either SOM or VIP, but not both?

      R2b: There is indeed direct experimental support for the competitive relationship between SOM, VIP, and functionally distinct groups of excitatory neurons. This was shown in the paper by Josh Trachtenberg’s group: Garcia-Junco-Clemente et al 2017. An inhibitory pull-push circuit in frontal cortex. Nat Neurosci 20:389–392. However, we emphasize that we also showed (lower left motif in Fig 8G) that a simpler model with only one inhibitory group is sufficient to explain the anticorrelations and scale-free dynamics we observe. We opted to highlight the model with two inhibitory groups since it can also account for the Garcia-Junco-Clemente et al results.

      In the section where we describe the model, we state, “We considered two inhibitory groups, instead of just one, to account for previous reports of anticorrelations between VIP and SOM inhibitory neurons in addition to anticorrelations between groups of excitatory neurons (Garcia-Junco-Clemente et al., 2017).”

      More broadly, I wonder if the neat diagrams drawn here are misleading. The sample raster, showing what appears to be the full simulation, certainly captures the correlated/anti-correlated pattern of the 100 cells most correlated with a seed cell and 100 cells most anti-correlated with it, but it does not contain the 11,000 cells in between with zero to moderate levels of correlation.

      R2c: We agree that our original model has several limitations and that one of the most obvious features lacking in our model is asynchronous neurons (The limitations are now discussed more openly in the last paragraph of the model subsection). In the data from the Garcia-Junco-Clemente et al paper above there are many asynchronous neurons as well. To ameliorate this limitation, we have now created a modified model that now accounts for asynchronous neurons together with the competing anticorrelated neurons (now shown and described in Fig S9). We put this modified model in supplementary material and kept the simpler, original model in the main findings of our work, because the original model provides a simpler account of the features of the data we focused on in our work – i.e. anticorrelated scale-free fluctuations. The addition of the asynchronous population does not substantially change the behavior of the two anticorrelated groups in the original model.

      We probably expect that the full covariance matrix has similar structure from any seed (see Meshulam et al. 2019, PRL, for an analysis of scaling of coarse-grained activity covariance), and this suggests multiple cross-over inhibition constraints, which seem like they could be hard to satisfy.

      R2d: We agree that it remains an outstanding challenge to create a model that reproduces the full complexity of the covariance matrix. We feel that this challenge is beyond the scope of this paper, which is already arguably squeezing quite a lot into one manuscript (one reviewer already suggested removing figures!).

      We added a paragraph at the end of the subsection about the model to emphasize this limitation of the model as well as other limitations. This new paragraph says:

      While our model offers a simple explanation of anticorrelated scale-free dynamics, its simplicity comes with limitations. Perhaps the most obvious limitation of our model is that it does not include neurons with weak correlations to both e+ and e- (those neurons in the middle of the correlation spectrum shown in Fig 7B). In Fig S9, we show that our model can be modified in a simple way to include asynchronous neurons. Another limitation is that we assumed that all non-zero synaptic connections were equal in weight. We loosen this assumption allowing for variable weights in Fig S9, without changing the basic features of anticorrelated scale-free fluctuations. Future work might improve our model further by accounting for neurons with intermediate correlations.

      The motifs identified in Fig. 8 likely exist, but I am left with many questions of what we learned about connectivity rules that would account for the full distribution of correlations. Would starting with an Erdos-Renyi network with slight over-representation of these motifs be sufficient? How important is the homogeneous connection weights from each pool assumption - would allowing connection weights with some dispersion change the results?

      R2e: First, we emphasize that our specific goal with our model was to identify a possible mechanism for the anticorrelated scale-free fluctuations that played the key role in our analyses. We agree that this is not a complete account of all correlations, but this was not the goal of our work. Nonetheless, our new modified model in Fig S9 now accounts for additional neurons with weak correlations. However, we think that future theoretical/modeling work will be required to better account for the intermediate correlations that are also present in the experimental data.

      We confirmed that an Erdos-Renyi network of E and I neurons can produce scale-free dynamics, but cannot produce substantial anticorrelated dynamics (Fig 8G, top right motif). Additionally, the parameter space study we performed with our model in Fig 8 showed that if the interactions between the two excitatory groups exceed a certain tipping point density, then the model behavior switches to the behavior expected from an Erdos-Renyi network (Fig 8F). Finally, we have now confirmed that some non-uniformity of synaptic weights does not change the main results (Fig S9). In the model presented in Fig S9, the value of each non-zero connection weight was drawn from a uniform distribution [0,0.01] or [-0.01,0] for excitatory and inhibitory connections, respectively. All of these facts are described in the model subsection of the paper results.
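      To make the weight-drawing scheme concrete, here is a minimal NumPy sketch of how such a two-excitatory-group E-I weight matrix could be generated. The group sizes and connection densities are illustrative placeholders, not the paper's values; only the weight ranges ([0, 0.01] for excitatory connections, [-0.01, 0] for inhibitory) follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_ei_weights(n_e1=100, n_e2=100, n_i=50,
                    p_within=0.2, p_between=0.01, p_inh=0.2):
    """Random weight matrix W[pre, post] for two excitatory groups (e+, e-)
    and one inhibitory pool. Sizes/densities are illustrative assumptions.
    Nonzero weights: U[0, 0.01] (excitatory), U[-0.01, 0] (inhibitory)."""
    n = n_e1 + n_e2 + n_i
    W = np.zeros((n, n))
    blocks = [
        ((0, n_e1), (0, n_e1), p_within, +1),                  # e+ -> e+
        ((n_e1, n_e1 + n_e2), (n_e1, n_e1 + n_e2), p_within, +1),  # e- -> e-
        ((0, n_e1), (n_e1, n_e1 + n_e2), p_between, +1),       # e+ -> e- (sparse)
        ((n_e1, n_e1 + n_e2), (0, n_e1), p_between, +1),       # e- -> e+ (sparse)
        ((0, n_e1 + n_e2), (n_e1 + n_e2, n), p_inh, +1),       # E -> I
        ((n_e1 + n_e2, n), (0, n_e1 + n_e2), p_inh, -1),       # I -> E
    ]
    for (a0, a1), (b0, b1), p, sign in blocks:
        mask = rng.random((a1 - a0, b1 - b0)) < p   # which connections exist
        mag = rng.uniform(0, 0.01, size=mask.shape) # variable, non-uniform weights
        W[a0:a1, b0:b1] = sign * mag * mask
    return W

W = make_ei_weights()
```

      Lowering `p_between` below the tipping-point density keeps the two excitatory groups weakly coupled, which is the regime where the anticorrelated dynamics appear.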

      As a whole, this paper has the potential to make an impact on how large-scale neural and behavioral recordings are analyzed and interpreted, which is of high interest to a large contingent of the field.

      Reviewer #3 (Public Review):

      The primary goal of this work is to link scale-free dynamics, as measured by the distributions of event sizes and durations, of behavioral events and neuronal populations. The work uses recordings from Stringer et al. and focuses on identifying scale-free models by fitting the log-log distribution of event sizes. Specifically, the authors take averages of correlated neural sub-populations and compute the scale-free characterization. Importantly, neither the full population average nor random uncorrelated subsets exhibited scale-free dynamics, only correlated subsets. The authors then work to relate the characterization of the neuronal activity to specific behavioral variables by testing the scale-free characteristics as a function of correlation with behavior. To explain their experimental observation, the authors turn to classic e-i network constructions as models of activity that could produce the observed data. The authors hypothesize that a winner-take-all e-i network can reproduce the activity profiles and therefore might be a viable candidate for further study. While well written, I find that there are a significant number of potential issues that should be clarified. Primarily I have three main concerns: 1) The data processing seems to have the potential to distort features that may be important for this analysis (including missed detections and dynamic range), 2) The analysis jumps right to e-i network interactions, while there seems to be a much simpler, and more general, explanation that could describe their observations (which has to do with the way they are averaging neurons), and 3) the relationship between the neural and behavioral data could be further clarified by accounting for the lop-sidedness of the data statistics. I have included more details about my concerns below.

      Main points:

      1) Limits of calcium imaging: There is a large uncertainty that is not accounted for in dealing with smaller events. In particular, there are a number of studies now, both using paired electrophysiology and imaging [R1] and biophysical simulations [R2], that show that small neural events are often not visible in the calcium signal. Moreover, this problem may be exacerbated by the fact that the imaging is at 3Hz, much lower than the more typical 10-30Hz imaging speeds. The effects of this missing data should be accounted for, as they could be a potential source of large errors in estimating the neural activity distributions.

      R3a: We appreciate the concern here and agree that event size statistics could in principle be biased in some systematic way due to spikes missed in deconvolution of Ca signals. To directly test this possibility, we performed a new analysis of spike data recorded with high time resolution electrophysiology. We began with a forward-modeling process to create a low-time-resolution, Ca-like signal, using the same deconvolution algorithm (OASIS) that was used to generate the data we analyzed in our work here. In agreement with the reviewer’s concern, we found that spikes were sometimes missed, but the loss was not extreme and did not impact the neural event size statistics in a significant way compared to the ground truth we obtained directly from the original spike data (with no loss of spikes). This new work is now described in a new paragraph at the end of the subsection of results related to Fig 3 and in a new Fig S3. The new paragraph says…

      Two concerns with the data analyzed here are that it was sampled at a slow time scale (3 Hz frame rate) and that the deconvolution methods used to obtain the data here from the raw GCaMP6s Ca imaging signals are likely to miss some activity (Huang et al., 2021). Since our analysis of neural events hinges on summing up activity across neurons, could it be that the missed activity creates systematic biases in our observed event size statistics? To address this question, we analyzed some time-resolved spike data (Neuropixels recording from Stringer et al 2019). Starting from the spike data, we created a slow signal, similar to the one we analyzed here, by convolving with a Ca transient, downsampling, deconvolving, and z-scoring (Fig S3). We compared neural event size distributions to “ground truth” based on the original spike data (with no loss of spikes) and found that the neural event size distributions were very similar, with the same exponent and same power-law range (Fig S3). Thus, we conclude that our reported neural event size distributions are reliable.

      However, although loss of spikes did not impact the event size distributions much, the time-scale of measurement did matter. As discussed above and shown in Fig S4, changing from 5 ms time resolution to 330 ms time resolution does change the exponent and the range of the power law. However, in the test data set we worked with, the existence of a power law was robust across time scales.
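      The forward-modeling steps described above (convolve with a Ca transient, bin down to the 3 Hz frame rate, z-score) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the exponential kernel time constant and the synthetic spike statistics are assumptions, and the OASIS deconvolution step is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def spikes_to_slow_signal(spikes, dt=0.005, frame_dt=1/3, tau=1.5):
    """Turn a high-resolution spike train (array at dt resolution) into a
    slow, z-scored Ca-like trace: convolve with an exponential GCaMP6s-like
    transient (tau is an assumed decay constant), average into 3 Hz frames,
    then z-score. Deconvolution (OASIS) is intentionally left out here."""
    t = np.arange(0, 5 * tau, dt)
    kernel = np.exp(-t / tau)                       # Ca transient shape
    ca = np.convolve(spikes, kernel)[:len(spikes)]  # fluorescence-like signal
    bin_size = int(round(frame_dt / dt))            # samples per imaging frame
    n_frames = len(ca) // bin_size
    frames = ca[:n_frames * bin_size].reshape(n_frames, bin_size).mean(axis=1)
    return (frames - frames.mean()) / frames.std()

# Synthetic example: 5 minutes of Poisson-like spiking at 200 Hz sampling
spikes = (rng.random(60000) < 0.01).astype(float)
trace = spikes_to_slow_signal(spikes)
```

      The z-scored `trace` is then what the event-size analysis would operate on, in place of the deconvolved experimental signal.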

      2) Correlations and power-laws in subsets. I have a number of concerns with how neurons are selected and partitioned to achieve scale-free dynamics. 2a) First, it's unclear why the averaging is required in the first place. This operation projects the entire population down in an incredibly lossy way and removes much of the complexity of the population activity.

      R3b: Our population averaging approach is motivated by theoretical predictions and previous work. According to established theoretical accounts of scale-free population events (i.e. non-equilibrium critical phenomena in neural systems) such population-summed event sizes should have power law statistics if the system is near a critical point. This approach has been used in many previous studies of scale-free neural activity (e.g. all of those cited in the introduction in relation to scale-free neuronal avalanches). One of the main results of our study is that the existing theories and models of critical dynamics in neural systems fail to account for small subsets of neurons with scale-free activity amid a larger population that does not conform to these statistics. We could not make this conclusion if we did not test the predictions of those existing theories and models.
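      As a rough illustration of what "population-summed event sizes" means in this context, the sketch below defines events as contiguous excursions of a summed population signal above its median, with each event's size the summed excess over that threshold. This is a generic avalanche-style computation under assumed conventions; the paper's exact event definition may differ.

```python
import numpy as np

def event_sizes(pop_activity, threshold=None):
    """Sizes of population events: contiguous excursions of the
    population-summed signal above a threshold (its median by default),
    each size being the summed excess over that threshold."""
    x = np.asarray(pop_activity, dtype=float)
    thr = np.median(x) if threshold is None else threshold
    above = x > thr
    sizes, s = [], 0.0
    for is_above, v in zip(above, x):
        if is_above:
            s += v - thr          # accumulate excess during an excursion
        elif s > 0:
            sizes.append(s)       # excursion ended: record event size
            s = 0.0
    if s > 0:
        sizes.append(s)           # handle an event running to the end
    return np.array(sizes)

# Tiny example: median is 0, so there are two excursions, each of size 5.0
sizes = event_sizes([0, 0, 2, 3, 0, 0, 5, 0])
```

      A power-law test would then fit the distribution of `sizes` on log-log axes, which is only predicted to be scale-free for the summed activity of the appropriate correlated subset.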

      2b) Second, the authors state that it is highly curious that subsets of the population exhibit power laws while the entire population does not. While the discussion and hypothesizing about different e-i interactions is interesting I believe that there's a discussion to be had on a much more basic level of whether there are topology independent explanations, such as basic distributions of correlations between neurons that can explain the subnetwork averaging. Specifically, if the correlation to any given neuron falls off, e.g., with an exponential falloff (i.e., a Gaussian Process type covariance between neurons), it seems that similar effects should hold. This type of effect can be easily tested by generating null distributions using code bases such as [R3]. I believe that this is an important point, since local (broadly defined) correlations of neurons implying the observed subnetwork behavior means that many mechanisms that have local correlations but don't cluster in any meaningful way could also be responsible for the local averaging effect.

      R3c: We appreciate the reviewer’s effort, trying out some code to generate a statistical model. We agree that we could create such a statistical model that describes the observed distribution of pairwise correlations among neurons. For instance, it would be trivial to directly measure the covariance matrix, mean activities, and autocorrelations of the experimental data, which would, of course, provide a very good statistical description of the data. It would also be simple to generate more approximate statistical descriptions of the data, using multivariate gaussians, similar to the code suggested by the reviewer. However, we emphasize, this would not meet the goal of our modeling effort, which is mechanistic, not statistical. The aim of our model was to identify a possible biophysical mechanism from which emerge certain observed statistical features of the data. We feel that a statistical model is not a suitable strategy to meet this aim. Nonetheless, we agree with the reviewer that clusters with sharp boundaries (like the distinction between e+ and e- in our model) are not necessary to reproduce the cancellation of anticorrelated neurons. In other words, we agree that sharp boundaries of the e+ and e- groups of our model are not crucial ingredients to match our observations.

      2c) In general, the discussion of "two networks" seems like it relies on the correlation plot of Figure~7B. The decay away from the peak correlation is sharp, but there does not seem to be significant clustering in the anti-correlation population, instead a very slow decay away from zero. The authors do not show evidence of clustering in the neurons, nor any biophysical reason why e and i neurons are present in the imaging data.

      R3d: First a small reminder: As stated in the paper, the data here is only showing activity of excitatory neurons. Inhibitory neurons are certainly present in V1, but they are not recorded in this data set. Thus we interpret our e+ and e- groups as two subsets of anticorrelated excitatory neurons, like those we observed in the experimental data. We agree that our simplified model treats the anticorrelated subsets as if they are clustered, but this clustering is certainly not required for any of the data analyses of experimental data. We expect that our model could be improved to allow for a less sharp boundary between e+ and e- groups, but we leave that for future work, because it is not essential to most of the results in the paper. This limitation of the model is now stated clearly in the last paragraph of the model subsection.

      The alternative explanation (as mentioned in (b)) is that there is a more continuous set of correlations among the neurons with the same result. In fact, I tested this myself using [R3] to generate some data with the desired statistics, and the distribution of events seems to also describe this same observation. Obviously, the full test would need to use the same event identification code, and so I believe that it is quite important that the authors consider the much more generic explanation for the sub-network averaging effect.

      R3e: As discussed above, we respectfully disagree that a statistical model is an acceptable replacement for a mechanistic model, since we are seeking to understand possible biophysical mechanisms. A statistical model is agnostic about mechanisms. We have nothing against statistical models, but in this case, they would not serve our goals.

      To emphasize our point about the inadequacy of a statistical model for our goals, consider the following argument. Imagine we directly computed the mean activities, covariance matrix, and autocorrelations of all 10000 neurons from the real data. Then, we would have in hand an excellent statistical model of the data. We could then create a surrogate data set by drawing random numbers from a multivariate gaussian with same statistical description (e.g. using code like that offered by reviewer 3). This would, by construction, result in the same numbers of correlated and anticorrelated surrogate neurons. But what would this tell us about the biophysical mechanisms that might underlie these observations? Nothing, in our opinion.
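      The argument can be made concrete with a toy surrogate: a multivariate Gaussian with a hand-built covariance (the numbers below are invented for illustration, not fit to the data) reproduces correlated and anticorrelated "neurons" by construction, while saying nothing about the circuit that produced them.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy target correlation structure: two anticorrelated groups of 3 "neurons"
# each (0.6 within groups, -0.4 across groups; values are illustrative only).
C = np.zeros((6, 6))
C[:3, :3] = 0.6
C[3:, 3:] = 0.6
C[:3, 3:] = -0.4
C[3:, :3] = -0.4
np.fill_diagonal(C, 1.0)

# Surrogate data matching this covariance in expectation: the statistics are
# reproduced by construction, but no biophysical mechanism is implied.
X = rng.multivariate_normal(np.zeros(6), C, size=20000)
C_hat = np.corrcoef(X.T)   # empirical correlations recover the target
```

      The empirical `C_hat` recovers the target structure, which is precisely why such a surrogate is a good statistical description yet uninformative about mechanism.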

      2d) Another important aspect here is how single neurons behave. I didn't catch if single neurons were stated to exhibit a power law. If they do, then that would help in that there are different limiting behaviors to the averaging that pass through the observed stated numbers. If not, then there is an additional oddity that one must average neurons at all to obtain a power law.

      R3f: We understand that our approach may seem odd from the point of view of a central-limit-theorem-type argument. However, as mentioned above (reply R3b) and in our paper, there is a well-established history of theory and corresponding experimental tests for power-law distributed population events in neural systems near criticality. The prediction from theory is that the population-summed activity will have power-law distributed events or fluctuations. That is the prediction that motivates our approach. In these theories, it is certainly not necessary that individual neurons have power-law fluctuations on their own. In most previous theories, it is necessary to consider the collective activity of many neurons before the power-law statistics become apparent, because each individual neuron contributes only a small part to the emergent, collective fluctuations. This phenomenon does not require that each individual neuron have power-law fluctuations.

      At the risk of being pedantic, we feel obliged to point out that one cannot understand the peculiar scale-free statistics that occur at criticality by considering the behavior of individual elements of the system; hence the notion that critical phenomena are “emergent”. This important fact is not trivial and is, for example, why there was a Nobel prize awarded in physics for developing theoretical understanding of critical phenomena.

      3) There is something that seems off about the range of \beta values inferred with the ranges of \tau and $\alpha$. With \tau in [0.9,1.1], then the denominator 1-\tau is in [-0.1, 0.1], which the authors state means that \beta (found to be in [2,2.4]) is not near \beta_{crackling} = (\alpha-1)/(1-\tau). It seems as this is the opposite, as the possible values of the \beta_{crackling} is huge due to the denominator, and so \beta is in the range of possible \beta_{crackling} almost vacuously. Was this statement just poorly worded?

      R3g: The point here is that the theory of crackling noise predicts that the fit value of beta should be equal to (1-alpha)/(1-tau). In other words, a confirmation of the theory would have all the points on the unity line in the rightmost panels of Figs 9D and 9E, not scattered by more than an order of magnitude around the unity line. (We now state this explicitly in the text where Fig 9 is discussed.) Broad scatter around the unity line means the theory prediction did not hold. This is well established in previous studies of scale-free brain dynamics and crackling noise theory (see, for example, Ma et al., Neuron, 2019; Shew et al., Nature Physics, 2015; Friedman et al., PRL, 2012). A clearer single example of the failure of the theory to predict beta is shown in Fig 5A, B, and C.
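      For reference, the crackling-noise prediction reduces to a one-line check. The exponent values below are illustrative picks from the ranges quoted by the reviewer, not fitted values from the paper.

```python
def beta_crackling(tau, alpha):
    """Crackling-noise prediction relating the size exponent tau and the
    duration exponent alpha to the size-vs-duration scaling exponent:
    beta = (1 - alpha) / (1 - tau). Points on the unity line in Fig 9D/E
    would satisfy beta_fit == beta_crackling."""
    return (1 - alpha) / (1 - tau)

# Illustrative values within the reviewer's quoted ranges (not fitted values):
tau, alpha, beta_fit = 1.1, 1.2, 2.2
pred = beta_crackling(tau, alpha)   # (1 - 1.2) / (1 - 1.1), approximately 2.0
mismatch = abs(beta_fit - pred)     # deviation from the unity line
```

      Because the denominator (1 - tau) is tiny when tau is near 1, small errors in tau move the prediction enormously, which is the reviewer's point about the prediction being nearly vacuous in that regime.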

      4) Connection between brain and behavior:

      4a) It is not clear if there is more to what the authors are trying to say with the specifics of the scale free fits for behavior. From what I can see those results are used to motivate the neural studies, but aside from that the details of those ranges don't seem to come up again.

      R3h: The reviewer is correct, the primary point in Fig 2 is that scale-free behavioral statistics often exist. Beyond this point about existence, reporting of the specific exponents and ranges is just standard practice for this kind of analysis; a natural question to ask after claiming that we find scale-free behavior is “what are the exponents and ranges”. We would be remiss not to report those numbers.

      4b) Given that the primary connection between neuronal and behavioral activity seems to be Figure~4. The distribution of points in these plots seem to be very lopsided, in that some plots have large ranges of few-to-no data points. It would be very helpful to get a sense of the distribution of points which are a bit hard to see given the overlapping points and super-imposed lines.

      R3i: We agree that this whitespace in the figure panels is somewhat awkward, but we chose to keep the horizontal axis the same for all panels of Fig 4B, because this shows that not all behaviors, and not all animals, had the same range of behavioral correlations. We felt that hiding this was a bit misleading, so we kept the white space.

      4c) Neural activity correlated with some behavior variables can sometimes be the most active subset of neurons. This could potentially skew the maximum sizes of events and give behaviorally correlated subsets an unfair advantage in terms of the scale-free range.


    1. Flazrael here ... Hey, this is https://commons.wikimedia.org/wiki/User_talk:Damonthesis signing back in with a new account -- I really hope you all give second chances because this is my "first attempt" I never really had any "sock puppets" or other accounts, it's just that one and I was using it just to make "silly little edits" and ensure the history linked a set of mind control related pages to religion and myself.

       This is that, continuing. Forever in the Swiki "permalog" of the Wikipedia DVD's and "our English idea" the series on how and where:

       - https://en.wikipedia.org/wiki/Mirror_image_rule
       - https://www.google.com/search?rlz=1CABBMB_enUS988US988&sxsrf=AOaemvKOVzk5Mjc0HVVfnzIJg7fI5ytXqw%3A1642366913912&lei=wYfkYaCKN6WbwbkPr8SyqAc&q=mirror%20image%20rule&ved=2ahUKEwjg7vnDlbf1AhWlTTABHS-iDHUQsKwBKAB6BAg7EAE&biw=1517&bih=702&dpr=0.9

       We have some other issues to deal with, I think this specific page is missing information related to the length of time Azmogod was "left in Hell" and it was something on the order of millions of years if I remember correctly. It could have been hundreds, that's my best effort at recollection, I'll search the logs for "years" in a little bit.

       I am very concerned. Obviously this "is me" ... this specific page, and it should definately link directly to Damonthesis and ... "Adam"

       I have some work to do.

       The mirror list and tunneling or gophering through the series discusses a significant issue. There are a number of broken German mirrors and no other country appears interested in saving the information on Wikipedia. Nobody is "doing the right thing" and attempting to help us build a better moderation system and ensure that we have a compilation of the world's knowledge on par with World Book and Brittanica.

       - https://www.britannica.com/

       That's basically today's Merck manual on the illogical idiocy regarding the DSM-V and DSM-IV ... and I mean, it hasn't really been kept up. I personally believe I have a 90's era World Book that I read from regarding Clinton and Oxford; so those things are concrete, I was reading about Rhodes scholarship as he was the sitting president--and is a Rhodes scholar (afaik).

       21:07, 16 January 2022 (UTC) — Preceding unsigned comment added by Lazraegrailf (talk • contribs)

       Retrieved from "https://en.wikipedia.org/w/index.php?title=Talk:Asmodeus&oldid=1074963938"


    1. And that’s what the series argued, that the failure of the project didn’t mean that science was “bad,” it’s just that there are certain areas that it cannot be applied to, above all the chaotic and dynamic world of politics and history. But that’s not how the postwar generation took the failure. Two very powerful groups in the West who you would have thought were totally different in outlook—the conservatives and the liberal hippies—both reacted to this failure in a very similar way. They said, well, this means that you can’t plan anything—science is wrong and rationality is wrong. The liberals then sit there and say, “oh dear”, or retreat into mysticism, while the conservatives then grabbed the initiative and said, well, all you can really do is allow the free market to flourish and order will come out of that. And then, in the 1990s, a surprising number of the hippies joined the conservatives in this—especially in Silicon Valley. But I think this is wrong. In the end, I think rationality’s all you’ve got to work with. It’s just that there are areas where you can’t apply it.

      Adam Curtis, in an interview

    1. Well the Letter saw that idea coming, and what I love so much about it is the fact that it wants married women to consider that “[n]ow you are married, and have descended so low from so high – from the likeness of angels, from the beloved of Jesus Christ, from a lady in heaven, into carnal filth, into the life of an animal, into servitude to a man, and into the world’s misery”.[3] There it is – servitude to a man. Sure, you might enjoy having kids and all that, but ultimately “whatever advantage or happiness comes of it, it is too dearly bought”.[4]

      I won't pretend that I know the actual history they purport to tell, but it's interesting to consider the proto-feminism of hagiographies like this, just as texts. Woman becomes Christian; woman's father wants to sell her off; woman rejects this fate and either does or doesn't die as a result.

      I wonder what the historical balance throughout the centuries was of women being shoved into convents when they didn't want to go versus women running off to convents when society wanted them to do something else.

    1. It’s just too easy to accidentally give a good result to a controversial topic, and have the law makers pounce on you.

      Yes, if there's anything that defines the modern internet it's detailed legislation

    1. So we can’t point to specifics very easily and say, ah, that one maintained its place in culture and helps define queer people going forward. It’s all gotten lost. We just have indicators of this art form.

      This also resonates with my desire to document and explore weekday drag as a lens through which we can look at the impact of drag on culture, and the impact that has on the people creating it - not just a weekend fantasy, but something that's around every corner, etc.

    2. I think that’s what’s interesting—you’ll encounter stories of individual performers that makes them seem exceptional, but when you get beyond the Great Man theory of history or whatever, and you start looking for people—once you make an effort to look for, like you said, that conceptual space, people will start popping up in much greater numbers than the narrative teaches us to expect.

      Love the phrasing of this question, b/c it prompts you to de-center the idea of one person being the figure and instead see the various ways different people contribute to this energy/movement, etc. Not just one Zora, but Zoras, Andres, Angelas, etc. It's why community is so important. You're all contributing to something.

    1. There's also a good chance the DNP encourages people to spend non-trivial amounts of time journaling and writing notes they never look back on.

      While writing notes into a daily note page may be useful to give them a quick place to live, a note that isn't revisited is likely one that shouldn't have been made at all.

      Tools for thought need to do a better job of staging ideas for follow-up and additional work. Leaving notes orphaned on a daily notes page may help in the quick capture process, but one needs reminders about them, means of finding them, and potential means of improving them.

      If they're swept away continuously, then they only serve the sort of functionality of cleaning out of ideas that morning pages do. It's bad enough to have a massive scrap heap that looks and feels like work, but it's even worse to have it spread out among hundreds or thousands of separate files.

      Does digital search fix this issue entirely? Or does it just push the work off to a later date when it won't be done either?

    2. Half the time I begin typing something, I'm not even sure what I'm writing yet. Writing it out is an essential part of thinking it out. Once I've captured it, re-read it, and probably rewritten it, I can then worry about what to label it, what it connects to, and where it should 'live' in my system.

      One of my favorite things about Hypothes.is is that with a quick click, I've got a space to write and type and my thinking is off to the races.

      Sometimes it's tacitly (linked) attached to another idea I've just read (as it is in this case), while other times it's not. Either way, it's coming out and has at least a temporary place to live.

      Later on, my note is transported (via API) from Hypothes.is to my system where I can figure out how to modify it or attach it to something else.

      This one-click facility is dramatically important in reducing the friction of the work for me. I hate systems which require me to leave what I'm doing and open up another app, tool, or interface to begin.
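      For reference, the public Hypothes.is API exposes annotations through its /api/search endpoint, which is the kind of export step described above. A minimal sketch follows; the username is a placeholder, real use of private annotations would add an Authorization header, and the exact fetch-and-loop code in the comment is illustrative rather than taken from any particular system:

```python
from urllib.parse import urlencode

API = "https://api.hypothes.is/api/search"

def build_search_url(user, limit=20):
    """Build the URL used to pull a user's recent annotations."""
    params = {"user": f"acct:{user}@hypothes.is", "limit": limit, "sort": "updated"}
    return f"{API}?{urlencode(params)}"

# Fetching could then look like:
#   import urllib.request, json
#   rows = json.load(urllib.request.urlopen(build_search_url("exampleuser")))["rows"]
#   for a in rows:
#       print(a["uri"], a.get("text", ""))
print(build_search_url("exampleuser"))
```

      Each returned row carries the annotated page's URI, the highlighted text, and the annotation body, which is enough to file the note into a personal system for later revision.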

    1. So, in this solution Rex:: namespace is not defining anything on its own, rather it’s just accessing the elements defined in the Msf:: namespace.

      Actually, technically speaking, it's sending the proc so that the Msf:: namespace does the work on Rex::'s behalf, thus preventing the namespace issues. You haven't really explained it this clearly by this point, so you'll likely want to clarify it here.
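      The proc-forwarding pattern being described can be sketched as a Python analogy of the Ruby idiom. This is a hypothetical toy, not the actual Rex/Msf source; the class names mirror the namespaces in the discussion, and the REGISTRY contents are made up for illustration:

```python
# The Rex-style namespace defines nothing of its own; it hands a callable
# (the "proc") to the Msf-style namespace, which executes it. Because the
# lookup runs inside Msf, Rex never has to resolve Msf's names itself,
# which is how the namespace issues are avoided.

class Msf:
    """Namespace that owns the real definitions."""
    REGISTRY = {"exploit/demo": "DemoExploit"}

    @classmethod
    def run(cls, block):
        # The work happens here, inside Msf, on the caller's behalf.
        return block(cls.REGISTRY)


class Rex:
    """Thin namespace: forwards a callable instead of defining anything."""
    @staticmethod
    def lookup(name):
        return Msf.run(lambda registry: registry.get(name))


print(Rex.lookup("exploit/demo"))  # -> DemoExploit
```

      The design choice is that Rex stays decoupled: it never imports or hard-codes Msf's internals, it only supplies the behavior to run against them.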

    1. Reviewer #1 (Public Review):

      Liau and colleagues have previously reported an approach that uses PAM-saturating CRISPR screens to identify mechanisms of resistance to active site enzyme inhibitors, allosteric inhibitors, and molecular glue degraders. Here, Ngan et al report a PAM-saturating CRISPR screen for resistance to the hypomethylating agent, decitabine, and focus on putatively allosteric regulatory sites. Integrating multiple computational approaches, they validate known - and discover new - mechanisms that increase DNMT1 activity. The work described is of the typical high quality expected from this outstanding group of scientists, but I find several claims to be slightly overreaching.

      Major points:

      The paper is presented as a new method - activity-based CRISPR scanning - to identify allosteric regulatory sites using DNMT1 as a proof-of-concept. Methodologically, the key differentiating feature from past work is that the inhibitor being used is an activity-based substrate analog inhibitor that forms a covalent adduct with the enzyme. I find the argument that this represents a new method for identifying allosteric sites to be relatively unconvincing and I would have preferred more follow-up of the compelling screening hits instead. The basic biology of DNMT1 and the translational relevance of decitabine resistance are undoubtedly of interest to researchers in diverse fields.

      In contrast, I am unconvinced that there is any qualitative or quantitative difference in the insights that can be derived from "activity-based CRISPR scanning" (using an activity-based inhibitor) compared to their standard "CRISPR suppressor scanning" (not using an activity-based inhibitor). Key to their argument, which is expanded upon at length in the manuscript, is that decitabine - being an activity-based inhibitor that only differs from the substrate by 2 atoms - will enrich for mutations in allosteric sites versus orthosteric sites because it will be more difficult to find mutations that selectively impact analog binding than it is for other active-site inhibitors. However, other work from this group clearly shows that non-activity-based allosteric and orthosteric inhibitors can just as easily identify resistance mutations in allosteric sites distal from the active site of an enzyme (https://www.biorxiv.org/content/10.1101/2022.04.04.486977v1). If the authors had compared their decitabine screen to a reversible DNMT1 inhibitor, such as GSK3685032, and found that decitabine was uniquely able to identify resistance mutations in allosteric sites, then I would be convinced. But with the data currently available, I see no reason to conclude that "activity-based CRISPR scanning" biases for different functional outcomes compared to the "CRISPR suppressor scanning" approach.

      How can LOF mutations from cluster 2 be leading to drug resistance? It is speculated in the paper that a change in gene dosage decreases the DNA crosslinks that cause toxicity. However, the immediate question then would be why do the resistance mutations cluster around the catalytic site? If it's just gene dosage from LOF editing outcomes, would you not expect the effect to occur more or less equally across the entire CDS?

      In general, I found the screens, and integrative analyses, highly compelling. But the follow-up was rather narrow. For example, how much do these mutations shift the IC50 curves for DAC? What kinetic parameters have changed to increase catalytic activity? Do the mutants with increased catalytic activity alter the abundance of methylated DNA (naively or in response to the drug)? It is speculated that several UHRF1 sgRNAs disrupt PPIs and not DNA binding, but this is never tested.

    1. Reviewer #3 (Public Review):

      The main goals of this study by Guan, Aflalo and colleagues were to examine the encoding scheme of populations of neurons in the posterior parietal cortex (PPC) of a person with paralysis while she attempted individual finger movements as part of a brain-computer interface task (BCI). They used these data to answer several questions:

      1) Could they decode attempted finger movements from these data (building on this group's prior work decoding a variety of movements, including arm movements, from PPC)?

      2) Is there evidence that the encoding scheme for these movements is similar to that of able-bodied individuals, which would argue that even after paralysis, this area is not reorganized and that the motor representations remain more or less stable after the injury?

      3) Related to #2: is there beneficial remapping, such that neural correlates of attempted movements change to improve BCI performance over time?

      4) Can looking at the interrelationship between different fingers' population firing rate patterns (one aspect of the encoding scheme) indicate whether the representation structure is similar to the statistics of natural finger use, a somatotopic organization (how close the fingers are to each other), or be uniformly different from one another (which would be advantageous for the BCI and connects to question #3)? Furthermore, does the best fit amongst these choices to the data change over the course of a movement, indicating a time-varying neural encoding structure or multiple overlapping processes?

      The study is well-conducted and uses sound analysis methods, and is able to contribute some new knowledge related to all of the above questions. These are rare and precious data, given the relatively few people implanted with multielectrode arrays like the Utah arrays used in this study. Even more so when considering that to this reviewer's knowledge, no other group is recording from PPC, and this manuscript thus is the first look at the attempted finger movement encoding scheme in this part of the human cortex.

      An important caveat is that the representational similarity analysis (RSA) method and resulting representational dissimilarity matrix (RDM) that is the workhorse analysis/metric throughout the study is capturing a fairly specific question: which pairs of finger movements' neural correlates are more/less similar, and how does that pattern across the pairings compare to other datasets. There are other questions that one could ask with these data (and perhaps this group will in subsequent studies), which would provide additional information about the encoding; for example, how well does the population activity correlate with the kinematics, kinetics, and predicted sensory feedback that would accompany such movements in an able-bodied person?
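      The RSA/RDM logic can be made concrete with a small sketch. This is not the study's code: the firing rates below are made-up numbers, and correlation distance is only one of several dissimilarity measures an RSA analysis might use.

```python
import numpy as np

# Rows = conditions (attempted finger movements), columns = neurons;
# entries are hypothetical trial-averaged firing rates.
rates = np.array([
    [12.0, 3.0, 7.0],   # thumb
    [11.0, 4.0, 6.5],   # index (pattern similar to thumb)
    [2.0, 15.0, 9.0],   # pinky (pattern dissimilar to thumb)
])

n = rates.shape[0]
rdm = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # Correlation distance: 1 - Pearson r between the two patterns.
        r = np.corrcoef(rates[i], rates[j])[0, 1]
        rdm[i, j] = 1.0 - r

# The RDM is symmetric with zeros on the diagonal. Comparing two RDMs
# (e.g., spiking vs. fMRI) asks only whether the *pattern of pairwise
# dissimilarities* matches, not whether the underlying codes match.
```

      This is why an RDM match across modalities is informative but narrow: very different encoding schemes can still produce the same pairwise dissimilarity structure.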

      What this study shows is that the RDMs from these PPC Utah array data are most similar to motor cortical RDMs based on a prior fMRI study. It's innovative to compare effectors' representational similarity across different recording modalities, but this apparent similarity should be interpreted in light of several limitations: 1) the vastly different spatial scales (voxels spanning cm that average the activity of millions of neurons each, versus a few mm of cortex with sparse sampling of individual neurons), 2) the vastly different temporal scales (firing rates versus blood flow), and 3) that dramatically different encoding schemes and dynamics could still result in the same RDMs. As currently written, the study does not adequately caveat the relatively superficial and narrow similarity being drawn between these data and the prior Ejaz et al (2015) sensorimotor cortex fMRI results, except for (some) exposition in the Discussion.

      Relatedly, the study would benefit from additional explanation for why the comparison is being made to able-bodied fMRI data, rather than similar intracortical neural recordings made in homologous areas of non-human primates (NHPs), which have been traditionally used as an animal model for vision-guided forelimb reaching. This group has an illustrious history of such macaque studies, which makes this omission more surprising.

      A second area in which the manuscript in its current form could better set the context for its reader is in how it introduces their motivating question of "do paralyzed BCI users need to learn a fundamentally new skillset, or can they leverage their pre-injury motor repertoire". Until the Discussion, there is almost no mention of the many previous human BCI studies where high performance movement decoding was possible based on asking participants to attempt to make arm or hand movements (to just list a small number of the many such studies: Hochberg et al 2006 and 2012, Collinger et al 2013, Gilja et al 2015, Bouton et al 2016, Ajiboye*, Willett* et al 2017; Brandman et al 2018; Willett et al 2020; Flesher et al 2021). This is important; while most of these past studies examined motor (and somatosensory) cortex and not PPC (though this group's prior Aflalo*, Kellis* et al 2015 study did!), they all did show that motor representations remain at least distinct enough between movements to allow for decoding; were qualitatively similar to the able-bodied animal studies upon which that body of work was built; and could be readily engaged by the user just by attempting/imagining a movement. Thus, there was a very strong expectation going into the present study that a resemblance to able-bodied motor representational similarity would be found. While explicitly making this connection is a meaningful contribution to the literature by the present study (and so is comparing it to different areas' representational similarity), care should be taken not to overstate the novelty of retained motor encoding schemes in people with paralysis, given the extensive prior work.

      The final analyses in the manuscript are particularly interesting: they examine the representational structure as a function of a short sliding analysis window, which indicates that there is a more motoric representational structure at the start of the movement, followed by a more somatotopic structure. These analyses are a welcome expansion of the study scope to include the population dynamics, and provide clues as to the role of this activity / the computations this area is involved in throughout movement (e.g., the authors speculate the initial activity is an efference copy from motor cortex, and the later activity is a sensory-consequence model).

      An interesting result in this study is that the participant did not improve performance at the task (and that the neural representations of each finger did not change to become more separable by the decoder). This was despite ample room for improvement (performance was below 90% accuracy across 5 possible choices), at least over the 4,016 trials examined. The authors provide several possible explanations for this in the Discussion. Another possibility is that the nature of the task impeded learning because feedback was delayed until the end of the 1.5 second attempted movement period (at which time the participant was presented with text reporting which finger's movement was decoded). This is a very different discrete-and-delayed paradigm from the continuous control used in prior NHP BCI studies that showed motor learning (e.g., Sadtler et al 2014 and follow-ups; Vyas et al 2018 and follow-up; Ganguly & Carmena 2009 and follow-ups). It is possible that having continuous visual feedback about the BCI effector is more similar to the natural motor system (where there is consistent visual, as well as proprioceptive and somatosensory feedback about movements), and thus better engages motor adaptation/learning mechanisms.

      Overall the study contributes to the state of knowledge about human PPC cortex and its neurophysiology even years after injury when a person attempts movements. The methods are sound, but are unlikely (in this reviewer's view) to be widely adopted by the community. Two specific contributions of this study are 1) that it provides an additional data point that motor representations are stable after injury, lowering the risk of BCI strategies based on PPC recording; and 2) that it starts the conversation about how to make deeper comparisons between able-bodied neural dynamics and those of people unable to make overt movements.

    1. The Nation

      "From March 2020 to December 2021, 6,506 hate incidents against Asian American and Pacific Islander women were reported to Stop AAPI Hate; the actual number is likely far greater. " - Reading the article from The Nation, it was very sad to see that Asian Americans aren't being respected or treated even as human beings. That people commit hate crimes against them as vicious as rape is utterly disgusting. And in the current times we're living in with Covid, having a president publicly announce that it is a "Chinese disease" allowed more people to blame Asian Americans for bringing Covid here, which is completely 110% false. As quickly as word of mouth can spread something extremely negative, people decide it's OK to physically harm them because they know they're able to overpower them. It's absolutely disgusting and inhumane.

    1. The Animal Addendum, for example, states that any single violation of the various rules as stated in the Animal Addendum or a single complaint by a neighbor can, at the sole discretion of the property manager, result in a written notice which will require a tenant to “immediately and permanently” remove the animal from the premises (61). Particularly disturbing is that the Animal Addendum allows a landlord to physically remove a pet when the tenant is not home following any rule violation or if a tenant allows their pet to “urinate or defecate where it is not allowed” (61).

      This seems highly unfair. It's an easy way to punish someone without any real proof (just a neighbor's complaint or a dog peeing unexpectedly).

    1. Police violence against black women is very real. The level of violence that black women face is such that it's not surprising that some of them do not survive their encounters with police. Black girls as young as seven, great grandmothers as old as 95 have been killed by the police. They've been killed in their living rooms, in their bedrooms. They've been killed in their cars. They've been killed on the street. They've been killed in front of their parents and they've been killed in front of their children. They have been shot to death. They have been stomped to death. They have been suffocated to death. They have been manhandled to death. They have been tasered to death. They've been killed when they've called for help. They've been killed when they were alone, and they've been killed when they were with others. They've been killed shopping while black, driving while black, having a mental disability while black, having a domestic disturbance while black. They've even been killed being homeless while black. They've been killed talking on the cell phone, laughing with friends, sitting in a car reported as stolen and making a U-turn in front of the White House with an infant strapped in the backseat of the car. Why don't we know these stories? Why is it that their lost lives don't generate the same amount of media attention and communal outcry as the lost lives of their fallen brothers? It's time for a change.

      I do agree with her about how we do not know these stories, because importance is not given to Black people, and much less importance is given when it is a woman.

      ** side note: I just figured out how to use the annotations properly hence why the rest of my annotations are on the "Reading" page of our class's site.

    1. So how does the black hole information problem get resolved? I know it is something involving the theory on the boundary. Polchinski: There are two questions. The first question that we always ask ourselves at conferences is what happens to the information? It’s lost; it comes out, it remains in a remnant. Basically those are the three choices. In any AdS/CFT duality, a black hole is just the dual description of a gas of hot gluons, and this is very satisfying because the black hole thermodynamics is telling us that a black hole behaves like a thermodynamic object. In this duality, it is literally dually described as a gas of hot particles. And the Hawking evaporation is just ordinary evaporation in this case, and so the information just comes out with the evaporating gluons on it. Now, there is still a puzzle, which is the following
      • INFO PROBLEM solution?
      • what happens to the information? It’s lost; it comes out, it remains in a remnant. Basically those are the three choices
      • DUALITY: BH==HOT GLUONS GAS
      • so the information just comes out with the evaporating gluons on it.
    1. Linde: At first, not read Guth's paper. I first heard about Guth's paper. I didn't have it before me, but since I had discussed this point with Rubakov and his collaborators, who had suggested a similar idea, then everything was quite clear for me. Lev Okun' from ITEP called me and asked have I heard anything about Guth's paper for explaining the flatness of the universe. I told him that "No, I haven't heard about it, but I know what it's about." [Linde laughs.] And I told him how it works without seeing it. So, it doesn't matter. It's just how the things work.
      • LINDE: didn't read GUTH
      • LEV OKUN called
  3. Jul 2022
    1. But sometimes these books leave the impression that heartfelt desire and hard work alone will lead to improved performance—“Just keep working at it, and you’ll get there”—and this is wrong. The right sort of practice carried out over a sufficient period of time leads to improvement. Nothing else.

      practice doesn't make perfect, it's the right kind of practice

    1. This doesn't mean that popular sources are less valuable or necessarily contain low-quality or false information; it just depends!

      This is why it’s good to cross-check your sources, because you never know.

    1. the biggest problems with computing is in a sense we have too many smart people it attracts 00:36:44 cleverness and you can do clever hacks but the clever hacks don't scale well and it's very hard to build a halflife into software it just stays around forever and so what what's actually 00:37:00 happening is kind of something like building a large garbage dump that makes it the odor of which makes it very hard to think about other things so if we 00:37:10 take news or normal we can think we can solve problems avoid obstacles beg every once in while we have an outlaw thought

      !- biggest problem : with computing - it attracts cleverness - clever hacks don't scale well - hard to build half life into software - stays around forever?

    1. The 12 best IDEs for programming

      By Franklin Okeke in Developer on July 7, 2022, 7:48 AM PDT

      IDEs are essential tools for software development. Here is a list of the top IDEs for programming in 2022.

      Software developers have battled with text editors and command-line tools that offered little or nothing in the automation, debugging and speedy execution of code. However, the software development landscape is rapidly changing, and this includes programming tools. To accommodate the evolution in software development, software engineers came up with more sophisticated tools known as integrated development environments. To keep up with the fast pace of emerging technologies, there has been an increasing demand for the best IDEs among software development companies. We will explore the 12 best IDEs that offer valuable solutions to programmers in 2022.

      What is an IDE?

      IDEs are software development tools developers use to simplify their programming and design experience. They come with an integrated user interface that combines everything a developer needs to write code conveniently.
The best IDEs are built with features that allow developers to write and edit code with a code editor, debug code with a debugger, compile code with a code compiler and automate some software development tasks. SEE: Hiring kit: Back-end Developer (TechRepublic Premium) The best IDEs come with class browsers to examine and reference properties, object browsers to investigate objects and class hierarchy diagrams to see object-oriented programming code. IDEs are designed to increase software developer productivity by incorporating close-knit components that create a perfect playground where they can write, test and do whatever they want with their code. Why are IDEs important in software programming? IDEs provide a lot of support to software developers, which was not available in the old text editors. The best IDEs around do not need to be manually configured and integrated as part of the setup process. Instead, they enable developers to begin developing new apps on the go. Must-read developer coverage The 12 best IDEs for programming Best DevOps Tools & Solutions 2022 CI/CD platforms: How to choose the right system for your business Hiring kit: Python developer Additionally, since every feature a programmer needs is available in the same development environment, developers don’t have to spend hours learning how to use each separately. This can be extremely helpful when bringing on new developers, who may rely on an IDE to familiarize themselves with a team’s standard tools and procedures. In reality, most IDE capabilities, such as intelligent code completion and automatic code creation, are designed to save time by eliminating the need to write out entire character sequences. Other standard IDE features are designed to facilitate workflow organization and problem-solving for developers. IDEs parse code as it is written, allowing for real-time detection of human-related errors. 
Because the needed utilities are represented in a single graphical user interface, developers can carry out operations without switching between programs. Most IDEs also have a syntax highlighting feature, which uses visual cues to distinguish the grammar of the language in the text editor. Class and object browsers, as well as class hierarchy diagrams for certain languages, are additional features some IDEs offer. All of these features help the modern programmer turn out software development projects quickly.

For a programming project requiring software-specific features, it is possible to manually integrate those features or utilities into Vim or Emacs; the benefit is that developers get a custom-made IDE. For enterprise use, however, that process takes time and can hurt standardization, so most enterprises encourage their development teams to adopt pre-configured IDEs that suit their job demands.

Other benefits of IDEs

- An IDE serves as a centralized environment for most software developers' needs, such as version control systems, Platform-as-a-Service and debugging tools.
- An IDE improves workflow with fast code completion capabilities.
- An IDE automates error-checking on the fly to ensure top-quality code.
- An IDE has refactoring capabilities that allow programmers to make comprehensive renaming changes.
- An IDE ensures a seamless development cycle.
- An IDE facilitates developer efficiency and satisfaction.

Standard features of an IDE

Text editor

Almost all IDEs offer a text editor made specifically for writing and modifying source code. While some tools allow users to drag and drop front-end elements visually, most offer a straightforward interface that emphasizes language-specific syntax.

Debugger

Debugging tools help developers identify and correct source-code mistakes.
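As a sketch of the mechanism such debuggers build on: CPython exposes a tracing hook, `sys.settrace`, that the interpreter invokes for every executed line — the same hook `pdb` and Python IDE debuggers use to implement breakpoints and stepping. The `buggy_average` function below is a made-up example, not from any IDE.

```python
import sys

# Record which lines execute inside one function, the way a debugger
# "steps" through code. executed_lines plays the role of the step log.
executed_lines = []

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "buggy_average":
        executed_lines.append(frame.f_lineno)
    return tracer   # keep tracing inside the called frame

def buggy_average(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)   # bug: crashes when values is empty

sys.settrace(tracer)
try:
    buggy_average([])
except ZeroDivisionError:
    pass
finally:
    sys.settrace(None)   # always remove the hook

print("lines stepped through:", executed_lines)
```

A real debugger does the same thing, but pauses at the recorded lines and lets you inspect `frame.f_locals` instead of merely logging line numbers.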
Before an application is published, programmers and software engineers can test the various parts of the code and find issues.

Compiler

The compiler feature assists programmers in translating source code into machine-readable form, such as binary code. The compiler also helps ensure the accuracy of the result by analyzing and optimizing it.

Code completion

This feature intelligently and automatically completes common code components, saving developers time and reducing bugs caused by typos.

Programming language support

Some IDEs are pre-configured to support one programming language, while others support several. When choosing an IDE, users should figure out which programming languages they will be coding in and pick accordingly.

Integrations and plugins

Integration capability is one feature that makes an IDE stand out. IDEs support integrating other development tools through plugins to enhance productivity.

Classifications of IDEs

IDEs come in different types according to the programming languages they support: some support a single language, others more than one.

Multi-language IDE

Multi-language IDEs support multiple programming languages, which makes them well suited to beginner programmers who are still exploring. An example is the Visual Studio IDE, popular for its broad supporting features: users can code in a new programming language simply by adding the language plugin.

Mobile development IDE

As the market for mobile app development grows, numerous programming tools are becoming available to help software developers build efficient mobile apps. Mobile development IDEs for the Android and iOS platforms include Android Studio and Xcode.
Web/cloud-based IDE

If an enterprise supports a cloud-based development environment, it may want to adopt a cloud-based IDE. One advantage of this type is that it can run heavy projects without occupying computational resources on the local system. Cloud IDEs are also platform-independent, which makes it easy to connect to many cloud development providers.

Specific-language IDE

This type is the opposite of the multi-language IDE: it is built to support developers who work in only one programming language. Examples include JCreator for Java, IDLE for Python and CodeLite for C++.

Best IDEs for programmers in 2022

Visual Studio

The Visual Studio IDE is a Microsoft integrated development environment built to help software developers with web development. It uses artificial intelligence features to learn from the edits programmers make to their code, making it easy to complete lines of code automatically. One feature many developers like is that Visual Studio supports collaborative, live development between teams, which is especially valuable during debugging; the IDE also lets users share servers, comments and terminals. Visual Studio supports mobile app, web and game development, along with Python, Node.js, ASP.NET and Azure, and developers can easily create a development environment in the cloud. With its multi-language support, Visual Studio integrates flawlessly with the Django and Flask frameworks and can be used as a Python IDE on macOS, Windows and Linux.

IntelliJ IDEA

IntelliJ IDEA has been around for years and is one of the best IDEs for Java programming.
Its UI is designed in a sleek way that makes coding appealing to many Java developers. The IDE indexes code to provide relevant suggestions that help complete lines of code, and it takes suggestive coding further by automating repetitive tasks. Apart from supporting web, enterprise and mobile Java programming, it is also a good option for JavaScript, SQL and JPQL programming.

Xcode

Xcode may be the best IDE for Apple-platform developers. It supports iOS app development with numerous iOS tools and supports the Swift, C++ and Objective-C programming languages. With Xcode, developers can easily manage their software development workflow with quality code suggestions from the interface.

Android Studio

Android Studio is one of the best IDEs for Android app development, supporting the Kotlin and Java programming languages. Notable features include push alerts, camera integrations and other mobile technology features. Developers can also create build variants and different APKs with this flexible IDE, which offers extended template support for Google services.

AWS Cloud9 IDE

The AWS Cloud9 IDE is packed with a terminal, a debugger and a code editor, and it supports popular programming languages such as Python and PHP. Because it is cloud-based, software developers can work on their projects from almost anywhere in the world, as long as they have a computer connected to the internet. Developers can build serverless applications with Cloud9 and easily collaborate with different teams in different development environments.

Eclipse

Eclipse is one of the most popular IDEs. It is a cross-platform tool with a powerful user interface that supports drag and drop, and it includes important features such as static analysis tools and debugging and profiling capabilities.
Eclipse is enterprise-development friendly and lets developers work easily on scalable, open-source software projects. Although Eclipse is best known for Java, it supports multiple programming languages, and users can add their preferred plugins to support their software development projects.

Zend Studio

Zend Studio is a leading PHP IDE designed to support PHP developers in both web and mobile development. It features advanced debugging capabilities and a code editor, with a large community supporting its users. PHP developers are likely to stick with Zend for a long time, as it has consistently proven to be a reliable option for server-side programming. Furthermore, programmers can take advantage of Zend Studio's plugin integrations to streamline deploying PHP applications on any server.

PhpStorm

PhpStorm is another option to consider for PHP web development. Although it focuses on the PHP programming language, front-end languages such as HTML5, CSS, Sass and JavaScript are also supported, as are popular website-building tools including WordPress, Drupal and Laravel. It offers simple navigation along with code completion, testing, debugging and refactoring capabilities. PhpStorm comes with built-in developer tools that let users perform routine tasks directly from the IDE, including version control, remote deployment, Composer and Docker support.

Arduino IDE

Arduino is another top open-source, cross-platform IDE that helps developers write clean code, with an option to share it with other developers. It offers both online and local code-editing environments. Developers who want to carry out sophisticated tasks without straining computer resources love it for how simple it is to use. The Arduino IDE includes support for the newest Arduino boards.
Additionally, it offers a more contemporary editor and a dynamic UI with autocompletion, code navigation and live debugging.

NetBeans

No list of the best IDEs for web development is complete without NetBeans. It is among the most popular options because it is a no-nonsense tool for Java, JavaScript, PHP, HTML5, CSS and more. It helps users write bug-free code by highlighting code syntactically and semantically, and it ships with a set of powerful refactoring tools while remaining open source.

RubyMine

Although RubyMine primarily supports Ruby, it also works well with JavaScript, CSS, Less, Sass and other languages. The IDE includes key automation features such as code completion, syntax and error highlighting, and an advanced search for any class or symbol.

WebStorm

WebStorm is an excellent IDE for programming in JavaScript. It features live error detection, code autocompletion, a debugger and unit testing, and it comes with useful integrations to aid web development, including GitHub, Git and Mercurial.

Factors to consider when picking an IDE

Programming language support

An IDE should support the programming language used in your software development projects.

Customizable text editors

Some IDEs offer the ability to customize the graphical user interface. Check whether your preferred IDE has this feature, because it can increase productivity.

Unit testing

Check whether the IDE can substitute mock objects for some sections of the code. This feature lets you test code right away without completing every section.

Source code library

You may also wish to consider whether the IDE ships with resources such as scripts and source code.

Error diagnostics and reports

For new programmers especially, it is good to have an IDE that can automatically detect errors in code. Keep this factor in mind if you will need the feature.
Code completion

Some IDEs are designed to intelligently complete lines of code, especially when it comes to tag closing. If you want to save coding time on tag closing, check for IDEs that offer this option.

Integrations and plugins

Do not forget to check the integration features before making a choice.

Code search

Some IDEs offer a code search option to help you find elements quickly in your code. Look for IDEs that support this productivity feature.

Hierarchy diagrams

If you often work on larger projects with numerous files and scripts that all interact in a certain way, look for IDEs that can organize and present those scripts in a hierarchy. By displaying a hierarchy diagram, this feature helps programmers observe the order of file execution and the relationships between different files and scripts.

Model-driven development

Some IDEs help turn models into code. If you like creating models first, consider this factor before choosing an IDE.
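The mock-object testing mentioned under "Unit testing" above can be sketched with Python's standard `unittest.mock` module. The `format_greeting`/`fetch_user` names are hypothetical stand-ins for a piece of code and its unfinished (or remote) dependency:

```python
from unittest import mock

# Code under test: it depends on an external "client" object that may
# not be implemented yet, or may hit a network in production.
def format_greeting(client, user_id):
    user = client.fetch_user(user_id)
    return f"Hello, {user['name']}!"

# A mock stands in for the missing section, so this unit can be tested now.
fake_client = mock.Mock()
fake_client.fetch_user.return_value = {"name": "Ada"}

assert format_greeting(fake_client, 42) == "Hello, Ada!"
fake_client.fetch_user.assert_called_once_with(42)
print("unit tested without a real backend")
```

IDEs with unit-testing support typically run a test like this on demand (or on save) and report the pass/fail result inline, next to the code it covers.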
e=document.getElementById(google_tag_manager["GTM-57GHMWX"].macro(6)); k&&d.mode.includes("USP")&&d.jurisdiction.includes(d.location.toUpperCase())&&null!==e&&(e.innerHTML+='We use cookies and other data collection technologies to provide the best experience for our customers. You may request that your data not be shared with third parties here: \x3ca href\x3d"#" onclick\x3d"window.__uspapi(\'displayUspUi\');"\x3eDo Not Sell My Data\x3c/a\x3e.',e.classList.add("ccpa-msg-added"),window.__uspapi("setUspDftData",1,function(C,p){p||console.log("Error: USP string not updated!")}))})} var B=(new Date).getTime(),F=google_tag_manager["GTM-57GHMWX"].macro(7),t="https://test.cmp.quantcast.com/GVL-v2/vendor-list.json",D="https://cmp.quantcast.com".concat("/choice/",google_tag_manager["GTM-57GHMWX"].macro(8),"/",F,"/.well-known/noniab-vendorlist.json").concat("?timestamp\x3d",B),q,u,w,f=[],v=[],c=[],g,l,m,h=[],x=[],G=[],H,I,J,K;google_tag_manager["GTM-57GHMWX"].macro(9)&&(n(),y());window.__tcfapi("addEventListener",2,function(a,d){if(d)switch(a.eventStatus){case "cmpuishown":google_tag_manager["GTM-57GHMWX"].macro(10)&&z(a);break;case "tcloaded":google_tag_manager["GTM-57GHMWX"].macro(11)&& (z(a),A(a));google_tag_manager["GTM-57GHMWX"].macro(12)&&E(a);break;case "useractioncomplete":google_tag_manager["GTM-57GHMWX"].macro(13)&&A(a)}})})(); !function(b,e,f,g,a,c,d){b.fbq||(a=b.fbq=function(){a.callMethod?a.callMethod.apply(a,arguments):a.queue.push(arguments)},b._fbq||(b._fbq=a),a.push=a,a.loaded=!0,a.version="2.0",a.queue=[],c=e.createElement(f),c.async=!0,c.src=g,d=e.getElementsByTagName(f)[0],d.parentNode.insertBefore(c,d))}(window,document,"script","https://connect.facebook.net/en_US/fbevents.js");fbq("init","657434508554909"); fbq("track","PageView"); .tce4c1f91-be80-4604-86e2-8f66a678cc65 { color: #fff; background: #222; border: 1px solid transparent; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-top { margin-top: -10px; } 
.tce4c1f91-be80-4604-86e2-8f66a678cc65.place-top::before { border-top: 8px solid transparent; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-top::after { border-left: 8px solid transparent; border-right: 8px solid transparent; bottom: -6px; left: 50%; margin-left: -8px; border-top-color: #222; border-top-style: solid; border-top-width: 6px; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-bottom { margin-top: 10px; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-bottom::before { border-bottom: 8px solid transparent; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-bottom::after { border-left: 8px solid transparent; border-right: 8px solid transparent; top: -6px; left: 50%; margin-left: -8px; border-bottom-color: #222; border-bottom-style: solid; border-bottom-width: 6px; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-left { margin-left: -10px; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-left::before { border-left: 8px solid transparent; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-left::after { border-top: 5px solid transparent; border-bottom: 5px solid transparent; right: -6px; top: 50%; margin-top: -4px; border-left-color: #222; border-left-style: solid; border-left-width: 6px; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-right { margin-left: 10px; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-right::before { border-right: 8px solid transparent; } .tce4c1f91-be80-4604-86e2-8f66a678cc65.place-right::after { border-top: 5px solid transparent; border-bottom: 5px solid transparent; left: -6px; top: 50%; margin-top: -4px; border-right-color: #222; border-right-style: solid; border-right-width: 6px; } Alitools Guide βetaLink checked

      and seriously you don't mention visual code???

    1. Author Response:

      Reviewer #3 (Public Review):

      The main goals of this study by Guan, Aflalo and colleagues were to examine the encoding scheme of populations of neurons in the posterior parietal cortex (PPC) of a person with paralysis while she attempted individual finger movements as part of a brain-computer interface task (BCI). They used these data to answer several questions: 1) Could they decode attempted finger movements from these data (building on this group's prior work decoding a variety of movements, including arm movements, from PPC)? 2) Is there evidence that the encoding scheme for these movements is similar to that of able-bodied individuals, which would argue that even after paralysis, this area is not reorganized and that the motor representations remain more or less stable after the injury? 3) Related to #2: is there beneficial remapping, such that neural correlates of attempted movements change to improve BCI performance over time? 4) Can looking at the interrelationship between different fingers' population firing rate patterns (one aspect of the encoding scheme) indicate whether the representation structure is similar to the statistics of natural finger use, a somatotopic organization (how close the fingers are to each other), or be uniformly different from one another (which would be advantageous for the BCI and connects to question #3)? Furthermore, does the best fit amongst these choices to the data change over the course of a movement, indicating a time-varying neural encoding structure or multiple overlapping processes? The study is well-conducted and uses sound analysis methods, and is able to contribute some new knowledge related to all of the above questions. These are rare and precious data, given the relatively few people implanted with multielectrode arrays like the Utah arrays used in this study. 
Even more so when considering that, to this reviewer's knowledge, no other group is recording from PPC; this manuscript is thus the first look at the attempted finger movement encoding scheme in this part of human cortex.

      An important caveat is that the representational similarity analysis (RSA) method and the resulting representational dissimilarity matrix (RDM), the workhorse analysis/metric throughout the study, capture a fairly specific question: which pairs of finger movements' neural correlates are more/less similar, and how that pattern across the pairings compares to other datasets. There are other questions that one could ask with these data (and perhaps this group will in subsequent studies), which would provide additional information about the encoding; for example, how well does the population activity correlate with the kinematics, kinetics, and predicted sensory feedback that would accompany such movements in an able-bodied person?
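      To make the second-order logic of RSA concrete, here is a minimal, hypothetical sketch of how an RDM is built and how two RDMs are compared. This is not the authors' actual pipeline; the choice of correlation distance and the function names are illustrative assumptions:

```python
import numpy as np

def compute_rdm(patterns):
    """Representational dissimilarity matrix: pairwise dissimilarity
    between condition-averaged activity patterns.
    patterns: (n_conditions, n_units) array, e.g. mean firing rates
    per attempted finger movement."""
    n = patterns.shape[0]
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # correlation distance: 1 - Pearson r between the two patterns
            rdm[i, j] = 1.0 - np.corrcoef(patterns[i], patterns[j])[0, 1]
    return rdm

def compare_rdms(rdm_a, rdm_b):
    """Second-order comparison: correlate the off-diagonal entries of
    two RDMs (e.g., single-neuron RDM vs. an fMRI RDM). This is why
    RSA can relate datasets from very different recording modalities."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
```

      Note what this comparison does and does not establish: two datasets with identical RDMs agree on which condition pairs are more/less similar, but may still differ entirely in their underlying encoding schemes and dynamics.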

      What this study shows is that the RDMs from these PPC Utah array data are most similar to motor cortical RDMs based on a prior fMRI study. It's innovative to compare effectors' representational similarity across different recording modalities, but this apparent similarity should be interpreted in light of several limitations: 1) the vastly different spatial scales (voxels spanning cm that average the activity of millions of neurons each, versus a few mm of cortex with sparse sampling of individual neurons); 2) the vastly different temporal scales (firing rates versus blood flow); 3) that dramatically different encoding schemes and dynamics could still result in the same RDMs. As currently written, the study does not adequately caveat the relatively superficial and narrow comparison being made between these data and the prior Ejaz et al. (2015) sensorimotor cortex fMRI results, beyond (some) exposition in the Discussion.

      We agree that vastly different spatiotemporal scales (comments 1 and 2) limit the chances of finding correspondence between fMRI and single-neuron recordings. We have added motivation for our comparisons to the Results and Discussion sections.

      Revised text in the Results: “We note that our able-bodied model was recorded from human PC-IP using fMRI, which measures fundamentally different features (millimeter-scale blood oxygenation) than microelectrode arrays (sparse sampling of single neurons).”

      Revised text in the Discussion: “This match was surprising because single-neuron and fMRI recordings differ fundamentally; single-neuron recordings sparsely sample 10² neurons in a small region, while fMRI samples 10⁴–10⁶ neurons/voxel (Guest and Love, 2017; Kriegeskorte and Diedrichsen, 2016). The correspondence suggested that RSA might identify modality-invariant neural organizations (Kriegeskorte et al., 2008b), so here we used fMRI recordings of human PC-IP as an able-bodied model.” “This result does obscure a straightforward interpretation of the RSA results – why does our recording area match MC better than the corresponding implant location? Several factors might contribute, including differing neurovascular sensitivity to the early and late dynamic phases of the neural response (Figure 4e), heterogeneous neural organizations across the single-neuron and voxel spatial scales (Arbuckle et al., 2020; Guest and Love, 2017; Kriegeskorte and Diedrichsen, 2016), or mismatches in functional anatomy between participant NS and standard atlases (Eickhoff et al., 2018).”

      …3) that dramatically different encoding schemes and dynamics could still result in the same RDMs…

      Regarding point 3, we agree that RSA provides a second-order correspondence (Kriegeskorte et al., 2008a) rather than direct neuron-to-neuron comparisons. To supplement RSA, we also provide more detail on single-neuron responses for the reader in Figure 1–figure supplement 5. However, we believe that population metrics helpfully summarize the computational strategies of recorded brain regions (Cunningham and Yu, 2014; Saxena and Cunningham, 2019), so we focus on population comparisons here.

      Relatedly, the study would benefit from additional explanation for why the comparison is being made to able-bodied fMRI data, rather than similar intracortical neural recordings made in homologous areas of non-human primates (NHPs), which have been traditionally used as an animal model for vision-guided forelimb reaching. This group has an illustrious history of such macaque studies, which makes this omission more surprising.

      We agree that similar intracortical recordings from homologous areas of NHPs would be useful to construct an able-bodied model. While our lab has historically studied NHP reaching and grasping, we unfortunately did not perform any analogous experiments involving individuated finger movements. We have updated the Discussion to clarify this.

      Revised text in the Discussion: “We asked whether participant NS’s BCI finger representations resembled that of able-bodied individuals or whether her finger representations had reorganized after paralysis. Single-neuron recordings of PC-IP during individuated finger movements are not available in either able-bodied human participants or non-human primates. However, many fMRI studies have characterized finger representations (Ejaz et al., 2015; Kikkert et al., 2021, 2016; Yousry et al., 1997), and representational similarity analysis (RSA) has previously shown RDM correspondence between fMRI and single-neuron recordings of another cortical region (inferior temporal cortex) (Kriegeskorte et al., 2008b).”

      A second area in which the manuscript in its current form could better set the context for its reader is in how it introduces their motivating question of "do paralyzed BCI users need to learn a fundamentally new skillset, or can they leverage their pre-injury motor repertoire". Until the Discussion, there is almost no mention of the many previous human BCI studies where high performance movement decoding was possible based on asking participants to attempt to make arm or hand movements (to just list a small number of the many such studies: Hochberg et al 2006 and 2012, Collinger et al 2013, Gilja et al 2015, Bouton et al 2016, Ajiboye, Willett et al 2017; Brandman et al 2018; Willett et al 2020; Flesher et al 2021). This is important; while most of these past studies examined motor (and somatosensory) cortex and not PPC (though this group's prior Aflalo, Kellis et al 2015 study did!), they all did show that motor representations remain at least distinct enough between movements to allow for decoding; were qualitatively similar to the able-bodied animal studies upon which that body of work was build; and could be readily engaged by the user just by attempting/imagining a movement. Thus, there was a very strong expectation going into this present study that the result would be that there would be a resemblance to able-bodied motor representational similarity. While explicitly making this connection is a meaningful contribution to the literature by the present study (and so is comparing it to different areas' representational similarity), care should be taken not to overstate the novelty of retained motor encoding schemes in people with paralysis, given the extensive prior work.

      We agree that multiple previous BCI studies instruct participants to attempt arm/hand movements and that these studies are important to discuss. We have updated the Introduction/Discussion to include these references.

      Our work does fill in two important gaps in the existing literature. First, prior BCI studies had shown general resemblance between able-bodied and BCI movement, but previous human BCI studies had not shown whether the details of pre-injury representations are preserved. We have also updated the manuscript to describe a second motivation: that outside of the BCI community, neuroscientists do not agree on whether BCI studies of tetraplegic humans generalize to able-bodied movement, given the potential for reorganization after severe injury. In the Discussion sections of several recent BCI studies (Armenta Salas et al., 2018; Fifer et al., 2021; Flesher et al., 2016; Stavisky et al., 2019; Willett et al., 2020), the authors addressed whether the newly discovered phenomena were simply artifacts of reorganization (we believe not).

      Revised text in the Introduction: Understanding plasticity is necessary to develop brain-computer interfaces (BCIs) that can restore sensorimotor function to paralyzed individuals(Orsborn et al., 2014). First, paralysis disrupts movement and blocks somatosensory inputs to motor areas, which could cause neural reorganization (Jain et al., 2008; Kambi et al., 2014; Pons et al., 1991). Second, BCIs bypass supporting cortical, subcortical, and spinal circuits, fundamentally altering how the cortex affects movement. Do these changes require paralyzed BCI users to learn fundamentally new motor skills (Sadtler et al., 2014), or do paralyzed participants use a preserved, pre-injury motor repertoire (Hwang et al., 2013)? Several paralyzed participants have been able to control BCI cursors by attempting arm or hand movements (Ajiboye et al., 2017; Bouton et al., 2016; Brandman et al., 2018; Collinger et al., 2013; Gilja et al., 2015; Hochberg et al., 2012, 2006), hinting that motor representations could remain stable after paralysis. However, the nervous system’s capacity for reorganization (Jain et al., 2008; Kambi et al., 2014; Kikkert et al., 2021; Pons et al., 1991) still leaves many BCI studies speculating whether their findings in tetraplegic individuals also generalize to able-bodied individuals (Armenta Salas et al., 2018; Fifer et al., 2021; Flesher et al., 2016; Stavisky et al., 2019; Willett et al., 2020). A direct comparison, between BCI control and able-bodied neural control of movement, would help address questions about generalization.

      In the revised Discussion, we further contextualize our study in the prior work. In particular, as BCI studies have made fundamental neuroscience discoveries, they have had to address whether their results generalize to able-bodied individuals. Direct comparisons between able-bodied movement and tetraplegic BCI movement, like our study, help to bridge this gap.

      Revised text in the Discussion: Early human BCI studies (Collinger et al., 2013; Hochberg et al., 2006) recorded from the motor cortex and found that single-neuron directional tuning is qualitatively similar to that of able-bodied non-human primates (NHPs) (Georgopoulos et al., 1982; Hochberg et al., 2006). Many subsequent human BCI studies have also successfully replicated results from other classical NHP neurophysiology studies (Aflalo et al., 2015; Ajiboye et al., 2017; Bouton et al., 2016; Brandman et al., 2018; Collinger et al., 2013; Gilja et al., 2015; Hochberg et al., 2012), leading to the general heuristic that the sensorimotor cortex retains its major properties after spinal cord injury (Andersen and Aflalo, 2022). This heuristic further suggests that BCI studies of tetraplegic individuals should generalize to able-bodied individuals. However, this generalization hypothesis has so far lacked direct, quantitative comparisons between tetraplegic and able-bodied individuals. Thus, as human BCI studies expand beyond replicating results and begin to challenge conventional wisdom, neuroscientists have questioned whether cortical reorganization could influence these novel phenomena (see Discussions of (Andersen and Aflalo, 2022; Armenta Salas et al., 2018; Chivukula et al., 2021; Fifer et al., 2021; Flesher et al., 2016; Stavisky et al., 2019; Willett et al., 2020)). As an example of a novel discovery, a recent BCI study found that the hand knob of tetraplegic individuals is directionally tuned to movements of the entire body (Willett et al., 2020), challenging the traditional notion that primary somatosensory and motor subregions respond selectively to individual body parts (Penfield and Boldrey, 1937). Given the brain’s capacity for reorganization (Jain et al., 2008; Kambi et al., 2014), could these BCI results be specific to cortical remapping? Detailed comparisons with able-bodied individuals, as shown here, may help shed light on this question.

      The final analyses in the manuscript are particularly interesting: they examine the representational structure as a function of a short sliding analysis window, which indicates that there is a more motoric representational structure at the start of the movement, followed by a more somatotopic structure. These analyses are a welcome expansion of the study scope to include the population dynamics, and they provide clues as to the role of this activity / the computations this area is involved in throughout movement (e.g., the authors speculate the initial activity is an efference copy from motor cortex, and the later activity is a sensory-consequence model).

      An interesting result in this study is that the participant did not improve performance at the task, and the neural representations of each finger did not change to become more separable by the decoder, at least not over 4,016 trials. This was despite ample room for improvement (the performance was below 90% accuracy across 5 possible choices). The authors provide several possible explanations for this in the Discussion. Another possibility is that the nature of the task impeded learning, because feedback was delayed until the end of the 1.5-second attempted movement period (at which time the participant was presented with text reporting which finger's movement was decoded). This is a very different discrete-and-delayed paradigm from the continuous control used in prior NHP BCI studies that showed motor learning (e.g., Sadtler et al 2014 and follow-ups; Vyas et al 2018 and follow-ups; Ganguly & Carmena 2009 and follow-ups). It is possible that having continuous visual feedback about the BCI effector is more similar to the natural motor system (where there is consistent visual, as well as proprioceptive and somatosensory, feedback about movements), and thus better engages motor adaptation/learning mechanisms.

      We agree that different BCI paradigms could better engage motor adaptation and learning, although it is interesting that participant NS did not improve her performance simply by attempting “natural” finger movements. To better caveat our findings, we have revised our manuscript as suggested.

      Revised text in the Discussion: “The stability of finger representations here suggests that BCIs can benefit from the pre-existing, natural repertoire (Hwang et al., 2013), although learning can play an important role under different experimental constraints. In our study, the participant received only a delayed, discrete feedback signal after classification (Figure 1a). Because we were interested in understanding participant NS’s natural finger representation, we did not artificially perturb the BCI mapping. When given continuous feedback, however, participants in previous BCI studies could learn to adapt to within-manifold perturbations to the BCI mapping (Ganguly and Carmena, 2009; Sadtler et al., 2014; Sakellaridi et al., 2019; Vyas et al., 2018). BCI users can even slowly learn to generate off-manifold neural activity patterns when the BCI decoder perturbations were incremental (Oby et al., 2019). Notably, learning was inconsistent when perturbations were sudden, indicating that learning is sensitive to specific training procedures. So far, most BCI learning studies have focused on two-dimensional cursor control. To further understand how much finger representations can be actively modified, future studies could benefit from perturbations (Kieliba et al., 2021; Oby et al., 2019), continuous neurofeedback (Ganguly and Carmena, 2009; Oby et al., 2019; Vyas et al., 2018), and additional participants.”

      Overall the study contributes to the state of knowledge about human PPC cortex and its neurophysiology even years after injury when a person attempts movements. The methods are sound, but are unlikely (in this reviewer's view) to be widely adopted by the community. Two specific contributions of this study are 1) that it provides an additional data point that motor representations are stable after injury, lowering the risk of BCI strategies based on PPC recording; and 2) that it starts the conversation about how to make deeper comparisons between able-bodied neural dynamics and those of people unable to make overt movements.

    1. So it’s just me being indulgent. If I’ll have something that I have in a folder and I can’t find a way to fit it in that isn’t distracting or annoying for the reader, I’ll put it in a footnote.

      Annotation marks “being indulgent”: “If I’ll have something that I have in a folder and I can’t find a way to fit it in that isn’t distracting or annoying for the reader, I’ll put it in a footnote.” From a lovely 2014 Mental Floss interview with author Mary Roach. #Annotate22 209/365

    1. maya let me know that there is actually some history of composers recording piano rolls in the early 20th century (i.e. "classical" composers, not just ragtime or saloon music, which is the context in which I usually think of piano rolls being used)—and music-knowers don't think of them any differently than a traditionally recorded performance. Neat! Yet another reason I should've waited to blog about it instead of firing off a post, because I might've thought to look that sort of thing up first. Then again, I might not've. Don't know what I don't know, and all that.

      Another dynamic, though: I might not have thought to reply with the fun fact to a blog post, because it seems heavier-weight, less chattily conversational. I should likely adjust my tooling to make this easier, but I'll bet there are others for whom the difference in likelihood of response is even more pronounced. Maybe it's good to do one's tentative workshopping in public if you think other people chiming in might be useful!

    1. And since they can already create any sort of document in a tool that requires no abstraction, it's just a hard sell.

      !- searched - for : WYSIWYM - abstraction is a hard sell

    1. It just is a positive feedback process that passes through some threshold and goes critical. And so I would say that’s the sense [in which] capitalism has always been there. It’s always been there as a pile with the potential to go critical, but it didn’t go critical until the Renaissance, until the dawn of modernity, when, for reasons that are interesting, enough graphite rods get pulled out and the thing becomes this self-sustaining, explosive process.

      In an earlier essay, "Meltdown", he said

      The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalitization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.

    1. Krishna Gade took a job at Facebook just after the 2016 election, working to improve news-feed quality. While there, he developed a feature, called “Why am I seeing this post?,” that allowed a user to click a button on any item that appeared in her Facebook feed and see some of the algorithmic variables that had caused the item to appear. A dog photo might be in her feed, for example, because she “commented on posts with photos more than other media types” and because she belonged to a group called Woofers & Puppers. Gade told me that he saw the feature as fostering a sense of transparency and trust. “I think users should be given the rights to ask for what’s going on,” he said. At the least, it offered users a striking glimpse of how the recommender system perceived them. Yet today, on Facebook’s Web site, the “Why am I seeing this post?” button is available only for ads. On the app it’s included for non-ad posts, too, but, when I tried it recently on a handful of posts, most said only that they were “popular compared to other posts you’ve seen.”

      This is the kind of requirement I wish they'd put in. I should be able to know how an automated decision to show me something was arrived at

    1. We don’t expect National Defence or health care to promote growth: we just accept that territorial integrity and a healthy populace are good things.

      Been making that point about health (especially since, like education, it's a provincial jurisdiction). It's easy to think of perverse incentives if a profit motive dominates education and health. Physicians would want people to remain sick and teachers would prefer it if learners required more assistance.

      Hadn't thought enough about the DND part. Sure gives me pause, given the amounts involved. Or the fact that there's a whole lot of profit made in that domain.

      So, businesspeople are quick to talk about "cost centres". Some of them realize that those matter a whole lot.

    1. You will agree with me that the necessary information about persons in the position of Lady Verinder and Mr. Blake, would be perfectly easy information to obtain.

      It's important to note that it is perfectly legal for anyone to see the will of any dead person, as it is considered a public document. The person requesting the will would just have to pay a fee at a public office.

    2. we looked at the tide, oozing in smoothly, higher and higher, over the Shivering Sand.

      There is a description of nature between the conversation, and it is relevant to the two characters. The choppy water expresses their complex and subtle feelings. Franklin was shocked and drew the inference that the accidents may not be mere coincidence. For Betteredge, this was the first time he heard the story, just as it is for us. The moment he looked at the tide was the moment he sorted out his thoughts.

    3. I declare, on my word of honour, that what I am now about to write is, strictly and literally, the truth

      This lets us know that our narrator(s) are retelling events that happened and will be reflecting throughout the novel. What's interesting is that they say it's nothing but "the truth" but if this novel is all from multiple perspectives, then there is going to be some form of bias with each account. Just something to take into account that this is how they interpreted every event and we, the readers, will most likely have to actively put together some pieces while reading to try to fully understand what happened.

    1. Yes, it’s making it easier than ever to write code collaboratively in the browser with zero configuration and setup. That’s amazing! I’m a HUGE believer in this mission.

      Until those things go away.

      A case study: DuckDuckHack used Codio, which "worked" until DDG decided to call it a wrap on accepting outside contributions. DDG stopped paying for Codio, and because of that, there was no longer an easy way to replicate the development environment—the DuckDuckHack repos remained available (still do), but you can't pop over into Codio and play around with it. Furthermore, because Codio had been functioning as a sort of crutch to paper over the shortcomings in the onboarding/startup process for DuckDuckHack, there was never any pressure to make sure that contributors could easily get up and running without access to a Codio-based development environment.

      It's interesting that, no matter how many times cloud-based Web IDEs have been attempted and failed to displace traditional, local development, people keep getting suckered into it, despite the history of observable downsides.

      What's also interesting is the conflation of two things:

      1. software that works by treating the Web browser as a ubiquitous, reliable interpreter (in a way that neither /usr/local/bin/node nor /usr/bin/python3 are reliably ubiquitous)—NB: and running locally, just like Node or Python (or go build or make run or...)—and

      2. the idea that development toolchains aiming for "zero configuration and setup" should defer to and depend upon the continued operation of third-party servers

      That is, even though the Web browser is an attractive target for its consistency (in behavior and availability), most Web IDE advocates aren't actually leveraging its benefits—they still end up targeting (e.g.) /usr/local/bin/node and /usr/local/python3—except the executables in question are expected to run on some server(s) instead of the contributor's own machine. These browser-based IDEs aren't so browser-based after all, since they're just shelling out to some non-browser process (over RPC over HTTP). The "World Wide Wruntime" is relegated to merely interpreting the code for a thin client that handles its half of the transactions to/from said remote processes, which end up handling the bulk of the computing (even if that computing isn't heavyweight and/or the client code on its own is full of bloat, owing to the modern trends in Web design).

      It's sort of crazy how common it is to encounter this "mental slippery slope": "We can lean on the Web browser, since it's available everywhere!" → "That involves offloading it to the cloud (because that's how you 'do' stuff for the browser, right?)".

      So: want to see an actual boom in collaborative development spurred by zero-configuration dev environments? The prescription is straightforward: make all these tools truly run in the browser. The experience we should all be shooting for resembles something like this:

      Step 1: clone the repo

      Step 2: double click README.html

      Step 3: you're off to the races—because project upstream has given you all the tools you need to nurture your desire to contribute
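      A minimal sketch of the "everything ships with the repo" idea in plain JavaScript (all names here are hypothetical, not taken from any of the linked projects): a zero-dependency file that runs from a script tag in README.html or from a local interpreter, so no third-party service shutdown can strand contributors.

```javascript
// Hypothetical sketch: repo-local tooling targeting baseline ECMAScript,
// so the browser itself is the "zero configuration" runtime.

// Detect which interpreter is running this file. In a browser, `window`
// and `window.document` exist; under Node (or another local runtime),
// they do not.
function detectRuntime() {
  if (typeof window !== "undefined" && typeof window.document !== "undefined") {
    return "browser";
  }
  return "local";
}

// A toy onboarding step: the contributor guide is generated from data
// committed to the repo, not fetched from a remote service.
function renderContributorGuide(projectName, steps) {
  const lines = ["Contributing to " + projectName + ":"];
  steps.forEach((step, i) => lines.push("  " + (i + 1) + ". " + step));
  return lines.join("\n");
}
```

      Double-clicking a README.html that includes such a script would render the guide with no server round-trips, and the same file works unchanged under node for contributors who prefer a terminal.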

      You can also watch this space for more examples of the need for an alternative take on working to actually manage to achieve the promise of increased collaboration through friction-free (or at least friction-reduced) development: * https://hypothes.is/search?q=%22the+repo+is+the+IDE%22 * https://hypothes.is/search?q=%22builds+and+burdens%22

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



      Reply to the reviewers

      Manuscript number: RC-2022-01501

      Corresponding author(s): Prachee Avasthi

      [The “revision plan” should delineate the revisions that authors intend to carry out in response to the points raised by the referees. It also provides the authors with the opportunity to explain their view of the paper and of the referee reports.


      The document is important for the editors of affiliate journals when they make a first decision on the transferred manuscript. It will also be useful to readers of the reprint and help them to obtain a balanced view of the paper.


      If you wish to submit a full revision, please use our "Full Revision" template. It is important to use the appropriate template to clearly inform the editors of your intentions.]

      1. General Statements [optional]

      This section is optional. Insert here any general statements you wish to make about the goal of the study or about the reviews.


      We thank the reviewers for their careful reading and evaluation of our manuscript. The reviewers have emphasized the need for several important changes which we plan to address.

      First, they request better evidence and specificity of the BCI target in Chlamydomonas. We have created double mutants between the dusp6 ortholog mutants and found severe defects in ciliogenesis similar to what we see with BCI treatment. We plan to include this data in the paper as well as the subsequent analyses we performed with the single dusp6 ortholog mutants. This data will provide stronger evidence that this pathway regulates ciliary length in Chlamydomonas aside from the other potential off target effects that could be impacting this pathway that we may be seeing through the use of BCI.

      Second, the reviewers have requested more consistency and clarity both in statistics and descriptions of the data and to expand upon our findings in the discussion. We will create a clear guideline for our use of statistics and adjust the descriptions of the data to fit this guideline more strictly and prevent overstating/oversimplifying results. We will also add more discussion and information related to off target effects of BCI, the importance of the subtle defects in NPHP4 protein expression in the transition zone, and the relevancy of the membrane trafficking data in light of this study.

      2. Description of the planned revisions

      Insert here a point-by-point reply that explains what revisions, additional experimentations and analyses are planned to address the points raised by the referees.


      Reviewer #1 (Evidence, reproducibility and clarity (Required)):


      SUMMARY:


      The authors investigated the effects of an allosteric inhibitor of DUSP (BCI) on cilia length regulation in Chlamydomonas. Among seven conclusions summarized in Fig. 7, BCI is found to severely disrupt cilia regeneration and microtubule reorganization. Additionally, changes in kinesin-II dynamics, ciliary protein synthesis, transition zone composition and membrane trafficking are also explored. All these aspects have been shown to affect cilia length regulation. Findings from this body of work may give insights into how MAPK, a major player in cilia length regulation, functions in various avenues. Additionally, the study of BCI and other specific phosphatase inhibitors may provide a unique addition to the toolset available to uncover this important and complicated mechanism.

      MAJOR COMMENTS

      Major comment 1

      The addition of BCI increases phosphorylated MAPK in Chlamydomonas based on Fig 1B. However, the claim that BCI inhibits Chlamydomonas MKPs is not supported at all. SF1A shows CrMKP2, 3 and 5 are related to each other but distant from HsDUSP6 and DrDUSP6. At the same time, 2 out of 3 predicted BCI-interacting residues are different from the Hs and Dr DUSP6 in SF1B, contradicting "well conserved" in line 172. Consistently, mutants of these orthologs have little to no ciliary length and regeneration defects compared to BCI treatment (see major comment 6 about statistical significance). I am not convinced that BCI inhibits the identified orthologs or any MKPs in Chlamydomonas. It's possible that BCI inhibits a broad range of phosphatases including the ones listed and/or those for upstream kinases. But such a point is not demonstrated by the presented data.

      While BCI is predicted to interact with these residues, it is also predicted to interact with the “general acid loop backbone” by fitting in between the a7 helix and the acid loop backbone (Molina et al., 2009).

      MKP2 has ciliary length defects compared to wild type, though it regenerates normally. In addition, we have crossed these mutants together and have found that cells (2x3 12.2 and 3x5 29.4) cannot generate cilia. We will include this data in the supplement and perform follow-up analyses on these double mutants. Because these structures are not 100% conserved (we have changed the text to "partially conserved" to reflect this), it is possible that BCI is hitting all of these DUSPs rather than just one, or the DUSPs may serve compensatory functions that rescue ciliary length.

      Major comment 3

      The claims that "BCI inhibits KAP-GFP protein expression" (line 271) and "BCI inhibits ciliary protein synthesis" (line 286) are not convincingly demonstrated. Overlooking that only KAP is investigated instead of kinesin-II, none of the relative intensities from the WB in 30 or 50 µM BCI or the basal body fluorescence intensity indicates a statistically significant difference. The washout made no difference in any of the assays, and it's not explained how phosphatase inhibition by BCI might affect overall ciliary protein synthesis. The claims about protein expression may need a fair amount of effort and time investment to demonstrate, therefore I suggest leaving these out for this manuscript.

      Though it's very interesting to see that in SF 2C cilia in 20 µM BCI treatment can regenerate slowly. Line 162, the author claimed "In the presence of (30 µM) BCI, cilia could not regenerate at all (Fig 1E)". Since Fig 1E only extends to 2 hours, I think it's important to clarify whether in 30 µM BCI cilia indeed cannot regenerate even after 6 or 8 hours.

      We have altered the text to be more specific with our wording that KAP-GFP is investigated rather than kinesin-2, and we have added text to indicate that downstream phosphorylation events could impact transcription and translation of proteins necessary for ciliary maintenance. The interpretation of the data mentioned above is correct; KAP-GFP is not significantly altered at the basal bodies or in accordance with the steady state western blots. What we see here, as demonstrated in Figure 2F-I, is depleted KAP-GFP protein that is not restored following a 2-hour regeneration in BCI. We likely do not see a difference in steady state conditions because the protein is not degraded, just being moved around in the cell. We can only see the difference when the majority of KAP-GFP, which the data suggests is mostly present in cilia, is physically removed through ciliary shedding. This protein is not replaced during a 2-hour regeneration, which allows us to conclude that this protein is inhibited by BCI.

      The washout made a small difference in the double regeneration, whereby cilia begin to form in washed-out conditions, though this was not statistically significant. It is possible that BCI has a potent effect on the cell similar to how other drugs, such as colchicine, cannot be easily washed out. The purpose here is to show that, regardless of the statistical significance, cells can begin to regenerate their cilia after BCI washout, though this occurs 4 hours after washout in doubly regenerated cells, and we do not see this potent effect on the singly regenerated cells in SF 2C. Though in SF 2C, as mentioned, we do see slowly growing cilia, and this could, once again, be due to the potent inhibition BCI has on ciliary protein synthesis. We will confirm and clarify whether cilia in 30 µM BCI cannot regenerate even after 6 or 8 hours.

      Major comment 5

      It is very interesting that BCI disrupts microtubule reorganization induced by deciliation and colchicine. Data in Fig 6B and C are presented differently than those in SF 4C. For example, in SF 4C, BCI treatment for 60 min has close to 50% of cells with microtubules partially reorganized, while in Fig 6C about 20% of cells have microtubules fully (or combined?) reorganized. The nature of the difference is unclear to me without an assay comparing the two directly. Hence the implied claim that BCI affects colchicine-induced microtubule reorganization differently than the deciliation-induced one is hard to interpret (line 398, line 388 vs line 403).


      The fact that taxol doesn't rescue the cilia regeneration defect caused by BCI is very interesting. Here taxol treatment results in fully regenerated cilia while Junmin Pan's group (Wang et. al., 2013) reported much shorter regenerated cilia. It might be worthwhile to compare the experimental variance as this is a key data point in both instances. The relationship between cilia regeneration and microtubule dynamics is not in one direction. On one side, there's a significant upregulation of tubulin after deciliation. On the other, many microtubule depolymerization factors such as katanin and kinesin-13 positively regulate cilia assembly (though not without exceptions). It is hard to determine that the BCI-induced cilia regeneration defect can't be rescued by other forms of microtubule stabilization. Microtubule reorganization is one of the most striking defects related to BCI treatment. I suggest changing the oversimplified claim to a more limited one (such as "PTX-stabilized microtubule ...") and an expansion of the discussion about microtubule dynamics and cilia length regulation beyond the use of taxol. Meanwhile, I strongly encourage the authors to continue to investigate this aspect and its connection to cilia regeneration.

      We will remove data regarding “partially” formed cytoplasmic microtubules and only include fully formed for each of these experiments for clarity.

      It is important to note the different taxol concentrations used here. While Wang et al., 2013 used 40 µM taxol to study ciliary effects, we use 15 µM, where stabilization still occurs. There have been reports of varied cell responses to higher vs. lower doses of taxol (see Ikui et al., 2005, Pushkarev 2009, Yeung 1999), mostly with regard to the cell's mitotic/apoptotic response. We could be seeing altered responses at this lower concentration because Chlamydomonas cells also behave differently in higher vs. lower taxol concentrations. Thank you for your suggestions. We have adjusted the text to be more specific to PTX treatment as opposed to general stabilization.

      Major comment 7:

      There are several places where the technical detail or presentation of the data are missing or clearly erroneous.

      Fig 1B: pMAPK and MAPK antibodies used in the WB are not described in the Material and methods. It's not clear if the same #9101, CST antibody used for RPE1 cell in Fig 1J is used.

      We have updated the materials and methods to include that this antibody was used for both RPE1 and Chlamydomonas cells.


      line 260 and Fig 3A state 20 µM BCI was used while Fig 3 legend repeatedly states 30 µM until (J). Also 30 µM in SF 2A.

      We have corrected the text to 20 µM BCI in the mentioned places.

      Fig 6C, the two lines under the p value on top most likely start from the second column (B) instead of the first (D). Fig 6G, the line is perhaps intended for the second and fourth columns?

      We will make these comparisons more clear. We had performed a chi-square analysis and were comparing the difference between DMSO and BCI before PTX stabilization or MG132 treatment to after. We will add brackets to more clearly show these comparisons.

      Fig 6C, legends indicate bars representing each category. But only one bar is shown for each column. Same for 6G?

      This is the same as the previous comment for the way we represented the statistics. We will make this clearer with brackets to show the comparisons.

      Minor comments:

      1. A number of small errors in text were noted above.

      Done.

      "orthologs" is misused in place of "ortholog mutants": line 176, 352, 421 (first), 879, 882, 898, 902, 938 , 939.

      Done.

      Capitalized names are misused as mutant names (e.g. "MKP2" should be "mkp2"): line 178, SF 1C, 1D and 1E, SF 3C, SF 6A

      Done.

      In several places the statistical comparison lines indicated are chosen confusingly. The simplest example is in Fig 1D, where the comparison between 0 and 45 is less important than 0 and 30. Same as in Fig 1H, 1I. The line ends are inconsistent as well. They either end in the middle or at the edge of the columns/data points (such as in SF 4B) and some with vertical lines (SF 2B, SF 4A, SF 6B). I suggest adding vertical lines pointing to the middle to indicate the compared datasets clearly.

      Thank you for this suggestion. We agree and will update the figures to reflect this and provide clarity for statistical comparisons.

      line 101 remove "the"

      Done.

      line 120 "modulate" to "alter"

      Done.

      line 198 "N=30" should be "N=3"

      Done.

      line 212. The legend for p value is likely for (G)

      Done.

      line 284, "singly" should be "single"

      Done.

      The dataset for "Pre" and "0m" in Fig 6D and 6E are clearly the same. Consider combining the two as in Fig 6C.

      This is correct. We will combine the data sets.

      Fig 6E, "BCI" on the X-axis should be "DMSO".

      This is correct. We will correct this.

      line 685, remove "?".

      Done.

      line 894: "Fig 3J" instead of "Fig 3H"

      Done.

      SF 1 legend, (C) and (D) are inverted.

      Done.

      SF 4A "Recovered" should be "Full"

      Done.

      SF 5, row 5, under second arrow perhaps missing +PTX

      Done. We greatly appreciate this close reading of the text and the list of changes making these errors easy to find. We will make these changes in the manuscript.

      Reviewer #1 (Significance (Required)):


      Increasing evidence indicates that several MAPKs activated by phosphorylation negatively control cilia length, while few studies focus on how MAPK dephosphorylation affects cilia length regulation, largely due to the unknown identity of the phosphatase(s) specifically involved in cilia length regulation. The authors set out to investigate the effect of BCI on cilia length control. BCI specifically inhibits DUSP1 and DUSP6, both of which are known MAPK phosphatases, and therefore may provide a unique opportunity to understand how the MAPK pathway is controlled by specific phosphatase activity in cilia length regulation.


      Overlooking some inconclusive results and oversimplified interpretations, I find the most striking findings are the BCI's effects including ciliogenesis, kinesin-2 ciliary dynamics and microtubule reorganization. I believe these findings have significant relevance to the stated goal (line 131) and conclusions (line 57) and readers may find them a good starting point for further investigation of the role phosphatases play in cilia length regulation.

      Cilia length regulation is a complicated mechanism that is affected by many aspects of the cell and functions differently in various systems. My field of expertise may be summarized as cilia biology, cilia length regulation, IFT, kinesin, kinases (MAPKs), and microtubules. Membrane trafficking's role in cilia length regulation is somewhat unfamiliar to me. Additionally, the authors used a number of statistical tests and corrections in various assays. The nuance of these choices is not clear to me, and neither is it explained to general readers.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      In their manuscript, "ERK pathway activation inhibits ciliogenesis and causes defects in motor behavior, ciliary gating, and cytoskeletal rearrangement," Dougherty et al. investigate how BCI, an activator of MAPK signaling, regulates ciliary length. Despite advances in our understanding of the structure and function of cilia, a fundamental question remains as to what mechanisms control ciliary length. This is a critical question because cilia undergo dynamic changes in structure during the cell cycle: they must disassemble as cells enter the cell cycle and must rebuild after cell division. This work contributes to a growing body of work to determine mechanisms that regulate cilia length.

      The authors use a well-established model system, Chlamydomonas, to study cilia dynamics. This work expands on previous findings from these authors that inhibition of MAPK signaling using U0126 lengthens cilia as well as other publications that implicate MAPK signaling in controlling ciliary length. However, the authors only observe a few significant phenotypes with other subtle trends, leaving the conclusion regarding the role of MAPK signaling murky. Furthermore, it is unclear through what mechanism BCI impacts ciliary length. Several issues must be addressed:

      MAJOR ISSUES

      1. The basis for this study is the use of the ERK activator BCI, which the authors show activates MAPK signaling. While the authors do use putative DUSP6 ortholog mutants to corroborate some of the phenotypes, the majority of the data (and conclusions) uses BCI. However, there may be off target effects and the authors do not address this limitation of the study. The authors only use 1 pharmacological tool to manipulate MAPK signaling, so it is unclear whether these ciliary disruptions are specifically due to increased MAPK. It is necessary to clarify the following questions about BCI action to interpret the results:
      2. a. What are off target effects of BCI? Does BCI impact proliferation? Why is the BCI phenotype of cilia shortening transient and dose dependent? Why does the phenotype of cilia length and regeneration capacity in Chlamydomonas differ from both ortholog mutants and hTERT-RPE1 cells?

      While we do mention, following supplemental figure 1, that other MKPs could be the target for BCI, we also cite Molina et al., 2009, who showed specificity for BCI hydrochloride in zebrafish. BCI targets primarily DUSP6 but also exhibited some activity towards DUSP1. In this study, the authors had also used zebrafish embryos to check expression of 2 other FGF inhibitors, spry 4 and XFD, in the presence of BCI but found that their effects were not reversed. In addition, they checked the ability of BCI to suppress the activity of other phosphatases including Cdc25B, PTP1B, or DUSP3/VHR and found that BCI could not suppress these phosphatases. BCI inhibition has previously been found to be more specific to MAPK phosphatases. In addition, we have previously confirmed that U0126 has a slight lengthening effect on Chlamydomonas cilia, which further implicates this pathway in cilium length tuning (Avasthi et al. 2012).

      While cell proliferation assays may provide more support for MAPK signaling, they do not rule out off-target effects that could also contribute to this same phenotype. We do provide a cell proliferation assay for RPE1 cells where we show that higher concentrations of BCI result in cellular senescence as well (Fig 1I).

      The BCI phenotype of cilia shortening is likely transient and dose dependent due to its effect on ciliary protein synthesis demonstrated in Figure 3J. A higher drug concentration likely increases substrate binding, allowing BCI to exert its effects on the cell faster, even if this includes off-target proteins.

      In RPE1 cells, we are likely seeing differences in regeneration capacity potentially due to their different mechanisms of ciliogenesis (RPE1 cells partake in intracellular ciliogenesis where axonemal assembly begins in the cytosol whereas Chlamydomonas cells partake in extracellular ciliogenesis where axonemal assembly begins after basal bodies dock to the apical membrane), or it could be that we’re missing a delay in regeneration in RPE1 cells after waiting 48 hours for ciliogenesis. We do not check this process sooner. There may be a defect that cells overcome. Additionally, among ortholog mutants and RPE1 compared to BCI-treated wild-type Chlamydomonas, there indeed could be off target effects or the drug could be targeting all of these MKPs rather than just one. We will add this to the discussion for clarity.

      Reviewer #2 (Significance (Required)):


      see above

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      SUMMARY:

      In this study, the authors used a pharmacological approach to explore the function of the ERK pathway in ciliogenesis. It has been reported that alteration of FGF signaling causes abnormal ciliogenesis in several animal models including Xenopus, zebrafish, and mice. However, the molecular detail of how the ERK pathway is associated with the cilia assembly process remains elusive. The authors found that the ERK1/2 activator/DUSP6 inhibitor BCI inhibits ciliogenesis, highlighting the importance of ERK during ciliogenesis. Overall, this paper is well written, and the data are solid and convincing. This paper will be of great interest to many researchers who are interested in understanding ciliogenesis. The following comments are not mandatory requests but suggestions to improve the paper's significance and impact.

      MAJOR COMMENTS:

      - The combination of chemical blocker experiments was well controlled and the data are solid. The authors are aware of the side effects of BCI, thus they carefully characterized the phenotypes of mkp2/3/5 mutants in Chlamydomonas. This reviewer wonders if the levels of ERK1/2 phosphorylation are elevated in these mutants. Did the authors examine the levels of ERK1/2 phosphorylation in these mutants?

      While we do not include the data showing ERK activation in these mutants, we have checked pMAPK activation and found that it is not significantly upregulated in these mutants. This could likely be due to compensatory pathways preventing persistent pMAPK activation. For example, constant ERK activation can lead to negative feedback to regulate this signal for cell cycle progression (Fritsche-Guenther et al., 2011). The ERK pathway has not been fully elucidated in Chlamydomonas, but it is possible that these similar mechanisms are in place for MAPKs. We will include this data in the supplement.

      Reviewer #3 (Significance (Required)):


      Accumulated studies suggest that the FGF signaling pathway plays a pivotal role in ciliogenesis. Disruption of either FGF ligands or its FGF receptor results in defective ciliogenesis in Xenopus and zebrafish. On the other hand, FGF signaling negatively controls the length of cilia in chondrocytes that would cause skeletal dysplasias seen in achondroplasia. Therefore, there is strong evidence suggesting that FGF signaling participates in ciliogenesis in cell-type and tissue-context dependent manners. However, the detailed mechanism of the downstream of FGF signaling in ciliogenesis is still unclear. In this regard, this paper is beneficial for the cilia community to expand the knowledge of how ERK1/2 kinase contributes to the regulation of ciliogenesis.


      This reviewer therefore suggests that the authors may want to add more discussion to explain how their finding possibly moves the field forward to understand the pathogenesis of multiple ciliopathies.

      We will add a description of this to the discussion.

      3. Description of the revisions that have already been incorporated in the transferred manuscript

      Please insert a point-by-point reply describing the revisions that were already carried out and included in the transferred manuscript. If no revisions have been carried out yet, please leave this section empty.

      Reviewer 1:


      Major comment 4

      A single panel in Fig 4A also can't support the shift in protein density in the TZ in line 317. As line 324 implies protein synthesis defect by BCI, the very minor (in amount and significance) reduction of the NPHP4 fluorescence should not be interpreted as any disruption at all to the transition zone. I suggest checking other TZ proteins such as CEP290 etc or leave this section out.

      Also, the additive effect from BFA and BCI treatment in Fig 5A suggests BCI affects cilia length independent of the Golgi. The "actin puncta" and arpc4 mutant are not sufficiently introduced. And more importantly, how does the increase in actin puncta explain the shorter cilia length caused by BCI while actin puncta are absent in the arpc4 mutant, which also has shorter cilia? Also, the Arl6 fluorescence signal "increase" is not significant at either time point. I suggest leaving this section out as well.

      We agree that one EM image cannot support a protein shift and have removed our observation from the text. However, we do see a statistically significant decrease in NPHP4 fluorescence in BCI-treated cells, which we consider a disruption in the sense that the structural composition is altered. We will change the word "disruption" to "alteration" for clarity. Though this is a minor defect, we believe it is still worth noting. We believe this data still adds to the model that, though the EM-visible structure is unaltered, finer details within the transition zone are indeed altered, and we cannot rule out that these smaller changes impact protein entry into cilia. Awata et al. 2014 shows that NPHP4 is important for controlling trafficking of ciliary proteins at the transition zone, and its loss from the transition zone has been found to have effects on ciliary protein composition. Because we see decreased NPHP4 expression, we believe this is a notable finding, as we see effects on the abundance of a protein which is known to affect ciliary protein composition, and we have therefore chosen to leave the data in the manuscript. We will adjust the language to most accurately describe our findings.

      We also agree with the interpretation that the additive effect seen from BFA and BCI treatment could suggest independent pathway collapse separate from the Golgi which we have mentioned in the manuscript.

      We have provided more information to introduce actin puncta and ARPC4 with regards to membrane trafficking. Bigge et al. 2020 shows that ARPC4, a subunit of the ARP2/3 complex (an actin-binding complex important for nucleating actin branches), has a role in ciliary assembly. arpc4 mutants have a reduced ability to regenerate their cilia. One feature they noticed in regenerating cells is the immediate formation of actin puncta, which are reminiscent of yeast endocytic pits. This observation, in addition to altered membrane uptake pathways in Chlamydomonas, suggests that ciliogenesis involves reclaiming plasma membrane (because of the diffusion barrier preventing a contiguous membrane). Here, we incorporate this assay to assess the ability of the cell to reclaim membrane during BCI treatment and find that there are increased actin puncta. This could indicate that there is an increased number of endocytic pits or, alternatively, that the lifetime of these pits is increased (perhaps due to incomplete endocytosis) such that we are able to detect more of them at a fixed point in time. While we cannot say which is happening here, we have previously found that these actin puncta are likely endocytic and needed to reclaim membrane for early ciliogenesis. An increase in these puncta may suggest dysregulated endocytosis in one way or another. arpc4 cells cannot form the actin puncta in the first place, whereas we are seeing defects following puncta formation. We have taken out the Arl6 data.

      Major comment 6

      Throughout this manuscript, the standard the authors used to interpret statistical significance is erratic. In a few instances, the threshold for p value is clearly indicated such as in Fig 1 legend. Though other times, much higher p values are considered differences. Here are some examples:

      SF 1C, p=0.1167 is considered "(mkp5) shorter than wildtype ciliary lengths" (also line 177 "SF 1C" instead of "SF 1D")

      Fig 3C, p=0.083 interpreted as "slightly less" in line 262 and possibly as "(KAP-GFP) not being able to enter (cilia)" in line 268

      Fig 3G, p=0.1087 is considered "not decrease after two hours" line 267

      SF 3C, p=0.2929 for mkp2 mutant (misuse of "orthologs" in line 352) is considered "fewer actin puncta compared to wild type cells" (line 352).

      SF 6B, p=0.1565 for mkp3 mutant (line 421: misuse of "orthologs" and correct use of "ortholog mutants") is considered not to be able to "fully reorganize their microtubules" (line 421).

      These instances sometimes serve as basis for major conclusions and should be clarified or more carefully characterized.

      We agree that the interpretations are erratic in places, and we greatly appreciate this detailed list, which made it easy to find and correct them. We have adjusted the text in the mentioned places to reflect these changes, and we have added a statement in the text and under the statistical methods defining the p-value threshold we consider statistically significant.

      Reviewer 2:

      In multiple instances the conclusions are overstated, and the author must clarify the interpretation of the results to reflect the data presented. Here are some examples:

      • a. The conclusion that protein synthesis is disrupted is incorrect in two instances (lines 258 and 275) as the experiments in figure 3 do not directly examine changes in synthesis (they look at cilia regeneration as a proxy).

      We show that KAP-GFP expression is not normal during regeneration at 120 minutes, which suggests, in addition to the inability of cilia to grow in BCI, that synthesis is inhibited because this protein is not replaced. In addition, blocking the proteasome did not rescue the decrease in KAP-GFP expression, indicating that this is not a matter of KAP-GFP protein being degraded rapidly. We use regeneration and the KAP-GFP readout as a proxy for protein synthesis. We have clarified this in the text.

      • b. The conclusion that BCI disrupts membrane trafficking is too broad when the authors only examined trafficking of one membrane protein, Arl6.

      While we only looked at one membrane protein specifically, we assessed other membrane trafficking pathways. We compared BCI with BFA to assess Golgi trafficking (Dentler 2010), in addition to examining the formation of actin puncta, which is used in Bigge et al. 2020 as an assay for membrane uptake from the plasma membrane for incorporation into cilia.

      • c. The conclusion that the transition zone is disrupted is too broad based on a decrease in the expression of one transition zone protein, NPHP4.

      We have changed the text to be more specific to NPHP4.

      Highlighting the overstatement, the conclusion of the header and figure caption on page 10 contradict one another. The manuscript states that "BCI partially disrupts the transition zone" (line 313) and that "The TZ structure is structurally unaltered with BCI treatment" (line 329).

      In the manuscript, we show that the EM-visible structure is indeed unaltered. Because we see a decrease in NPHP4 fluorescence, we concluded that while the EM-visible structure is unaltered, protein composition within the transition zone is altered which suggests that BCI partially disrupts the transition zone.

      Why is kinesin-2 the only target studied for ciliogenesis? Ciliogenesis is a complex process that involves many other critical proteins and investigating kinesin-2 alone is not sufficient to conclude why BCI prevents cilia assembly.

      We use kinesin-2 because it is the only ciliary anterograde motor in Chlamydomonas and is required for proper ciliogenesis. By assessing kinesin-2, we were able to address whether this protein alone was the cause of inhibited ciliary assembly (we find that it is not), whether its ability to enter cilia was impacted (likely owing to defects in the entry of other proteins), and how its protein expression was affected. Because KAP-GFP is a cargo adaptor protein that interacts with IFT complexes and other cargoes, defects in this protein can have a wide range of implications. We agree, and the data agree, that kinesin-2 alone is not sufficient to explain why BCI prevents cilia assembly. Because of this, we assessed other pathways, including membrane trafficking and microtubule stabilization, to better understand why we see defects in ciliary assembly. Certainly many other proteins are important in ciliogenesis, and we hope that this study sparks further work in this area to identify additional causative explanations for impaired ciliogenesis upon MAPK activation.

      Tagged ciliary proteins are sensitive to disruptions in function and expression within cilia. It is important to include proper controls in the study using KAP-GFP Chlamydomonas cells to ensure that KAP-GFP maintains endogenous expression levels and normal function as untagged KAP. Furthermore, if this information is available through the resource where the cells were purchased, then this needs to be discussed.

      KAP-GFP expressing Chlamydomonas has previously been validated as described in Mueller et al., 2005. We will provide details in the text about validation of this strain.

      The authors need to provide clear explanations to a general audience of why this technique is used and how the authors reached the interpretations. There are several instances where the authors use techniques that are cited as fundamental papers in Chlamydomonas. Here are two examples:

      • a. It is unclear how the authors concluded that decreased frequency and velocity of train size show that kinesin entry, specifically, is disrupted.

      We have expanded on this in the text. Please see our response to Reviewer 1, Major comment 2 above.

      • b. It was impossible to follow how the experiment where cells treated with cycloheximide could not regenerate their cilia following BCI treatment shows that BCI inhibits protein synthesis.

      We have adapted the text to make this experiment clearer. In this experiment, we deplete the ciliary protein pool by forcing ciliary shedding two times. Following the first shedding, there is enough protein to assemble cilia to half length (Rosenbaum, 1969). We ensure that the protein pool is completely used up by inhibiting further ciliary protein synthesis with cycloheximide. For the second shedding event, completely new ciliary protein must be synthesized for ciliogenesis to occur, which is why ciliogenesis takes much longer compared to a single regeneration, where half of the ciliary protein pool still remains and can be immediately incorporated into cilia (SF 2C). In the presence of BCI, cilia cannot grow at all, as expected; but 4 hours after BCI is washed out, we see ciliogenesis just beginning to occur, which indicates that protein is present for ciliogenesis to begin, whereas in cells where BCI is not washed out, we do not see any ciliogenesis.

      The impact of BCI treatment on membrane trafficking as presented is confusing. BCI exacerbated the effects of BFA treatment on Golgi, yet the authors do not address that this could be an indirect effect of BCI or an off-target effect of BCI.

      This is addressed in the discussion (paragraph 4).

      The discussion section includes many interpretations of the results, but leaves the reader confused as to what the authors think might be happening. The manuscript would be far clearer if the authors would provide a working model for why BCI impacts cilia length. It is fine for this to be left for future work but, as the experts, the authors must have relevant thoughts to share with the field.

      Figure 7 provides a model with as much as we can conclude given the data: we show that BCI inhibits many different processes in the cell, but we do not necessarily show links between these processes, so we cannot yet provide a complete working model of how they are all interconnected. We have therefore provided a summary model that depicts the various, still disconnected, processes that are inhibited by BCI. MAP kinases such as ERK have dozens of downstream targets both within and outside the nucleus, and ciliogenesis is a complex process coordinating many cellular mechanisms. The intersection of the two seems to have a multi-fold effect that produces a dramatic ciliary phenotype through a combination of factors, though not one that fully explains the severity of the block upon initial deciliation with BCI/MAPK activation. Further work is needed to identify the precise cause of the complete inhibition of cilium growth from zero length.

      MINOR ISSUES

      1. The title of the manuscript is inaccurate and overstates the pathway involvement in cilia. The authors do not directly show that ERK pathway activation causes the ciliary phenotypes due to the use of BCI, a drug that modulates ERK.

      We have adjusted the title to “The ERK activator, BCI, causes…”

      When discussing results of data that are not statistically significant it creates confusion to state that the results "increased/decreased slightly".

      We agree that references to statistics are inconsistent or confusing throughout the text and have adjusted these references accordingly.

      Reviewer 3:

      Major comment:

      - If the authors want to emphasize that their finding is associated with MAP kinases, it would also be beneficial to examine other major MAP kinase pathways such as p38/JNK. If not, then this reviewer suggests revising the text to say ERK throughout the manuscript to avoid confusion.

      Because the ERK pathway has not been fully elucidated in Chlamydomonas, we have refrained from using “ERK” as a descriptor, because this particular MAPK shares equal identity with multiple MAPKs in Chlamydomonas. Further, BCI may be targeting more than one MAPK phosphatase, resulting in the myriad phenotypes we have discovered. At this time, we lack the gene-level resolution to map these effects to known MAPK pathways.


      4. Description of analyses that authors prefer not to carry out

      Please include a point-by-point response explaining why some of the requested data or additional analyses might not be necessary or cannot be provided within the scope of a revision. This can be due to time or resource limitations or in case of disagreement about the necessity of such additional data given the scope of the study. Please leave empty if not applicable.


      Reviewer 1:

      Major comment 2

      The claim that "BCI treatment decreases kinesin-2 entry into cilia" (line 236) is a misinterpretation of the data presented. The data indicates KAP-GFP have reduced accumulation in cilia, decreased IFT (anterograde) frequency, velocity and injection size associated with BCI treatment. Though as shown in Fig 1D and Fig 2C, cilia length is also shorter due to BCI treatment. Ludington et. al, 2013 showed a negative correlation of cilia length and KAP injection rate in various treatments that affect cilia length. It's essential to rule out that the KAP dynamics reported in the current manuscript is not an outcome of shortened cilia in order to claim as line 236 seems to suggest. One way to demonstrate specific effect by BCI would be to compare KAP dynamic in cilia with equal or similar length, either by only selecting the shorter cilia from wt or use other treatments that are known to decrease cilia length (chemicals, cell cycle, mutants etc.). Given the capability and resource represented in this manuscript, I don't expect a significant cost and time investment for these experiments.

      Ludington et al., 2013 shows that injection size decreases with increasing length. Our data show that the shorter cilia have decreased injection size and rate, inconsistent with shortened length alone being the cause. In other words, in figures 2C and 2G, we see decreased KAP-GFP fluorescence in shorter cilia, as opposed to the greater fluorescent signal in shorter cilia seen in Ludington et al., 2013. These data, in combination with the decreasing frequency of KAP-GFP entry over time in figure 2E and the decreased velocity in figure 2F, support decreased kinesin-2 entry into cilia. If entry were unaltered, we would expect increased KAP-GFP fluorescence in the cilia over time in BCI-treated cells.


      Reviewer 2:

      The authors state that the decreased length of cilia following BCI treatment could be a result of reduced assembly or increased disassembly. Disruptions to cilia assembly and disassembly are not mutually exclusive and both must be evaluated. The authors do not test whether cilia disassembly is disrupted by BCI treatment and therefore cannot conclude that BCI solely disrupts cilia assembly.

      While effects on disassembly remain a possibility, the striking inability of cilia to grow from zero length upon deciliation and the effects on anterograde IFT seen in the TIRFM assays suggest an effect on assembly. There may be effects on disassembly, and likely on many other cilia-related processes not investigated here, but we feel it remains accurate to conclude that assembly is affected by BCI treatment.

      Reviewer 3:

      - If time allows, in addition to examining NPHP4, it would be beneficial to examine other TZ/TF markers such as CEP164 to confirm if BCI partially disrupts the TZ.

      Given the known outcomes of NPHP4 loss in Chlamydomonas (Awata et al., …) in affecting ciliary protein composition, we suspect that the changes in NPHP4 abundance at the transition zone will have a significant impact, and we agree it would be interesting in a follow-up study to see how other transition zone proteins (particularly ones known to interact with NPHP4 or others critical for TZ function) are impacted following BCI treatment.


      MINOR COMMENTS:

      - I suggest moving supplemental figure 1 to the main figure (Fig. 1?) so that the readers appreciate the author's careful examination of BCI through this manuscript.

      Thank you for your suggestion and kind critique. We have included this data in the supplement for consistency with mutant data in all of the other supplemental figures.


    1. Create a customer service vision

      We all face obstacles that can make it hard to focus on delighting our customers. Think about the challenges you face in your own daily work. It might be angry customers, difficult coworkers, bad policies, defective products, or even personal problems. We're not supposed to let these things get to us, but it's not easy.

      The most important thing you can do to overcome these challenges is to create what's called a personal customer service vision. This is a statement that describes the way you want your customers to feel when you serve them. It can act like a compass to point you in the right direction whenever you face a challenging situation.

      Let's say you work in a college financial aid office. Your job is helping students apply for financial aid. How would you want the students you serve to feel? Your personal vision statement might be: I want to help students achieve their educational dreams. A powerful statement like this might remind you to go beyond just processing financial aid paperwork. You might even take an extra moment to help a confused student or suggest alternative options when the specific financial aid they're requesting is not available. We can't make every customer happy, but a personal service vision can inspire us to try.

      Here's a powerful visualization exercise to help you create your own vision and bring it to life. I recommend you download the personal vision worksheet to help you out. Start by imagining a customer you helped. How would you like them to feel about your service? Perhaps your company or team has an overarching customer service vision that can guide you, or you can just rely on your own personal service values. Next, write a thank you letter to yourself from that imaginary customer. Be sure to describe what you did and how it made them feel. Here's an example that I wrote:

      Dear Jeff, thank you for being our trusted partner. Your commitment to helping us achieve our goals is the reason you are the first and only phone call when we need help improving customer service. Thank you.

      Finally, read your thank you letter at the start of each day for three weeks and try to receive this feedback from a real customer. The feedback might not be a letter. It could be verbal feedback, an email, or even a comment in a customer service survey. People are often amazed when they receive feedback from a customer that nearly matches their thank you letter word for word. This exercise is effective because it helps you visualize the type of service you'd like to provide. This visualization can help you stay focused on providing outstanding service throughout each and every day.

      customer service

    1. Course overview

      Instructor: Jeff Toister, author, consultant, trainer. Course details: 1h 22m, beginner, updated 11/18/2020, rated 4.7 (12,712 ratings).

      Do your customers feel valued? When they do, they keep coming back. When they don't, your business suffers. In this course, writer and customer service consultant Jeff Toister teaches you the three crucial skill sets needed to deliver outstanding customer service and increase customer loyalty. Learn how to build winning relationships, provide the right assistance at the right times, and effectively handle angry customers. He also shares ways to find out what your customers really think about your service, and use their feedback to improve.

      Learning objectives: explore how you can use customer surveys to build rapport; name three ways you can use active listening to serve your customers more effectively; identify the different types of needs that must be addressed in order to solve problems; explain the benefits of taking ownership of a problem; define "preemptive acknowledgment" and recognize its impact on customer service; list three types of attitude anchors and explain their differences.
*Note: By joining this group, your profile will be visible to other group members but your network will NOT be notified. Join Customer Service Skills & Management - LinkedIn Learning 17,488 Members This group is for learners who are interested in Customer Service Skills & Management and want to connect, share, collaborate, learn, and teach in an open, safe environment. Learning is fun when done together. Let’s make it great and enjoy the conversation. *Note: By joining this group, your profile will be visible to other group members but your network will NOT be notified. Join Show all Learning Groups 0 Notes taken Press Enter to save No notes saved yet Take notes to remember what you learned! Export your notes Get your notes for this course which includes description, chapters, and timestamps Download Filter results by video selected In this video Determine the value of outstanding customer service Selecting transcript lines in this section will navigate to timestamp in the video - When people think about outstanding customer service, there's often an employee who goes above and beyond to be the hero. Think about an experience where you received outstanding customer service. There's a good chance that an individual employee went above and beyond to make it happen. Have you ever wondered why they gave that extra effort? People go above and beyond, because they get something out of it. Even if it's just the satisfaction of knowing they made a difference. Let's explore some of the ways you, your coworkers and even your organization might benefit when you make the effort to provide outstanding customer service. You can download the value of outstanding service worksheet to help you, or just jot down some notes on a blank piece of paper. A good place to start is to look at how you personally benefit from providing your customers with service that exceeds their expectations. Make a list of what you gain from putting in that extra effort. 
It may help to think about a specific situation where you went out of your way to delight a customer. Here's some examples that might be on your list. Happy customers are easier to serve. You enjoy helping people, and you feel a sense of accomplishment when you are able to help someone else solve a problem. We can also have a positive impact on our coworkers when we personally provide outstanding service. Try making a list of ways your extra effort might benefit the people you work with. This time, it might be helpful to think about how you felt when one of your coworkers delivered outstanding service. Here's some examples that might be on that list. Your coworkers will have to fix fewer problems. Great service brings positive energy to the entire team, and you can be a positive role model to your colleagues. Customers often look at the people who serve them as representatives of the entire organization. As a third step in this exercise, make a list of benefits your organization receives when you personally provide outstanding customer service. Here are a few examples that might be on that list. Increased profits, retained customers, and positive word of mouth from customers who refer your organization to others. Hopefully this exercise helped you identify some reasons that providing outstanding service is important to you. Whenever you have a tough day, reread the list you've just created and reflect on why you worked so hard to help your customers. Customer service isn't always easy. But the important thing to remember is that you can choose to give that extra effort to be outstanding.

      customer service

    1. Define outstanding customer service - I often get very different answers when I ask people to define outstanding customer service. That's why I've come up with a universal definition that can be applied to any situation. Outstanding customer service is service that exceeds your customer's expectations. To help explain this a bit more, it's helpful to look at the differences between good, poor, and outstanding service. Good service occurs when a customer's expectations are met. For example, if your customer expects you to be friendly, and you're friendly, then you'll have provided good service. The challenge with good service is it's not very memorable. Let me give you an example. Imagine you walk into a room and turn on the lights. You probably won't give it a second thought when the lights come on. That's like good service. It's fine. It's what happens most of the time, but it's not very memorable. Poor service occurs when the experience is worse than the customer expected, such as being rude when a customer expects you to be friendly. Unlike good service, poor service is memorable because we tend to remember things that are different than what we expect. You'd definitely notice if you walked into a room, turn on the light switch and the lights did not come on. Outstanding customer service is service that exceeds your customer's expectations. So if your customer expects you to be friendly, you might find a way to go beyond that by making your service more personal. You could try using their name, engaging in a little light conversation, or offering a genuine and sincere compliment. Now one challenge is customers often have different expectations. So customers could have the same experience and still rate it differently. Imagine an online clothing store has a mix-up in the warehouse and accidentally ships the wrong color item to three different customers. 
The customer service rep handles each call the same way. - Oh, I'm sorry to hear that. Well, I'm going to help you do an exchange and get you the item you ordered. - Now notice how each customer reacts to the same service. - An exchange! I was supposed to use this as a gift tonight. I can't use this (stutters). - Well, an exchange would be great. I don't know if there was a problem with shipping or if I just ordered the wrong color, so I really appreciate your help. - No, this is awesome. This is awesome. I really like it. I am keeping this one. I also want the color I ordered, though. - Okay how do you think each customer felt about the service they received? Keep in mind, every customer has their own unique perspective. The first customer felt he received poor service because an exchange wouldn't solve his real problem, which was giving a gift that night. The second customer felt she received good service because she was just happy to get the issue corrected. The third customer felt he received outstanding service because he unexpectedly received an additional item he liked. One of the unique challenges of customer service is your customers decide how they feel. Sometimes they feel great when you don't do anything special. Other times, they're angry, even after you try your hardest. My suggestion is to treat each customer as an individual and try to understand their own unique needs. Here's an exercise that can help. Think about the last three customers you served. How do you think they felt about your service? Did you meet their expectations? Were they disappointed in some way? Or did you manage to go beyond what they expected? Thinking about service from your customer's perspective can help you identify more ways to deliver outstanding customer service.

      customer service

    1. “What is really frustrating about it is that [Hickel] publicly gives the impression we don’t really know, which given all I said … and economic historians said in hundreds of great research publications is absolutely not true,” he wrote to me. “That’s not fair to us I feel, it is not fair at all to the research that is out there, it just really misinforms the readers. The Guardian article that says ‘it’s all wrong’ will stick.”

      This is an unfortunate trend I've noticed in economic discussions: people make critiques of economic research, not realizing (or perhaps, purposefully ignoring) that the research itself has already acknowledged those critiques.

      Scott Alexander talks about this in the beginning of his piece Yes, We Have Noticed the Skulls.

    1. https://niklas-luhmann-archiv.de/bestand/zettelkasten/zettel/ZK_2_SW1_001_V

      One may notice that Niklas Luhmann's index within his zettelkasten is fantastically sparse. Consider the index entry for "system", which links to only one card. For someone who spent a large portion of his life researching systems theory, this may seem quite bizarre.

      However, it's not as odd as one may think given the structure of his particular zettelkasten. The single reference gives an initial foothold into his slip box, where shuffling through the cards beyond that idea will reveal a number of cards closely related to the topic which follow it. Regular use and work with the system would have given Luhmann a good memory of its contents, and searching through threads of thought would have potentially sparked new ideas and threads. Thus he didn't need to spend the time and effort to heavily index each individual card; he just needed a starting place and could follow the links from there. This minimized the indexing work he needed to do regularly, but simultaneously makes it harder for the modern person who may wish to read or consult those notes.

      Some of the difference here is the idea of top-down versus bottom-up construction. While thousands of his cards may have been tagged as "systems" or "systems theory", over time and with increased scale such broad tags would have become nearly useless as a construct. One may instead consider increasing levels of sub-topics, but these too are generally of little use for (manual) search, so the better option is to index only at the smallest level of link (and/or card titles), each of which is likely to link to only 3-4 other locations outside of the card just before it. This greater specificity scales better over time for the individual user who is broadly familiar with the system.


      Alternatively, for those in shared digital spaces who maintain public-facing (potentially shared) notes (zettelkasten), such sparse indices may not be as functional for the readers of those notes. New readers, entering such material generally without context, will feel lost or befuddled, as they may need to read hundreds of cards to find and explore the sorts of ideas they're actively looking for. In these cases, more extensive indices, digital search, and improved user interfaces may be required to help new readers find their way into the corpus of another's notes.


      Another idea related to digital, public, shared notes is that of shared taxonomies. What sort of word or words would one want to search for broadly to find the appropriate places? Certainly widely used systems like the Dewey Decimal System or the Universal Decimal Classification may be helpful for broadly crosslinking across systems, but this will take an additional level of work from the individual publishers.

      Is it worthwhile to do this in practice, or is it make-work? Perhaps not in analog spaces, but what about digital spaces, whose affordances generally make them more easily searched as a corpus?


      As an experiment, attempt to explore Luhmann's Zettelkasten via an entryway into the index. Compare and contrast this with Andy Matuschak's notes, which have some clever cross-linking UI at the bottoms of the notes but are missing simple search functionality and have no tagging/indexing at all. Similarly, look at W. Ross Ashby's system (both analog and digitized) and explore the different affordances of these two separately designed structures---the analog one by Ashby himself, the digital one by an institution after his death.

    1. Review coordinated via ASAPbio’s crowd preprint review

      This review reflects comments and contributions by Sónia Gomes Pereira, Rachel Lau, Sam Lord, Sanjeev Sharma, Parijat Sil. Review synthesized by Richa Arya.

      General comments

      It may be helpful to elaborate on how it is established that CHIP mobility is dependent on activity. The conclusion in the paper has been primarily drawn from the catalytically inactive H260Q mutant, which is less mobile. However, the fact that the mutant's puncta are brighter and larger than the wild type's, and that it recovers slowly, also indicates that the protein might be inherently more prone to aggregation upon heat shock.

      Related to the above point, under conditions such as VER treatment and Act-D treatment, the nucleolar recruitment is unaltered but recovery is affected (which implies mobility may be affected). This leads to the accumulation of CHIP in the nucleus. In these scenarios, it may be relevant to report on the status of wild-type CHIP activity. Conducting the ubiquitination assay as in Figure 5A with Act-D and VER treatment would be informative. If no difference in ubiquitination is observed, it can be concluded that it is not the change in CHIP mobility that affects its activity, but rather its activity that promotes CHIP mobility/dynamics (the conclusion from Figure 5).

      o Figure 1: The question arises as to why the control and recovery conditions show puncta in panel C, but not the HS condition. Also, to make it easier to appreciate the nucleolar localization of CHIP in the HS condition, zoomed-in regions and overlay images would be useful.

      Figure 1b: To support interpretation of the results, it would be helpful to highlight some examples of the nucleolar localization of CHIP. Additionally, it looks like there are specific dots (that could be condensate-like) in the Control and Recovered cells, but not in the Heat Shock cells, in panel B. Maybe some quantification such as number of dots per cell, intensity, or size could accompany the images. Similar parameters of the condensate structures in the nuclei of the transiently transfected cells could be quantified.

      Figure 1: Quantifications such as 2B and 2C could also be done for Figure 1, for both Hsp70 and CHIP.

      Figure 1E: ‘K30A mutant exhibited impaired CHIP migration to nucleoli after heat shock (Fig. 1E)…’ How strong is this impairment? Could it be quantified, either by fluorescence intensity or via Western blot of the different cellular fractions?

      o Figure 2: It would be helpful to have additional clarification on what the different parameters such as '% of cells with EGFP-CHIP in the nucleolus' or 'CHIP intensity in the nucleolus' represent, as well as clarification on the transition from measuring CHIP nucleolar-to-nucleus intensity ratios for immunostaining (as in Fig S1E) to measuring just nucleolar CHIP intensities in the main Figure for the EGFP-CHIP overexpression experiments. Perhaps a western blot showing HSP70 expression with VER might be helpful in demonstrating that total protein expression is not affected and that it is only its activity being affected.

      ‘a small molecule inhibitor of HSP70…’ Some suggestions alongside the loss-of-function assays such as knockdown and inhibitor treatment:

      What happens to Hsp70 and thereby CHIP translocation to the nucleus in cells with high, medium versus low levels of HSP70 expression? Do the high-expressing cells show more enhanced CHIP recruitment to the nucleolus? Can it be quantified as to how correlated the efficiency of recruitment of CHIP is to the expression level of Hsp70? How does the nucleolar translocation of Hsp70 itself correlate with its expression level?

      Figure 2a: It is clear that HSP70 co-localises with CHIP upon heat shock. Overlaid images might be better to highlight this, but the use of green and red is not ideal for colour-blind readers. The colour palette may be changed for the bar graphs too (2d, e).

      Figure 2b,c: There is a question about the statement that mutant CHIP was unable to localise in the nucleoli due to lack of HSP70 binding in Fig 1E, since in Fig 2B and 2C CHIP was able to migrate into the nucleoli (albeit to a lesser extent) with HSP70 knockdown. Maybe images corresponding to this experiment might help as well, to allow the reader to see the difference in localisation. It is mentioned that CHIP auto-ubiquitination is important in its localisation in Fig 5, so does the CHIP K30A mutant necessarily verify that the lack of HSP70 binding is causing impaired migration to the nucleus in Fig 1E? Could K30A also affect its auto-ubiquitination? Suggest referencing supplementary figure 2 alongside Fig 2B and 2C, and changing the dots in this graph to red, to make it consistent with panel F.

      Figure 2d,e: Bar plots could be replaced with scatter plots showing individual data points, as done in Supp. Fig 1E. Adding t1/2 values with the FRAP traces would support the changes observed for recovery times across conditions. Calculating and reporting the mobile fraction would also be helpful.
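      To illustrate what reporting t1/2 and mobile fraction could involve, here is a minimal, model-free sketch. The trace, its normalization, and the tail-averaging window are all assumptions for illustration, not the authors' method:

```python
def frap_metrics(times, intensity, pre_bleach=1.0):
    """Estimate recovery half-time and mobile fraction from a FRAP trace.

    times: seconds after the bleach; intensity: background-corrected ROI
    signal normalized so that pre_bleach is the pre-bleach level, with
    intensity[0] the first post-bleach value. The recovery plateau is
    estimated as the mean of the last five points instead of a model fit.
    """
    post_bleach = intensity[0]
    tail = intensity[-5:]
    plateau = sum(tail) / len(tail)
    # Mobile fraction: recovered signal relative to what was bleached away.
    mobile_fraction = (plateau - post_bleach) / (pre_bleach - post_bleach)
    # t1/2: first time point at which half of the recovery has occurred.
    half_level = post_bleach + 0.5 * (plateau - post_bleach)
    t_half = next(t for t, i in zip(times, intensity) if i >= half_level)
    return {"t_half": t_half, "mobile_fraction": mobile_fraction}
```

      A single-exponential fit (I(t) = I0 + A(1 - exp(-kt)), with t1/2 = ln 2 / k) would be the more standard route, but even this crude version makes the two quantities comparable across conditions.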

      Figure 2f,g: Suggest updating the figure legend to clearly distinguish both curves. Some additions may complement the FRAP analysis presented:

      • How does the FRAP mobility of CHIP compare between absence and presence of heat shock?
      • How does the FRAP mobility of CHIP compare in the recovery phase in the presence and absence of VER?
      • Is the CHIP mobility different in nucleolus versus nucleus?

      ‘and HSP70 inhibition did not significantly reduce its dynamics (Fig. 2F)…’ Would there be any change in CHIP dynamics in siHSP70 cells? It would be helpful to mention this following Fig 2B/C. Maybe use 'mobility' instead of 'dynamics', to be more specific.

      o Figure 3: It will be helpful to include an overlay/merged image of the two channels, and to explain in the legend how the measured correlation coefficient is obtained. It would be nice to see what kind of sub-structures show the maximum colocalization.

      Fig 3c: The HS+Rec condition should show a loss of correlation between CHIP and NPM1 and is an important control in this figure. Comparison with Fibrillarin is good; demonstrating a loss of correlation between NPM1 and CHIP themselves under different conditions, along with data for Ctrl-only conditions, would also add value.

      o ‘it altered CHIP distribution, which more prominently overlapped with Act D-induced NPM1 ring formations (Fig. 4D)…’ Can this be quantified? Maybe it will show more pronounced colocalization compared to heatshock alone.

      o 'this observation suggests that proper nucleolar assembly may be necessary for CHIP dynamics'. It may be worth specifying what 'dynamics' refers to here:

      1. Mobility measured via FRAP
      2. Translocation efficiency, measured via intensity or the ratio of nucleolus/nucleus intensity. FRAP measurement of CHIP may be helpful to conclude about the mobility of CHIP in the nucleolus upon heat shock, in the presence or absence of Act D pre-treatment. A change in mobility may support the lack of translocation during the recovery phase in the presence of Act D.

      o Figure 4: (a) It may be worth commenting on why the Hoechst staining looks different between the Control and the Act-D conditions. Fig 4D: It could be helpful to add images of NPM1 localization in cells treated with Act D but not under heat shock. In other words, are these NPM1 rings specific to the heat shock response? The size of the cells and the nucleus are different for the HS versus Act-D+HS panels. If the scale bar is consistent and this is a normally observed morphological change upon Act-D treatment, it might be helpful to note this size difference in the legend.

      o ‘We found that the activity of CHIP is not indispensable for heat shock-induced migration to the nucleolus (Fig. 5B). However, FRAP analysis of the nucleolar CHIP H260Q mutant showed a decrease in its dynamics compared to CHIP WT…’ Maybe the fragment could be rewritten for clarity (e.g. 'is dispensable'). What happens to the mutant CHIP H260Q localization upon recovery? Is it slower than WT? Is more mutant CHIP retained in the nucleolus upon recovery?

      o Figure 5: Suggest showing a wt image as comparison, in panel B. An alternate interpretation for the observations with H260Q mutant could be that the mutation leads to instability and misfolding of CHIP (as suggested in the paper) which leads to increased aggregation (larger and brighter droplets, low mobility) upon heat shock with itself and other interacting proteins. This interpretation does not need to invoke a loss of ubiquitination activity as a cause, it could be another consequence of misfolded CHIP.

      Figure 5c: How does the mobility of wild-type CHIP compare with the H260Q mutant in the nucleus or in the absence of heat shock? If the mobility is the same during pre-heat shock/pre-translocation to the nucleolus, the wild-type and mutant protein have inherently similar dynamics. And if this gets altered only in the nucleolus of heat-shocked cells, it would support the conclusion that it is the activity of CHIP that helps retain its mobility in the nucleolus and possibly prevent its aggregation in this compartment.

      Figure 5f: If there were two independent experiments, can both be represented? Or was the data pooled from the two experiments? Suggest representing the data as two points each for CHIP wild type and mutant, from the two independent experiments.

      Figure 5g,h,i: Dot plot overlay on the boxplot might be nice to see the spread of datapoints.

      o ‘Interestingly, sizeable intra-nucleolar CHIP droplet-like structures could be observed after overnight heat shock in cells expressing the CHIP H260Q mutant, outnumbering their WT protein counterparts (Fig. 5E-I)…’ In Figure 1C some bright foci are also observed in control and recovered cells. Are these similar to the "droplet-like structures" described here?

      o ‘These differences between CHIP WT and mutant assemblies may stem from the alterations in CHIP H260Q dynamics within the nucleolus (Fig. 5C and D)’. Similar measurement as in Fig 5C could be done upon overnight heatshock to support this statement.

      o ‘Surprisingly, we found comparable redistribution of all CHIP variants to nucleoli during heat shock, suggesting an…'. Is this a cell line-specific difference, or could it be due to differences in approach, i.e. stable cell line vs. transient overexpression? Similar transient expressions in HeLa may help clarify this.

      o Based on Fig S1E, it appears there might be both an HSP70 activity-dependent (smaller) and HSP70 activity-independent (larger) contributions to CHIP localization. VER treatment reduces CHIP relocalization to the nucleus by a small but significant amount both in control and HS-treated cells.

      o Cell transfection - Suggest reporting the confluency of the cells before transfection (or at which they were seeded).

      Methods - In Figs 3C and 5G-I, there is a concern about the statistical approach of calculating p-values based on multiple measurements (nuclei) within each sample. The t-test and ANOVA assume that each measurement is independent, and multiple nuclei within the same sample are not independent. Recommend either not reporting p-values or averaging the values within each sample and calculating the p-value using those sample-level means. For more information, see https://doi.org/10.1371/journal.pbio.2005282 and https://doi.org/10.1083/jcb.202001064.
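      As a sketch of the sample-level-means recommendation above (with invented numbers, purely for illustration): collapse each biological replicate to its mean, then run the test on those means, so that each replicate rather than each nucleus counts as one observation.

```python
from statistics import mean

def sample_level_means(measurements_by_sample):
    """measurements_by_sample: {sample_id: [per-nucleus values]}.

    Returns one mean per biological replicate; a t-test/ANOVA should
    then be computed on these means (n = number of replicates), not on
    the pooled per-nucleus measurements, which are not independent.
    """
    return [mean(values) for values in measurements_by_sample.values()]

# Invented example: three replicates per condition, several nuclei each.
control = {"rep1": [1.0, 1.2, 0.9], "rep2": [1.1, 1.0], "rep3": [0.95, 1.05, 1.0]}
treated = {"rep1": [1.4, 1.5], "rep2": [1.3, 1.45, 1.5], "rep3": [1.6, 1.35]}

control_means = sample_level_means(control)  # one value per replicate
treated_means = sample_level_means(treated)  # one value per replicate
```

      The p-value would then come from, e.g., scipy.stats.ttest_ind(control_means, treated_means), with n = 3 per group.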

    1. The language is academic, which has contributed to the confusion around the topic, but it is clear enough that most developers should be able to understand it.

      This I disagree with. Even Fielding's "... must be hypertext driven" rant (which is great and in my bookmarks) is sabotaged by the curse of knowledge. If you know what REST is—and how most "REST" isn't REST (including the things that try to stand out as doing REST right and still just doing it wrong, but with nuance)—then "REST APIs must[...]" makes sense. If you don't already get it, though, then it's nigh impenetrable—funnily enough, you need an a priori understanding of REST to be able to understand these attempts to explain REST, and what Fielding is trying to communicate about REST not requiring *a priori* knowledge!

    1. "Maybe. I might just live out of the car for awhile." Ed snorts and it makes Stede smile. They both know he wouldn’t make it two days. "Maybe I'll turn it into a boat, take to the sea." This is a familiar conversation, one they used to have all the time, whispered late into the night. One that they haven't had as often since they've gotten older. Ed picks up the thread like they've never let it drop. "Why stop at a boat? Strap a rocket to it and leave the planet."

      I am obsessed with this little callback to their first conversation. Feels so bittersweet in that it's a symbol of their first ever phone call, but there's a sadness to the acknowledgement that everything is changing now that they're getting older. The space and the ocean are not the place of their wild childhood fantasies of curiosity and heroism. They become escape. An escape from a life that is beating the curiosity and heroism out of them. However, this is also so beautiful and lovely because it shows how this link between them, this bond and connection, has been there since their first ever phone conversation, and it will stay with them even as life throws misery and rejection at them. Lovely.

    1. Constantia lay like a statue, her hands by her sides, her feet just overlapping each other, the sheet up to her chin. She stared at the ceiling.

      It's interesting that Constantia is the one lying down like a corpse while it is the Colonel who is dead. It seems almost as if she is in shock and her method of coping is to mimic the dead.

    2. the little creature

      I found it strange the way they dehumanized the "little woman" who had invited Laura into the kitchen. Maybe it's a way of showing how the lower classes were often not even treated as humans, but as some other creatures that were just meant to work and serve the upper classes.

    3. “Stop the garden-party? My dear Laura, don’t be so absurd. Of course we can’t do anything of the kind. Nobody expects us to. Don’t be so extravagant.”

      It's "absurd" how death is taken so lightly and even joked about when the death had occurred so close to their home. The language "don't be so extravagant" used by Jose is intended to gaslight Laura into feeling that she is the person in the wrong: it's her fault that she is stopping the grand garden party. Shortly later in the text, the mom's response is just as "absurd" because her views directly align with Jose's. Does she not sympathize with Scott at all? He had a wife and five kids.

    1. The Tiger: A True Story of Vengeance and Survival by John Vaillant Holy shit, this book is good. Just holy shit. Even if it was just the main narrative–the chase to kill a man-eating tiger in Siberia in post-communist Russia–it would be worth reading, but it is so much more than that. The author explains the Russian psyche, the psyche of man vs. predator, and the psyches of primitive peoples and animals in such a masterful way that you're shocked to find 1) that he knows all this, and 2) that he fit it all into this readable and relatively short book. You may have heard about the story on the internet a while back: a tiger starts killing people in Russia and a team is sent to kill it (Russia is so fucked up, they already have a team for this). At one point, the tiger is cornered and leaps to attack the team leader…and in mid-air the soldier's rifle goes into the tiger's open jaws and down his throat all the way to the stock, killing the tiger at the last possible second. The autopsy later revealed that the tiger had been shot something like a dozen times during its life and lived. The story is very similar to that of the Tsavo man-eaters, which was turned into the underrated Val Kilmer movie The Ghost and the Darkness. There are all sorts of well-selected threads from evolutionary psychology and biology in this book, and it makes the book a self-educator's dream. You can pick and choose which ones you want to follow next–trusting safely that the author has pointed you in an interesting and valuable direction. But that's just the meta-stuff that is a bonus with this book, and it's worth pointing out only because the rest of the book is just so fucking interesting and exciting.

      Such an awesome book #toread

    1. If this state of flow is broken due to software being unresponsive or slow to respond to user input, they are distracted from the task at hand that they are trying to accomplish

      Slow software, by this definition, is slow both technically and in its design - if a design flaw impedes the user from taking an action ergonomically, it's just as bad as if the program were not performant.

      Unfortunately, most programs handle both incorrectly - they don't provide an ergonomic way to accomplish a desired task, and they introduce a ton of computational latency when trying to do so. Latency is bad! But design is lower hanging fruit.

    1. While we don’t know which graph we want to use yet, we are debating between a line and a bar graph. We hope that our research findings will give us more clarity on which to use. For certain there will be at least three graphs: one that shows our public opinion findings, one that shows our private opinion findings, and one that combines the two.

      I like the topic you have chosen, as I have just recently picked up a book that touches on this as well. It's called "Keep the Damned Women Out": The Struggle for Coeducation by Nancy Weiss Malkiel, and it may be of use in this project. As for your presentation, perhaps one of your graphs could be one in which you use Voyant to graph how frequently certain words are used in your sources in comparison to the sources' date of publication. For example, maybe the word "feminism" starts to show up more after 1969, which would lead one to believe that as schools became co-ed, ideas of gender equality grew on these campuses.
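      A rough sketch of that frequency-over-time idea, outside of Voyant (the sources, years, and counts here are invented for illustration):

```python
from collections import Counter, defaultdict

def term_frequency_by_year(sources, term):
    """sources: list of (publication_year, text) pairs.

    Returns {year: total occurrences of term}, ready to plot as a
    line or bar graph of word usage over time.
    """
    frequency = defaultdict(int)
    for year, text in sources:
        words = Counter(text.lower().split())
        frequency[year] += words[term]
    return dict(frequency)

# Invented example sources:
sources = [
    (1965, "coeducation debate on campus"),
    (1972, "feminism and coeducation feminism rising"),
]
result = term_frequency_by_year(sources, "feminism")  # {1965: 0, 1972: 2}
```

      Real sources would need date metadata and tokenization that handles punctuation, but even this simple count per year gives the x/y pairs for a line or bar graph.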

    1. core conjecture of this line of work is that it's not just about the tools, it's not just about our motivation; it's about the infrastructure, it's about the unit of analysis. Why does Google Scholar work?
      • core conjecture : wrong "unit of analysis"
    1. If a person should want to commune with infinity once more, then it’s enough to go online.

      Did you not just identify the internet as something that gives us a sense of finitude and lack of variety?

    1. Participation inequality plagues the internet. Only 1% of people on any given platform create new content. 99% only consume.

      Many think that's just what happens when human communities scale. But maybe it's just what happens in an internet built for advertising. Consider that:

      • All of the internet's interfaces—social feeds, search bars, news sites—are optimized for consumption.
      • Interfaces for creating new content, particularly knowledge, are antiquated. Word-processors look like they did forty years ago, disconnected from the internet and any content you might write about.

      Which means: writing requires hours of searching and sorting. Knowledge creation is painful for the people best at it, and inaccessible to most others.

      What would it take to make writing accessible? Maybe: a totally new kind of interface. Ideally: a word-processor that pulls in the information you need as you type. And what would that take?

      • Unprecedented NLP to make connections as you type,
      • A word-processor redesigned around links, and
      • A highly technical team focused on a non-technical market.

      If achieved, it would:

      • save writers hours,
      • make knowledge production accessible to anyone who knows how to type, and
      • lay the groundwork for a mainstream knowledge economy.
    1. In addition, “negative results” can be published easily, helping avoid other researchers wasting time repeating analyses that will not return the expected results [DL10].

      It is debatable if negative results can be published easily. Of course they should, but recent studies have shown that it's often not achievable to get them published in peer-reviewed journals even if researchers are willing to. (For example here). Maybe I have just misunderstood the context of this sentence? Of course reproducibility makes it easier to publish both positive and negative results in general. But the linked article relates to making them publishable in reputed journals and that is still difficult in many fields.

    1. Author Response

      Reviewer #1 (Public Review):

      Bohère, Eldridge-Thomas and Kolahgar have studied the effect of mechanical signalling in tissue homeostasis in vivo, genetically manipulating the well-known mechanotransducer vinculin in the adult Drosophila intestine. They find that loss of vinculin leads to accelerated, impaired differentiation of the enteroblast, the committed precursor of mature enterocytes, and stimulates the proliferation of the intestinal stem cell. This leads to an enlarged intestinal epithelium. They discriminate that this effect is mediated through its interaction with alpha-catenin and the reinforcement of the adherens junctions, rather than with talin and integrin-mediated interaction with the basal membrane. These results align well, as the authors note, with previous observations from Choi, Lucchetta and Ohlstein (2011) doi:10.1073/pnas.1109348108. Bohère et al then explore the impact that disrupting mechano-transduction has on the overall fitness of the adult fly, and find that vinculin mutant adult flies recover faster after starvation than wild types.

      The main conclusions of the paper are convincing and informative. Some important results would benefit from a more detailed description of the phenotypes, and others could have alternative explanations that would warrant some additional clarification.

      1) - Interpretation of phenotypes in vinc[102.1] mutants

      The paper presents several adult phenotypes of the homozygous viable, zygotic null mutant vinculin[102.1], where the fly gut is enlarged (at least in the R4/5 region). In many cases, they correlate this phenotype with that of RNAi knockdown of vinculin in the gut induced in adult stages. This is a perfectly valid approach, but it presents the difficulty of interpretation that the zygotic mutant has lacked vinculin throughout development and in every fly tissue, including the visceral mesoderm that wraps the intestinal epithelium and that also seems enlarged in the vinc[102.1] mutant. So this phenotype, and others reported, could arise from tissue interactions. To me, the quickest way to eliminate this problem would be to express vinculin in ISCs and/or EBs in the vinc[102.1] background, either throughout development or after pupariation or emergence, and observe a rescue.

      We agree with the reviewer that we cannot exclude additional vinculin role(s) in other tissues during or after development that might have an impact on the intestinal epithelium. Our attempts to express a full-length Vinculin construct (Maartens et al, 2016) in the vinc102.1 flies, either in adulthood or throughout development, were not very conclusive: although we observed some degree of rescue, it was not fully penetrant. This was in contrast to the complete rescue observed with the genomic rescue of vinculin. Thus, it is possible that some form of tissue interaction contributes to the phenotype observed, for example if vinculin loss affects muscle structure. Alternatively, just like it was shown that too much active vinculin is detrimental to the fly (Maartens et al, 2016), our experiment suggests that too much vinculin may be deleterious to the intestine.

      In any case, because of cell-specific knockdowns in the adult gut, we are confident that EB reduction of vinculin levels or activity is sufficient to accelerate tissue turnover, at least in a specific portion of the posterior midgut. We have amended the text to acknowledge a role for tissue interactions (see page 6 (end of first paragraph), page 7 (start of last paragraph), page 12 (starvation experiments).

      An experiment where this is particularly difficult is with the starvation/refeeding experiment. The authors explored whether the disruption of tissue homeostasis, as a result of vinculin loss, matters to the fly. So they tested whether flies would be sensitive to starvation/re-feeding, where cellular density changes and vinculin mechano-sensing properties may be necessary. They correctly conclude that mutant flies are more resistant to starvation, and suggest that this may be due to the fact that intestines are larger and therefore more resilient. However, in these animals vinculin is absent in all tissues. It is equally likely that the resistance to starvation was due to the effect of Vinculin in the fat body, ovary, brain, or other adult tissues singly or in combination. The fact that the intestine recovers transiently to a size slightly larger than that of the fed flies seems anecdotal, considering the noise within the timeline of fed controls. I am not sure this experiment is needed in the paper at all, but to me, the healthy conclusion from this effort is that more work is needed to determine the impact of vinculin-mediated intestinal homeostasis in stress resistance, and that this is out of the scope of this paper.

      Please see the new data presented in Figure 8A-B (text page 12).

      2) - Cell autonomy of the requirement of Vinculin and alpha-Catenin

      Authors interpret that Vinculin is needed in the EB to maintain mechanical contact with the ISC, restrict ISC proliferation through contact inhibition, and maintain the EB quiescent. This interpretation explains seemingly well the lack of an obvious phenotype when knocking down vinculin in ISCs only, while knockdown in ISCs and EBs, or EBs only, does lead to differentiation problems. It also sits well with the additional observation that vinculin knockdown in mature ECs does not have an obvious phenotype. However, a close examination makes the results difficult to explain with this interpretation only. If the authors were correct, one would expect that in mutant clones, eventually, vinculin-deficient EBs will be produced, which should mis-differentiate and induce additional ISC proliferation. However, the clones only show a reduction in ISC proportions; the most straightforward interpretation of this is that vinculin is cell-autonomously necessary for ISC maintenance (which is at odds with the phenotype of vinculin knockdown using the ISC and ISC/EB drivers).

      We apologise that we were unclear in the text. With hindsight, the confusion may have been caused by our describing the phenotype of MARCM clones before reporting the accumulation of EBs in the vinc102.1 guts. Therefore, we swapped these two sections and improved the description of these experiments in the manuscript (see section: “The pool of enterocyte progenitors expands upon vinculin depletion” pages 6-8).

      In brief, we do not think that our results are at odds with the phenotype of vinculin knockdown using the ISC and ISC/EB drivers - we realise the text was misleading and hope to have clarified our observations in the revised manuscript (pages 7 and 8). From cell conditional RNAi experiments, like the reviewer, we would predict that vinculin knockdown or loss of function in mitotic clones (MARCM experiments, Figure 4E-G) will induce accelerated differentiation of vinculin deficient enteroblasts, which in turn will increase proliferation. We observed that vinc102.1 or vinc RNAi mitotic clones contained a similar number of cells compared to control clones, but a reduced proportion of stem cells (Figure 4G). We interpret this as indicating that to maintain an equivalent clone size, stem cells must have divided more frequently, with some divisions producing two differentiated daughter cells. This type of symmetric division would increase the EB pool (as seen in Figure 4-figure supplement 2B), at the expense of the ISC population, in turn decreasing long term clonal growth potential. Altogether, the results obtained with MARCM clones highlight changes in tissue dynamics compatible with those observed with cell-specific vinculin knockdowns.

      Also, from the authors' interpretation, it would follow that the phenotype of vinculin knockdown in ISCs+EBs and in EBs only should be the same. However, in ISCs+EBs vinculin knockdown, differentiation accelerates, which is likely accompanied by increased proliferation (judging by the increase in GFP area; PH3 staining would be more definitive).

      Indeed, the accelerated differentiation observed with esgGal4>UAS VincRNAi is accompanied by increased proliferation with the two independent RNAi lines used. We have added this result in Figure 1-figure supplement 1G (and in text, page 5).

      This contrasts with the knockdown only in EBs, which leads to accumulation of EBs due to misdifferentiation, and increased proliferation, mostly of ISCs, as measured directly with PH3 staining, but not additional late EBs or mature ECs. The authors call this "incomplete maturation due to accelerated differentiation". I think that one should expect to find incomplete differentiation/maturation when the rate of the process is very slow, not the other way around. To me, these are different phenotypes, which could perhaps be explained if vinculin was also needed in the ISC to transmit tension to the EB and prevent its differentiation, and removing it only in the EB may be revealing an additional, cell-autonomous requirement in maturation.

      When vinculin is knocked down in EBs, cells appear bigger than controls (as judged by the RFP+ nuclei in Figure 5E). This, compared to yw and vinc102.1 guts shown in Figure 4D suggests that these cells are more advanced in their differentiation. We have removed the sentence, to not confuse the reader, and clarified the text (see page 8). The discrepancy in the differentiation index between the esgGal4 and KluGal4 experiments might result from differences in the drivers, or an additional role of vinculin in EC differentiation, which we now mention in the text (page 8).

      So far, we have no evidence to support the idea that vinculin is also needed in the ISC to transmit tension to the EB and prevent its differentiation; for example, the lack of any phenotype when we knocked down vinculin specifically in ISCs (Figure 3) – notably, no increase in ISC ratio and no increase in cell density (unlike the reduction seen in Figure 1F with ISC+EB Knockdown).

      Another unexpected result, considering the authors' interpretation, is that the over expression of activated Vinculin (vinc[CO]) does not seem to have much of an effect. It does not change the phenotype of the wild type (where there is very little basal turnover to begin with) and it only partially rescues the phenotype of the vinc[102.1] mutants, when the rescue transgene vinc:RFP does. This again suggests that there may be tissue interactions, in development or adulthood, that may explain the vinc[102.1] phenotypes. It could also be that this incomplete rescue is due to the deleterious effect of Vinc[CO] (this is another reason for doing the vinc[102.1]; esg-Gal4; UAS-vincFL experiments suggested above). An alternative experiment to perform this rescue would be to knock down the vinculin gene while overexpressing the Vinc[CO] transgene - this may be possible with the RNAi HSM02356, which targets the vinculin 3'UTR and is unlikely to affect UAS-vinc[CO].

      Please refer to essential point 2c; as VincCO is not a simple overactive protein, like a constitutively active kinase, additional effects in the tissue can be expected.

      The claims of the authors would be more solid if the reporting of the phenotypes was more homogeneous, so one could establish comparisons. Sometimes conditions are analysed by differentiation index, others by extension of the GFP domains, others with phospho-histone-3 (PH3), others through nuclear size or density, and combinations. I do not think the authors should evaluate all these phenotypes in all conditions, but evaluating mitotic index and abundance of EBs and "activated EBs/early ECs" to monitor proliferation and differentiation rates should be done across the board (ISC, ISC+EB, EB drivers).

      To improve consistency, in all conditions we have compared cell types ratios and cellular density upon vinculin knockdown: see Figure 1E-F for ISC+EB, Figure 3B-C for ISC, and Figure 5 – figure supplement 1C-E for EB (with panel E newly added). As we did not observe any effect on ratio or density, we did not monitor cell proliferation for ISC knockdown.

      Nonetheless, we added the mitotic index for the ISC+EB driver (new Figure 1- figure supplement 1G) to be consistent with the results from the EB driver (Figure 5- figure supplement 1C).

      If the primary role of Vinculin is to induce contact inhibition in the ISC from the EB and prevent the EB differentiation and proliferation, one would expect that over expression of Vinc[CO] (or perhaps VincFL or sqhDD) in EBs should prevent or delay the differentiation and proliferation induced by a presumably orthogonal factor, like infection with Pseudomonas entomophila or Erwinia carotovora.

      This is indeed an exciting prediction, but outside the scope of this manuscript.

      3) - Relationship between Vinculin and alpha-Catenin

      The authors establish a very clear difference in the phenotypes between focal adhesion components and Vinculin, whereas the similarity of alpha-catenin and vinculin knockdowns is very compelling. Therefore I am sure the authors are on the right path with their interpretation of this part of the paper. However, some of the alpha-Catenin experiments are not very clear. The result from the rescue experiment of alpha-Cat knockdown with alpha-Cat-deltaM1b does not seem to show what the authors claim, and differentiation does not seem affected, only the amount of extant older ECs (which may be due to other reasons as this is a non-autonomous effect).

      Like the reviewer, we were surprised about the milder rescue with M1b compared to M1a and are unsure of the reasons for this. Nevertheless, quantifications of the differentiation and retention indices show significant differences for M1a and M1b compared to the FL control (Figure 6F-G), with phenotypes resembling the vinc knockdown. In Figure 6E, we have added a row of zoomed views to better highlight the similarity of phenotype between M1a and M1b and have acknowledged the mild differences in the text (bottom of page 9). For the sake of rigour, we think it is important to include results from both M1 deletions, even if there is not yet a logical reason to explain why they have different effects.

      Ulrich Tepass produced a UAS-alpha-catenin construct with the full deletion of the M1 region, perhaps that would show a clearer phenotype.

      This is a good suggestion, however for technical reasons this is not possible. The strategy devised by Ken Irvine and his group relies on rescuing the RNAi with an RNAi-resistant construct, which is not the case for the constructs generated in the Tepass lab. Furthermore, we cannot adopt a MARCM strategy as α-cat is too close to the centromere (80F).

      Also, the autonomy of the phenotype is difficult to address with these experiments alone. It would be expected that the phenotype of alpha-catenin knockdown should be similar to that of vinculin knockdown in the ISCs only or EBs only.

      This is not what our understanding of cadherin-mediated adhesion would predict. Forming cadherin adhesions requires cadherins and catenins in both cells, so we would expect similar phenotypes in ISCs only and EBs only. What is exciting about our findings is that the mechanosensitive machinery is not equally important in the two adherent cells, i.e. the EB is using vinculin to measure force on the contact and regulate differentiation, whereas the ISC needs to resist that force, but does not use vinculin to sense that force and regulate its behaviour.

      We have added new data showing the role of the vinculin/α-catenin interaction in ISCs or EBs by co-expressing α-Cat RNAi and α-Cat ΔM1a. We observed that absence of VBS in α-catenin has no effect in ISCs but promotes EB differentiation and increase in numbers (new Figure 6 – figure supplement 2), similar to our observations with vincRNAi (see text page 10).

      Reviewer #2 (Public Review):

      Vinculin functions as an important structural bridge that connects cadherin and integrin-mediated adhesions to the F-actin cytoskeleton. This manuscript carefully examined the mutant phenotype of vinc in the Drosophila intestine and found that vinc mutant in EBs causes significant increases of EB to EC differentiation, stem cell proliferation, and tissue growth. By analyzing the mutant phenotype of the cadherin adaptor alpha-catenin, the authors suggest that vinc functions through the cell-cell junctions instead of cell-ECM adhesions in EBs. Finally, manipulation of myosin activity in EBs phenocopies the vinc mutant, suggesting that vinculin is regulated by the mechanical tension transduced through the cytoskeleton.

      The authors claim that the vinculin mutant phenotype is opposite compared to the loss of the major integrin components, suggesting a function independent of the cell-ECM adhesions. However, the phenotype of vinc and integrin may not be completely opposite. Besides loss of ISCs, both mys and talin knockdown in ISCs clearly causes ISCs differentiation into EC cells (Fig.3A), suggesting a possible involvement of integrin in EB to EC differentiation. Therefore, it will be important to test the phenotype of integrin KD in EBs using EB-specific Gal4.

      The reviewer raised an important point. To test this we had to overcome the ISC defect of mys or talin RNAi, and specifically tested their function in enteroblasts using the KluGal4 driver. This revealed a similar phenotype of accelerated differentiation, assayed with the ReDDM system (see new Figure 6 -figure supplement 4). Thus, as the reviewer suggested both integrins and cadherins function in this process, we have amended the text to indicate this (see page 10, and sentence in the discussion page 12). It appears however that, unlike vinculin, they also have a key role in ISCs.

      The authors proposed a model that the cell-cell adhesion between ISC and EBs is required for vinculin mediated differentiation suppression. However, this model is not directly supported by the data as the EB-ISC adhesion and EB-EC adhesion have not been tested separately.

      This is an important point and we have amended the text to address this.

      We have focussed our model on EB-ISC adhesion as the adherens junctions are stronger between progenitor cells than EBs-ECs, and because of previous data from the Ohlstein lab (Choi et al, 2011) demonstrating the relationship between adherens junction stability and EB differentiation/ISC proliferation. Nonetheless we agree it is possible that EB-EC adhesion might contribute to this mechanism and have modified the last sentence of the result section (page 12) and the legend associated to the model (Figure 8) to take this into account.

      In addition, previous short-term manipulation of E-cadherin in ISCs and EBs shows no change in cell proliferation (Liang J. et al. 2017), which seems to contradict the authors' model. To support the authors' conclusion, long-term manipulation of E-cadherin in ISCs and EBs must be tested.

      A main feature of the vinculin phenotype is the regional accelerated differentiation observed in R4/5, potentially reflecting areas more subject to mechanical forces. Strikingly, this accelerated differentiation is rarely observed more anteriorly (such as region R4a/b studied in Liang et al, 2017). In fact, these regional differences were previously reported with E-cadherin knockdown by the Adachi-Yamada group (see Figure S1, Maeda et al, 2008). This highlights the importance of considering regional control of cell fate for the field.

      To test our hypothesis further, we have knocked down E-cadherin and α-catenin in EBs only (with Klu-Gal4). As shown in new Figure 6-figure supplement 3, we observed an accumulation of EBs as early as 3 days after induction, reminiscent of vinculin loss of function phenotype. Longer E-cadherin EB knock-down with KluGal4 appears particularly detrimental for survival as all flies died after 4 days of continuous RNAi expression preventing any further observations (see new text page 10). These observations support our model that junctional stability slows down EB differentiation. Our results are also in agreement with the work described in Choi et al (2011), whereby after 6 days of E-Cadherin RNAi expression in progenitors or EBs (using a different driver from us, Su(H)Gal4), the mitotic index increases, showing a feedback regulation on ISC proliferation. Therefore, our work and the Liang et al 2017 study are not in fact contradictory: the differences in the contribution of junctions to tissue dynamics might reflect the variety of molecular mechanisms involved along the small intestine.

      The result of MARCM analysis seems inconsistent with the rest of the data. In MARCM, no significant change of clone sizes is observed between WT and vinc mutant (Fig. 3E). However, vinc mutant in EBs clearly promotes ISC proliferation in other experiments such as esg>vinc-RNAi and the EB>vinc-RNAi (Fig. 1A, Fig. 4).

      Please refer to point 2a, essential revisions. We do not think that our results are at odds with the phenotype of vinculin knockdown using the ISC and ISC/EB drivers - we realise the text was misleading and hope to have clarified our observations in the revised manuscript (pages 7-8).

      In Fig. 4H, the authors suggest that the vinculin mutant prevents terminal EC formation. However, this may be simply caused by longer retention of Klu expression in the newborn ECs. To test if EB differentiation is indeed affected, the EC marker pdm1 staining will provide more convincing evidence. Another experiment to strengthen the conclusion will be the tracking of clone sizes generated from a single EB cell using the UAS-Flp system (such as G-trace).

      These are good suggestions to strengthen our findings. Unfortunately, we have not managed to obtain a working Pdm1 antibody (or other commercially available EC marker), which is why we assayed nuclear size and the tracking of KluReDDM cells. Therefore, we have not been able to test if Klu is retained in newborn ECs.

      As we agree this section of the text was misleading, we have rephrased and highlighted that the phenotype seen with KluGal4ReDDM resembles the accumulation of activated EBs and newborn ECs observed in vinc102.1 guts. (page 8).

      In Fig. 6D, the survival rate of WT and vinc mutant flies were compared. However, as there is no additional assay about the feeding behavior or metabolic rate, the systemic mutant of vinc does not provide a direct link between animal survival and intestinal EBs. Therefore, an experiment with vinc level specifically manipulated in the fly intestine using esg>vinc-RNAi or EB>vinc-RNAi will be more relevant.

      This experiment has now been added in Figure 8B and the text modified to acknowledge the limitations of the survival experiments with whole mutant flies (see point 3, essential revisions above).

      Reviewer #3 (Public Review):

      Prior work had identified essential roles for Integrin signaling in regulating intestinal stem cell (ISC) proliferation, and the authors' studies were motivated by trying to understand whether Vinculin (Vinc) might participate in this. However, Vinc is involved in mechanotransduction at both focal adhesions (FA) and adherens junctions (AJ), and their results revealed that Vinc phenotypes do not match those of FA proteins like Integrin. Conversely, they do match a-catenin (a-cat) RNAi phenotypes, and together with the localization of Vinc and the phenotypes associated with a-cat mutants that can't bind Vinc, this led to the conclusion that Vinc is acting at AJ rather than FA in this tissue. The results here are convincing, with clear presentation, nice images, and appropriate quantitation. It's also worth emphasizing that initial characterization of Vinc mutant flies failed to reveal any essential roles for this protein in Drosophila, so finding a mutant phenotype of any sort is significant.

      While the manuscript is strong as a descriptive report on the requirement for Vinc in the Drosophila intestine, it doesn't provide us with much understanding of the mechanism by which Vinc exerts its effects, nor how its requirement is linked to intestinal physiology.

      There is always more to learn, and the importance of our work so far is that it demonstrates a very specific role for vinculin as a mechanoeffector in regulating cell fate decisions in specific regions of the midgut, and provides the foundation for future work addressing the detailed mechanism of this function and its physiological role.

      Prior work has shown that mechanical stretching of intestines stimulates ISC proliferation (presumably through Integrin signaling), which is opposite to what Vinc does here.

      We would like to stress that very little mechanistic knowledge is available regarding how mechanical stretching stimulates ISC proliferation, in Drosophila or mammalian systems. To our knowledge, the only work linking gut mechanical stretching to cell fate decisions in Drosophila identified Msn/Hippo pathway (Li et al., 2018) and the ion channel Piezo requirement (He et al., 2018). We agree with the reviewer that integrin signaling would most likely contribute, especially given the composition of gels for organoid cultures (Gjorevski et al, 2016), yet the actual molecular mechanisms remain to be elucidated.

      There is a suggestion that Vinc is involved in maintaining homeostasis, but how it's regulated remains a bit murky. The authors report that reductions in myosin activity result in phenotypes reminiscent of Vinc phenotypes, which they interpret as supporting a model where Vinc's role is to help maintain tension at AJ. Of course it could also be reversed - maybe they are similar because tension is needed to maintain Vinc recruitment to AJ? The lack of epistasis tests, and of analysis of whether Vinc localization to AJ in EBs is affected by tension or by the M2 deletion of a-cat, leaves us uncertain as to the actual basis for the relationship between Vinc and myosin phenotypes.

      Thank you for all these suggestions. New experiments have been done to test the relationship between cellular tension and vinculin at junctions (see essential point 1).

    1. Author Response:

      Reviewer #3 (Public Review):

      Murphy et al. further develop the linked selection model of Elyashiv et al. (2016) and apply it to human genetic variation data. This model is itself an extension of the McVicker et al. (2009) paper, which developed a statistical inference method around classic background selection (BGS) theory (Hudson and Kaplan, 1995, Nordborg et al., 1996). These methods fit a composite likelihood model to diversity data along the chromosome, where the level of diversity is reduced by a local factor from some initial "neutral" level π0 down to observed levels. The level of reduction is determined by a combination of both BGS and the expected reduction around substitutions due to a sweep (though the authors state that these models are robust to partial and soft sweeps). The expected reduction factor is a function of local recombination rates and genomic annotation (such as exonic and phylogenetically conserved sequences), as well as the selection parameters (i.e. mutation rates and selection coefficients for different annotation classes). Overall, this work is a nice addition to an important line of work using models of linked selection to differentiate selection processes. The authors find that positive selection around substitutions explains little of the variation in diversity levels across the genome, whereas a background selection model can explain up to 80% of the variance in diversity. Additionally, their model seems to have solved a mystery of the McVicker et al. (2009) paper: why the estimated deleterious mutation rate was unreasonably high. Throughout the paper, the authors are careful not only in their methodology but also in their interpretation of the results. For example, when interpreting the good fit of the BGS model, the authors correctly point out that stabilizing selection on a polygenic trait can also lead to BGS-like reductions.

      Furthermore, the authors have carefully chosen their model's exogenous parameters to avoid circularity. The concern here is that if the input data into the model - in particular the recombination maps and segments likely to be conserved - are estimated or identified using signals in genetic variation, the model's good fit to diversity may be spurious. For example, often recombination maps are estimated from linkage disequilibrium (LD) data which is itself obtained from variation along the chromosome. Murphy et al. use a recombination map based on ancestry switches in African Americans which should prevent "information leakage" between the recombination map and the BGS model from leading to spuriously good fits. Likewise, the authors use phylogenetic conservation maps rather than those estimated from diversity reductions (such as McVicker et al.'s B maps) to avoid circularity between the conserved annotation track and diversity levels being modeled. Additionally, the authors have carefully assessed and modified the original McVicker et al. algorithm, reducing relative error (Figure A2).

      One could raise the concern that non-equilibrium demography confounds their results, but the authors have a very nice analysis in Section 7 of the supplementary material showing that their estimates are remarkably stable when the model is fit separately in different human populations (Figure A35). Supporting previous work that emphasizes the dependence between BGS and demography, the authors find evidence of such an interaction with a clever decomposition of variance approach (Figure A37). The consistency of BGS estimates across populations (e.g. Figures A35 and A36) is an additional strong bit of evidence that BGS is indeed shaping patterns of diversity; readers would benefit if some of these results were discussed in the main text.

      We appreciate the reviewer’s kind remarks. With regards to the results included in the main text vs the supplement, we attempted to strike a balance between keeping the main text communicative to a larger readership and providing experts with details they may find useful. We have, however, done our best to write the supplementary analyses clearly.

      I have three major concerns about this work. First, it's unclear how accurate the selection coefficient estimates are given the non-equilibrium demography of humans (pre-Out of Africa split, and thus not addressed by the separate population analyses). The authors do not make a big point about the selection coefficient estimates in the main section of the paper, so I don't find this to be a big problem. Still, some mention of this issue might be helpful to readers trying to interpret the results presented in the supplementary text.

      As the reviewer notes, we chose not to emphasize the inferred distributions of selection coefficients. Our main reason for this choice is the technical issue addressed in Appendix Section 1.5 (L561-564): “Second, thresholding potentially biases our estimates of the distribution of selection effects. While this bias is probably smaller than the bias without thresholding, its form and magnitude are not obvious. This is why we decided not to report the inferred distributions of selection effects in the Main Text.” We agree that if we were to focus on our estimates of the distribution of selection effects, the effects of demographic history would also need to be considered. This is, however, not the focus here.

      Second, I'm curious whether the composite likelihood BGS model could overfit any variance along the chromosome - even neutral variance. At some level, the composite likelihood approach may behave like a sort of smoothing algorithm, albeit with the functional form and parameters of a BGS model. The fact that there is information sharing across different regions with the same annotation class should in principle prevent overfitting to local noise. Still, there are two ways I think to address this overfitting concern. First, a negative neutral control could help - how much variation in diversity along the chromosome can this model explain in a purely neutral simulation? I imagine very little, likely less than 5%, but I think this paper would be much stronger with the addition of a negative control like this. Second, I think the main text should include the R2 values from out-of-sample predictions, rather than just the R2 estimates from the model fit on the entire data. For example, one could fit the model on 20 chromosomes, then use the estimated θB parameters to predict variation on the remaining two. The authors do a sort of leave-one-out validation at the window level (Figure A31); however, this may not be robust to linkage disequilibrium between adjacent windows in the way leaving out an entire chromosome would be.

      The two requested analyses were done and their results are described above, in response to essential revisions (p. 2-3 here). In brief, there is no overfitting of neutral patterns or otherwise. We elaborate on why this finding is expected below.
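      As a concrete illustration of the out-of-sample check proposed above, a leave-one-chromosome-out R² evaluation could look like the sketch below (hypothetical: synthetic data and a plain linear predictor stand in for the actual composite-likelihood BGS fit, and all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-window diversity and a BGS-like predictor,
# with each window labeled by its chromosome (22 autosomes).
n_windows = 2200
chrom = rng.integers(1, 23, size=n_windows)           # chromosome label per window
b_pred = rng.uniform(0.5, 1.0, size=n_windows)        # predicted reduction factor B
pi = 0.001 * b_pred + rng.normal(0, 1e-4, n_windows)  # observed diversity per window

def loco_r2(chrom, x, y):
    """Fit y ~ a*x + c on all chromosomes but one; report R^2 on the held-out one."""
    r2 = {}
    for c in np.unique(chrom):
        train, test = chrom != c, chrom == c
        a, const = np.polyfit(x[train], y[train], deg=1)
        resid = y[test] - (a * x[test] + const)
        ss_res = np.sum(resid ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        r2[int(c)] = 1.0 - ss_res / ss_tot
    return r2

out_of_sample = loco_r2(chrom, b_pred, pi)
print(np.mean(list(out_of_sample.values())))
```

      Holding out whole chromosomes, rather than single windows, keeps linkage disequilibrium between adjacent training and test windows from inflating the out-of-sample R².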

      Finally, I feel like this paper would be stronger with realistic forward simulations. The deterministic simulations described in the supplementary materials show the implementation of the model is correct, but it's an exact simulation under the model - and thus not testing the accuracy of the model itself against realistic forward simulations. However, this is a sizable task and efforts to add selection to projects like Standard PopSim are ongoing.

      We agree that forward simulations would be a nice addition, but believe that it is a project in itself. Indeed, a major complication is that when, for computational tractability, purifying selection is simulated in small populations with realistic population-scaled parameters, the reduction in diversity due to selection at unlinked sites has a major effect on neutral diversity levels (see, e.g., Robertson 1961). We hope to address this issue in future work. Meanwhile, we note that the theory that we rely on has been tested against simulations in the past (e.g., Charlesworth et al., 1993; Hudson and Kaplan, 1995; Nordborg et al., 1996).

    1. von Neumann was furious at him, furious that he would waste precious machine time 00:04:20 doing the assembly; that was clerical work, that was supposed to be for people, right? And so we saw the same story happen just a little bit later, when John Backus and friends came up with this idea they called Fortran, this so-called high-level language where you could write out your formulas as if you were writing mathematical notation, you could write out loops, and this was shown to the assembly programmers and once again they just 00:04:46 weren't interested, they didn't see any value in it, they just didn't get it. So I want you to keep this in mind as I talk about the four big ideas I'm going to cover today: it's easy to think that technology is always getting better, because of Moore's law, because computers are always getting more capable, but ideas that require people to unlearn what they've learned and think in new ways often face an 00:05:10 enormous amount of resistance. People think they know what they're doing, they think they know what programming is, this is programming, that's not programming, and so there's going to be a lot of resistance to adopting new ideas.

      Cumulative cultural learning seems to be stuck in its own recursive loop: the developers of the old paradigm become the old "guard", resistant to any change that will disrupt their position. Paradigm shifts are resisted tooth and nail.

    1. expecting you. I mean, why else would it suddenly have Kreacher see to the grand piano in the 2nd-floor parlour just the other day? It’s been played on before, with varying degrees of questionable talent and with no measures taken by neither house nor elf. Now, though, it’s suddenly tuned to perfection and just standing here, ready and waiting for you to come play it. I swear, if I ever get on good terms with this house, it’ll be none too soon.

      i could die my god they’re soulmates and the HOUSE knows that horrible awful house but still

    2. rather that you told me at all. And how. Godric. The sincerity. The raw honesty. Knowing you —  who I once used to think of as someone who’d do anything to hide his emotions at any cost, who I once used to love riling up at every opportunity, just for the chance to get a glimpse of any feelings hidden behind that solid mask of entitlement, disdain, and contempt, knowing you — this level of openness isn’t something you’ve practised often, if ever, and finding myself on the receiving end of your candid words, of your trust, it’s kinda a little breathtaking, to be honest.

      i am blushing for him

    1. Well, I must admit, the first time I woke up hard after dreaming of kissing you, it kind of freaked me out a little bit. Okay, to be perfectly honest, it freaked me out quite a lot. Especially since, well, what you told me it might represent, dreaming of someone on the night of the Summer Solstice? It all struck me out of the blue that night, and since I wasn’t at all prepared, well, I guess you can imagine the shock. The disbelief. The confusion. I’ve had months to process it since then, though, countless dreams to get used to the idea, and just so you know, it doesn’t freak me out anymore. Not even a little bit. If anything, ~~it turns me on~~ it intrigues me. And your sketches haven’t exactly lessened the allure.

      okay i like this story but this wasn’t like the admission i wanted from him? like maybe it’s too much to say they’re in love and maybe it’s bc i’m not someone who wants to focus on the physical… but idk man someone telling me they’re hard from dreaming of kissing me instead of oh hey i think i love you? or at least really like you? and also would like to smooch your face?? idk i love the summer solstice thing though

    1. with established worldwide fame and prestige, to lean on his previous successes to write more-of-the-same books and convert all the attention into cheap money. Just like Robert Kiyosaki did with his 942357 books about “Rich dad”.

      Many artists fall into a creativity trap caused by fame. They spend years developing a great work, but then when it's released, the industry requires they follow it up almost immediately with something even stronger.

      Jewel is a reasonable and perhaps typical example of this phenomenon. She spent several years writing the entirety of her first album Pieces of You (1995), which had three to four solid singles. As it became popular, she was rushed to release Spirit (1998), which, while it was ultimately successful, didn't measure up to the first album, which had far more incubation time. She wasn't able to build up enough material over time to more easily ride her initial wave of fame. Creativity on demand can be a difficult master, particularly when one is actively touring or supporting their first work while needing to write new material at the same time.

      (Compare the number of titles she self-wrote on album one vs. album two).

      M. Night Shyamalan is in a similar space, though as a director he preferred to direct scripts that he himself had written. While he'd had several years of writing and many scripts, some were owned by other production companies/studios which forced him to start from scratch several times and both write and direct at the same time, a process which is difficult to do by oneself.

      Another example is Robert Kiyosaki who spun off several similar "Rich Dad" books after the success of his first.

      Compare this with artists who have a note taking or commonplacing practice for maintaining the velocity of their creative output: - Eminem - stacking ammo - Taylor Swift - commonplace practice

    1. looking out for me, I know that, and I should be nothing but grateful for their care and concern. Yet, sometimes their well-meaning coddling makes me want to throw a tantrum. Especially when they gang up on me like that. When they do, I almost get the feeling they’re slipping into parental mode.

      same harry i feel this way all the time, maybe it’s our leo pride, idk but i think it’s okay to be annoyed by it. if it makes us bratty sometimes so be it. we’re independent, we don’t need to be coddled, we want to be seen and acknowledged and asked what we want and need so that we can tell you without having to just tell you

    1. Always with an important lens – our friend, Jens Nordvig reminds us – “foreign involvement is small in China. It is true that the high-yield bond market has a sizable USD component (mostly foreign). But relative to the US, where subprime exposure was sold around the world, it is a much more local (controllable) system.” It has been clear for months, there is Evergrande credit contagion – it’s just inside China at the moment.
    1. Faculty are expected to respond to students’ questions within 24 hours via email or some other communication method and to grade and provide substantive individualized feedback on assignments within seven days. In the discussion forum, instructors are asked to respond to three or four student posts each week and then to summarize the outcome of the discussion in an announcement at the end of the week. “It’s not a tremendous amount of work, but it does let the students know that the instructor is in the classroom versus just coming in and checking a couple times over the course of the semester,” Litt says.

      This seems a little excessive, even for a class of 40/24. I try to give most students at least one piece of "how to improve" on each assignment, but I can't always manage this.

    1. Author Response

      Reviewer #1 (Public Review):

      The relationship between genetic disease and adaptation is important for biomedical research as well as understanding human evolution. This topic has received considerable attention over the past several decades in human genetics research. The present manuscript provides a much more comprehensive and rigorous analysis of this topic. Specifically, the authors select a set of ~4000 human Mendelian disease genes and examine patterns of recent positive selection in these genes using the iHS and nSL tests (both haplotype tests) for selection. They then compare the signals of sweeps to control genes. Importantly, they match the control set to the disease genes based upon many different genomic variables, such as recombination rate, amount of background selection, expression level, etc. The authors find that there is a deficit of selective sweeps in disease genes. They test several hypotheses for this deficit. They find that the deficit of sweeps is stronger in disease genes at low recombination rate and those that have more disease mutations. From this, the authors conclude that strongly deleterious mutations could be impeding selective sweeps.

      Strengths

      The manuscript includes a number of important strengths:

      1) It tackles an important question in the field. The question of selection in disease genes has been very well-studied in the past, with conflicting viewpoints. The present study examines this topic in a rigorous way and finds a deficit of sweeps in disease genes.

      2) The statistical analyses are rigorously done. The genome is a confusing place and there can often be many reasons why a certain set of genes could differ from another set of genes, unrelated to the variable of interest. Di et al. carefully match on these genomic confounders. Thus, they rigorously demonstrate that sweeps are depleted in disease genes relative to control genes. Further, the pipeline for ranking the genes and testing for significance is solid.

      3) The Introduction of the manuscript nicely relates different evolutionary models and explanations to patterns that could be seen in the data. As such, the present manuscript isn't just merely an exploratory analysis of patterns of sweeps in disease genes. Rather, it tests specific evolutionary scenarios.

      Weaknesses

      1) The authors did not discuss or test a basic explanation for the deficit of sweeps in disease genes. Namely, certain types of genes, when mutated, give rise to strong Mendelian phenotypes. However, mutations in these genes do not result in variation that gives rise to a phenotype on which positive selection could occur. In other words, there are just different types of genes underlying disease and positive selection. I could think that such a pattern would be possible if humans are close to the fitness optimum and strong effect mutations (like those in Mendelian disease genes) result in moving further away from the fitness optimum. On the other hand, more weak effect mutations could be either weakly deleterious or beneficial and subject to positive selection. I'm not sure whether these patterns would necessarily be captured by the overall measures of constraint which the disease and non-disease genes were matched on.

      We thank the reviewer for suggesting that alternative explanation. It is indeed important that we compare it with our own explanation. To rephrase the reviewer’s suggestion, it is possible that disease genes may just have a different distribution of fitness effects of new mutations. Specifically, mutations in disease genes might have such large effects that they will consistently overshoot the fitness optimum, and thus not get closer to this optimum. This would prevent them from being positively selected. Two predictions can be derived from this potential scenario. First, we can predict a sweep deficit at disease genes, which is what we report. Second, we can also predict that disease genes should exhibit a deficit of older adaptation, not just recent adaptation detected by sweep signals. Indeed, the decrease in adaptation due to (too) large effect mutations would be a generic, intrinsic feature of disease genes regardless of evolutionary time. This means that under this explanation, we expect a test of long-term adaptation such as the McDonald-Kreitman test to also show a deficit at disease genes.

      This latter prediction differs from the prediction made by our favored explanation of interference between deleterious and advantageous variants. In this scenario, the sweep deficit at disease genes is caused by the presence of deleterious, and most importantly currently segregating disease variants. Because the presence of the segregating variants is transient during evolution, our explanation does not predict a deficit of long-term adaptation. We can therefore distinguish which explanation (the reviewer’s or ours) is the most likely based on the presence or absence of a long-term adaptation deficit at disease genes.

      To test this, we now compare protein adaptation in disease and control genes with two versions of the MK test called ABC-MK and GRAPES (refs). ABC-MK estimates the overall rate of adaptation, as well as the rates of weak and strong adaptation, and is based on Approximate Bayesian Computation. GRAPES is based on maximum likelihood. Both ABC-MK and GRAPES have been shown to provide robust estimates of the rate of protein adaptation thanks to evaluations with forward population simulations (refs). We find no difference in long-term adaptation between disease and control non-disease genes, as shown in new figure 4. This shows that the explanation put forward by the reviewer, an intrinsically different distribution of mutation effects at disease genes, is less likely than interference of currently segregating deleterious variants with recent, but not with older, long-term adaptation. We even show in the new figure 4 that disease genes and their controls have more, not less, strong long-term adaptation compared to the whole human genome baseline (new figure 4C). Also, disease genes in low recombination regions and with many disease variants have experienced more, not less, strong long-term adaptation than their controls. Therefore, far from overshooting the fitness optimum due to stronger fitness effects of mutations, it looks like these stronger fitness effects might in fact be more frequently positively selected in these disease genes.

      We now provide these new results P15L418:

      “Disease genes do not experience constitutively less long-term adaptive mutations

      A deficit of strong recent adaptation (strong enough to affect iHS or nSL) raises the question of what creates the sweep deficit at disease genes. As already discussed, purifying selection and other confounding factors are matched between disease genes and their controls, which excludes the possibility that these factors alone explain the sweep deficit. Purifying selection alone in particular cannot explain this result, since we find evidence that it is well matched between disease and control genes (Figure 2 and Figure 4-figure supplement 1). Furthermore, we find that the 1,000 genes in the genome with the highest density of conserved elements do not exhibit any sweep deficit (bootstrap test + block-randomized genomes FPR=0.18; Methods). Association with mendelian diseases, rather than a generally elevated level of selective constraint, is therefore what matters to observe a sweep deficit. What then might explain the sweep deficit at disease genes?

      As mentioned in the introduction, it could be that mendelian disease genes experience constitutively less adaptive mutations. This could be the case for example because mendelian disease genes tend to be more pleiotropic (Otto, 2004), and/or because new mutations in mendelian disease genes are large-effect mutations (Quintana-Murci, 2016) that often overshoot the fitness optimum, and cannot be positively selected as a result. Regardless of the underlying processes, a constitutive tendency to experience less adaptive mutations predicts not only a deficit of recent adaptation, but also a deficit of more long-term adaptation during evolution. The iHS and nSL signals of recent adaptation we use to detect sweeps correspond to a time window of at most 50,000 years, since these statistics have very little statistical power to detect older adaptation (Sabeti et al., 2006). In contrast, approaches such as the McDonald-Kreitman test (MK test) (McDonald and Kreitman, 1991) capture the cumulative signals of adaptive events since humans and chimpanzees had a common ancestor, likely more than six million years ago. To test whether mendelian disease genes have also experienced less long-term adaptation, in addition to less recent adaptation, we use the MK tests ABC-MK (Uricchio et al., 2019) and GRAPES (Galtier, 2016) to compare the rate of protein adaptation (advantageous amino acid changes) in mendelian disease gene coding sequences with that in confounding factor-matched non-disease controls (Methods). We find that overall, disease and control non-disease genes have experienced similar rates of protein adaptation during millions of years of human evolution, as shown by very similar estimated proportions of amino acid changes that were adaptive (Figure 5A,B,C,D,E). This result suggests that disease genes do not have constitutively less adaptive mutations.
      This implies that processes that are stable over evolutionary time, such as pleiotropy or a tendency to overshoot the fitness optimum, are unlikely to explain the sweep deficit at disease genes. If disease genes have not experienced less adaptive mutations during long-term evolution, then the process at work during more recent human evolution has to be transient, and has to have limited only recent adaptation. It is also noteworthy that both disease genes and their controls have experienced more coding adaptation than genes in the human genome overall (Figure 5A), especially more strong adaptation according to ABC-MK (Figure 5C). The fact that the baseline long-term coding adaptation is lower genome-wide, but similarly higher in disease genes and their controls, also shows that the matched controls do play their intended role of accounting for confounding factors likely to affect adaptation. The fact that long-term protein adaptation is not lower at disease genes also excludes that purifying selection alone can explain the sweep deficit at disease genes, because purifying selection would then also have decreased long-term adaptation. A more transient evolutionary process is thus more likely to explain our results.”

      Then P22L613: “More importantly, the fact that constitutively less adaptation at disease genes combined with more power to detect sweeps in low recombination regions does not explain our results is made even clearer by the fact that disease genes in low recombination regions and with many disease variants have in fact experienced more, not less, long-term adaptation according to an MK analysis using both ABC-MK and GRAPES (Figure 5F,G,H,I,J). ABC-MK in particular finds that there is a significant excess of long-term strong adaptation (Figure 5H, P<0.01) in disease genes with low recombination and with many disease variants, compared to controls, but similar amounts of weak adaptation (Figure 5G, P=0.16). It might be that disease genes with many disease variants are genes with more mutations with stronger effects that can generate stronger positive selection. The potentially higher supply of strongly advantageous variants at these disease genes makes it all the more notable that they have a very strong sweep deficit in recent evolutionary times. This further strengthens the evidence in favor of interference during recent human adaptation: the limiting factor does not seem to be the supply of strongly advantageous variants, but instead the ability of these variants to have generated sweeps recently by rising fast enough in frequency.”
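      For readers less familiar with the MK framework discussed in this response, the basic quantity behind such tests, α, the estimated proportion of amino acid substitutions fixed by positive selection, can be computed from polymorphism and divergence counts. Below is a minimal illustration with made-up counts (ABC-MK and GRAPES are far more sophisticated, modeling the site frequency spectrum and the distribution of fitness effects rather than this single ratio):

```python
def mk_alpha(dn, ds, pn, ps):
    """Proportion of nonsynonymous substitutions inferred to be adaptive:
    alpha = 1 - (Ds * Pn) / (Dn * Ps)."""
    return 1.0 - (ds * pn) / (dn * ps)

# Hypothetical counts for a set of genes:
# Dn/Ds = nonsynonymous/synonymous substitutions (human vs. chimpanzee),
# Pn/Ps = nonsynonymous/synonymous polymorphisms within humans.
alpha = mk_alpha(dn=80, ds=100, pn=60, ps=120)
print(alpha)  # 1 - (100*60)/(80*120) = 0.375
```

      Because it integrates over millions of years of divergence, α captures long-term adaptation, whereas iHS and nSL capture only roughly the last 50,000 years; that contrast is what lets the authors distinguish a transient interference effect from a constitutive deficit of adaptive mutations.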

      2) While I think the authors did a superb job of controlling for genome differences between disease and non-disease genes, the analysis of separating regions by recombination rate and number of disease mutations does not seem as rigorous. Specifically, the authors tested for enrichment of sweeps in disease genes vs control and then stratified that comparison by recombination rate and/or number of disease mutations. While this nicely matches the disease genes to the control genes, it is not clear whether the high recombination rate genes differ in other important attributes from the low recombination rate genes. Thus, I worry whether there could be a confounder that makes it easier/harder to detect an enrichment/deficit of sweeps in regions of low/high recombination.

      We thank the reviewer for emphasizing the need for more controls when comparing our results in low or high recombination regions. We have now compared the confounding factors between low recombination disease genes and high recombination disease genes, as classified in the manuscript. As shown in the new Figure 6-figure supplement 1, confounding factors do not differ substantially between low and high recombination disease genes, and are all within a range of +/- 25% of each other. It would take a larger difference for any confounding factor to explain the sharp sweep deficit difference observed between the low and high recombination disease genes. The only factor with a 35% difference between low and high recombination mendelian disease genes is McVicker’s B, but this is completely expected; B is expected to be lower in low recombination regions.

      We now write P20L569: “Further note that only moderate differences in confounding factors between low and high recombination mendelian disease genes are unlikely to explain the sweep deficit difference (Figure 6-figure supplement 1).”

      Regarding the potential confounding effect of statistical power to detect sweeps differing in low and high recombination regions, please see our earlier response to main point 2.

      Reviewer #2 (Public Review):

      This paper seeks to test the extent to which adaptation via selective sweeps has occurred at disease-associated genes vs genes that have not (yet) been associated with disease. While there is a debate regarding the rate at which selective sweeps have occurred in recent human history, it is clear that some genes have experienced very strong recent selective sweeps. Recent papers from this group have very nicely shown how important virus interacting proteins have been in recent human evolution, and other papers have demonstrated the few instances in which strong selection has occurred in recent human history to adapt to novel environments (e.g. migration to high altitude, skin pigmentation, and a few other hypothesized traits).

      One challenge in reading the paper was that I did not realize the analysis was exclusively focused on Mendelian disease genes until much later (the first reference is not until the end of the introduction on pages 7-8 and then not at all again until the discussion, despite referring to "disease" many times in the abstract and throughout the paper). It would be preferred if the authors indicated that this study focused on Mendelian diseases (rather than a broader analysis that included complex or infectious diseases). This is important because there are many different types of diseases and disease genes. Infectious disease genes and complex disease genes may have quite different patterns (as the authors indicate at the end of the introduction).

      We want to apologize profusely for this avoidable mistake. We have now made it clearer from the very start of the manuscript that we focus on mendelian non-infectious disease genes. We have modified the title and the abstract accordingly, specifying mendelian and non-infectious as required.

      The abstract states "Understanding the relationship between disease and adaptation at the gene level in the human genome is severely hampered by the fact that we don't even know whether disease genes have experienced more, less, or as much adaptation as non-disease genes during recent human evolution." This seems to diminish a large body of work that has been done in this area. The authors acknowledge some of this literature in the introduction, but it would be worth toning down the abstract, which suggests there has been no work in this area. A review of this topic by Lluis Quintana-Murci was cited, but diminished many of the developments that have been made in the intersection of population genetics and human disease biology. Quintana-Murci says "Mendelian disorders are typically severe, compromising survival and reproduction, and are caused by highly penetrant, rare deleterious mutations. Mendelian disease genes should therefore fit the mutation-selection balance model, with an equilibrium between the rate of mutation and the rate of risk allele removal by purifying selection", and argues that positive selection signals should be rare among Mendelian disease genes. Several other examples come to mind. For example, comparing Mendelian disease genes, complex disease genes, and mouse essential genes was the major focus of a 2008 paper, which pointed out that Mendelian disease genes exhibited much higher rates of purifying selection while complex disease genes exhibited a mixture of purifying and positive selection. This paper was cited, but only in regard to their findings of complex diseases. 
      A similar analysis of McDonald-Kreitman tables was performed around Mendelian disease genes vs non-disease genes, and found "that disease genes have a higher mean probability of negative selection within candidate cis-regulatory regions as compared to non-disease genes, however this trend is only suggestive in EAs, the population where the majority of diseases have likely been characterized". Both of these studies focused on polymorphism and divergence data, which target older instances of selection than iHS and nSL statistics used in the present study (but should have substantial overlap since iHS is not sensitive to very recent selection like the SDS statistic). Regardless, the findings are largely consistent, and I believe warrant a more modest tone.

      We thank the reviewer for their recommendation. We should have written more about what is currently well known or unknown about recent adaptation in disease genes, and in more nuanced terms. Instead of writing “Understanding the relationship between disease and adaptation at the gene level in the human genome is severely hampered by the fact that we don't even know whether disease genes have experienced more, less, or as much adaptation as non-disease genes during recent human evolution”, we now write in the new abstract:

      “Despite our expanding knowledge of gene-disease associations, and despite the medical importance of disease genes, their recent evolution has not been thoroughly studied across diverse human populations. In particular, recent genomic adaptation at disease genes has not been characterized as well as long-term purifying selection and long-term adaptation. Understanding the relationship between disease and adaptation at the gene level in the human genome is hampered by the fact that we don’t know whether disease genes have experienced more, less, or as much adaptation as non-disease genes during the last ~50,000 years of recent human evolution.”

      We also toned down the start of the introduction. We now write P3L74:

      “Despite our expanding knowledge of mendelian disease gene associations, and despite the fact that multiple evolutionary processes might connect disease and genomic adaptation at the gene level, these connections are yet to be studied more thoroughly, especially in the case of recent genomic adaptation.”

      Although we agree that others have made extensive efforts to characterize older adaptation or purifying selection at disease genes compared to non-disease genes, we still believe that our results are novel and more conclusive about recent positive selection. Our initial statement was however poorly phrased. To our knowledge, our study is the first to look at the issue using specifically sweep statistics that have been shown to be robust to background selection, while also controlling for confounding factors. These sweep statistics have sensitivity for selection events that occurred in the past 30,000 or at most 50,000 years of human evolution (Sabeti et al. 2006). This is a very different time scale compared to the millions of years of adaptation (since divergence between humans and chimpanzees) captured by MK approaches.

      We also want to note that we did cite the Blekhman et al. paper for their result of stronger purifying selection in our initial manuscript. It is true however that we did not specify mendelian disease genes, which was confusing. We want to apologize again for it:

      From the earlier manuscript: “Multiple recent studies comparing evolutionary patterns between human disease and non-disease genes have found that disease genes are more constrained and evolve more slowly (lower ratio of nonsynonymous to synonymous substitution rate, dN/dS, in disease genes) (Blekhman et al., 2008; Park et al., 2012; Spataro et al., 2017)”

      “Among other confounding factors, it is particularly important to take into account evolutionary constraint, i.e the level of purifying selection experienced by different genes. A common intuition is that disease genes may exhibit less adaptation because they are more constrained (Blekhman et al., 2008)”

      It is important to remember that, as we mention in the introduction, previous comparisons did not take potential confounding factors at all into account. It is therefore unclear whether their conclusions were specific to disease genes, or due to confounding factors. We have now made this point clearer in the introduction, as we believe that we have made a substantial effort to control for confounding factors, and that it is a substantial departure from previous efforts:

      P7L201: “In contrast with previous studies, we systematically control for a large number of confounding factors when comparing recent adaptation in human mendelian disease and nondisease genes, including evolutionary constraint, mutation rate, recombination rate, the proportion of immune or virus-interacting genes, etc. (please refer to Methods for a full list of the confounding factors included).”.

      P9L253: “These differences between disease and non-disease genes highlight the need to compare disease genes with control non-disease genes with similar levels of selective constraint. To do this and compare sweeps in mendelian disease genes and non-disease genes that are similar in ways other than being associated with mendelian disease (as described in the Results below, Less sweeps at mendelian disease genes), we use sets of control non-disease genes that are built by a bootstrap test to match the disease genes in terms of confounding factors (Methods)”.
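      To make the matching procedure concrete, here is a minimal Python sketch of one bootstrap draw of confounder-matched control genes. The factor names (`rec`, `gc`) and the quantile-binning scheme are hypothetical simplifications for illustration; the actual pipeline matches many more confounding factors measured in fixed-size genomic windows.

```python
import random

def matched_controls(disease, pool, factors, bins=4, seed=0):
    """Draw one bootstrap set of control genes whose confounding-factor
    profile matches that of the disease genes.

    disease, pool: lists of dicts mapping factor name -> value.
    factors: names of confounders to match on.
    Each factor is discretized into quantile bins estimated from the
    control pool; for each disease gene, a control is sampled (with
    replacement) from the same joint confounder bin.
    """
    rng = random.Random(seed)

    # Quantile cut-points per factor, estimated from the control pool.
    def cuts(f):
        vals = sorted(g[f] for g in pool)
        return [vals[int(len(vals) * i / bins)] for i in range(1, bins)]
    cutpoints = {f: cuts(f) for f in factors}

    def bin_of(gene):
        return tuple(sum(gene[f] > c for c in cutpoints[f]) for f in factors)

    # Index the control pool by joint confounder bin.
    by_bin = {}
    for g in pool:
        by_bin.setdefault(bin_of(g), []).append(g)

    # For each disease gene, sample one control from the matching bin.
    controls = []
    for g in disease:
        candidates = by_bin.get(bin_of(g))
        if candidates:  # skip bins with no available control
            controls.append(rng.choice(candidates))
    return controls
```

Repeating such draws many times yields a bootstrap distribution of the sweep statistic in matched controls, against which the observed value at disease genes can be compared.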

      Furthermore, we have now added a comparison of older adaptation in disease and non-disease genes using a recent version of the MK test called ABC-MK, that can take background selection and other biases such as segregating weakly advantageous variants into account. Also controlling for confounding factors, we find no difference in older adaptation between disease and non-disease genes (please see our response to main point 2).

      Therefore, contrary to the reviewer’s claim that the sweep statistics and MK approaches should have substantial overlap, we now show that it is clearly not the case. We further show that the lack of overlap is expected under our explanation of our results based on interference between recessive deleterious and advantageous variants (see our responses to main point 1 and to reviewer 1 weakness 1).

      Previous analyses used much smaller mendelian disease gene datasets and older polymorphism datasets and, critically, did not control for confounding factors. We also note that reference 3 (Torgerson et al. Plos Genetics 2009) does not make any claim about recent positive selection in mendelian disease genes compared to other genes. Their dataset at the time also only included 666 mendelian disease genes, versus the ~4,000 currently known.

      In short, we do think that we have a claim for novelty, but the reviewer is entirely right that we did a poor job of giving due credit to previous important work. These previous studies deserved much better credit than no credit at all. We want to thank the reviewer for sparing us the embarrassment of not citing important work.

      We now cite the papers referenced by the reviewer as appropriate in the introduction, based on the scope of their results:

      P3L93: “Multiple recent studies comparing evolutionary patterns between human mendelian disease and non-disease genes have found that mendelian disease genes are more constrained and evolve more slowly (Blekhman et al., 2008; Quintana-Murci, 2016; Spataro et al., 2017; Torgerson et al., 2009). An older comparison by Smith and Eyre-Walker (Smith and Eyre-Walker, 2003) found that disease genes evolve faster than non-disease genes, but we note that the sample of disease genes used at the time was very limited.”

      P5L134 “Among possible confounding factors, it is particularly important to take into account evolutionary constraint, i.e the level of purifying selection experienced by different genes. A common intuition is that mendelian disease genes may exhibit less adaptation because they are more constrained (Blekhman et al., 2008; Spataro et al., 2017; Torgerson et al., 2009),”

      There are some aspects of the current study that I think are highly valuable. For example, the authors study most of the 1000 Genomes Project populations (though the text should be edited since the admixed and South Asian populations are not analyzed, so all 26 populations are not included; only the populations from Africa, East Asia, and Europe are analyzed, for a total of 15 populations in Figures 2-3). Comparing populations allows the authors to understand how signatures of selection might be shared vs population-specific.
 
      Unfortunately, the signal that the authors find regarding the depletion of positive selection at Mendelian disease genes is almost entirely restricted to African populations. The signal is not significant in East Asia or Europe (Figure 2 clearly shows this). It seems that the mean curve of the fold-enrichment as a function of rank threshold (Figure 3) trends downward in East Asian and European populations, but the sampling variance is so large that the bootstrap confidence intervals overlap 1. The paper should therefore revise the sentence "we find a strong depletion in sweep signals at disease genes, especially in Africa" to "only in Africa".
 
      This opens the question of why the authors find the particular pattern they find. The authors do point out that a majority of Mendelian disease genes are likely discovered in European populations, so is it that the genes' functions predate the Out-of-Africa split? They most certainly do. It is possible that the larger long-term effective population size of African populations resulted in stronger purifying selection at Mendelian disease genes compared to European and East Asian populations, where smaller effective population sizes due to the Out-of-Africa bottleneck diminished the signal of most selective sweeps, and hence there is little differentiation between categories of genes ("drift noise").
      It is also surprising to note that the authors find selection signatures at all using iHS in African populations, while a previous study using the same statistic could not differentiate signals of selection from neutral demographic simulations (Granka et al.).

      We want to thank the reviewer profusely for putting us on the right track thanks to their insightful suggestion. As described in our response to reviewer 1 weakness 1, we have now shown with simulations that the interference of deleterious variants on advantageous variants is strongly decreased during a bottleneck of a magnitude similar to the Out of Africa bottlenecks experienced by East Asian and European populations. This decrease of interference is likely strong enough to not require any other explanation, even if other processes may also be at work, such as a decrease of the sweeps signals as suggested by the reviewer.

      About the Granka et al. paper, the last author of the current manuscript has already shown in a previous paper (ref) that this type of approach to quantifying recent adaptation is likely to be severely underpowered due to a number of confounding factors, notably the comparison of genic and non-genic windows that are not far enough from each other to avoid overlapping the same sweep signals. Our results are also based on much more recent and less biased sets of SNPs used to measure the sweep statistics.

      The authors find that there is a remarkably (in my view) similar depletion across all but one MeSH disease classes. This suggests that "disease" is likely not the driving factor, but that Mendelian disease genes are a way of identifying where there are strongly selected deleterious variants recurrently arising and preventing positively selected variants. This is a fascinating hypothesis, and is corroborated by the finding that the depletion gets stronger in genes with more Mendelian disease variants. In this sense, the authors are using Mendelian disease genes as a proxy for identifying targets of strong purifying selection, and are therefore not actually studying Mendelian disease genes. The signal could be clearer if the test set is based on the factor that is actually driving the signal.

      Based on the reviewer’s comment, we have now better explained why our results are unlikely to be a generic property of purifying selection alone. As we explain in our response to main point 3, our results cannot be explained by purifying selection alone, because we match purifying selection between disease genes and the controls. Indeed, we now show with additional MK analyses and GERP-based analyses that our controls for confounding factors already account for purifying selection. This is shown by the fact that disease genes and their controls have similar distributions of deleterious fitness effects.

      In addition, we added a comparison that shows that purifying selection alone does not explain our results. Instead of comparing sweeps at disease and non-disease genes, we compared sweeps (in Africa) between the 1,000 genes with the highest density of conserved, constrained elements and other genes in the genome. If purifying selection were the factor that drives the sweep deficit at disease genes, then we should see a sweep deficit among the genes with the most conserved, constrained elements compared to other genes in the genome. However, we see no such sweep deficit at genes with a high density of conserved, selectively constrained elements (bootstrap test + block randomization of genomes, FPR=0.18). See P15L424. Note that for this comparison we had to remove the matching of confounding factors corresponding to functional and purifying selection densities (new Methods P40L1131).
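      The idea of estimating a false positive rate by block randomization of the genome can be sketched as follows. This is an illustrative toy version, assuming a simple circular rotation of gene labels along the genome, which preserves the spatial clustering of labels and sweep signals; it is not the exact procedure used in the manuscript.

```python
import random

def block_randomization_fpr(values, is_disease, n_perm=1000, seed=0):
    """Estimate how often a sweep deficit as strong as the observed one
    arises by chance, by circularly rotating the disease/non-disease
    labels along the genome. Rotation keeps neighboring genes' labels
    together, preserving the spatial autocorrelation that makes a naive
    per-gene permutation anticonservative.

    values: per-gene sweep statistic, in genomic order.
    is_disease: per-gene boolean labels, in the same order.
    """
    rng = random.Random(seed)
    n = len(values)

    def deficit(labels):
        d = [v for v, lab in zip(values, labels) if lab]
        c = [v for v, lab in zip(values, labels) if not lab]
        # Positive values mean disease genes have weaker sweep signals.
        return sum(c) / len(c) - sum(d) / len(d)

    observed = deficit(is_disease)
    hits = 0
    for _ in range(n_perm):
        shift = rng.randrange(1, n)
        rotated = is_disease[shift:] + is_disease[:shift]
        hits += deficit(rotated) >= observed
    return hits / n_perm
```

The fraction of rotations producing a deficit at least as large as the observed one gives the false positive rate quoted above (e.g., FPR=0.18 for the conserved-element comparison).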

      Again, our results are better explained not just by purifying selection alone, but more specifically by the presence of interfering, segregating deleterious variants. It is perfectly possible to have highly constrained parts of the genome without having many deleterious segregating variants at a given time in evolution.

      The similarity across MeSH classes can be readily explained if what matters is interference with deleterious segregating variants. Because all types of diseases have deleterious segregating variants, then it is not surprising that different MeSH disease categories have a similar sweep deficit. We make that point clearer in the revised manuscript:

      P26L707: “The sweep deficit is comparable across MeSH disease classes (Figure 8), suggesting that the evolutionary process at the origin of the sweep deficit is not disease-specific. This is compatible with a non-disease specific explanation such as recessive deleterious variants interfering with adaptive variants, irrespective of the specific disease type.”.

      One of the most important steps that the authors undertake is to control for possible confounding factors. The authors identify 22 possible confounding factors, and find that several confounding factors have different effects in Mendelian disease genes vs non-disease genes. The authors do a great job of implementing a block-bootstrap approach to control for each of these factors. The authors talk specifically about some of these (e.g. PPI), but not others that are just as strong (e.g. gene length). I am left wondering how interactions among other confounding factors could impact the findings of this paper. I was surprised to see a focus on disease variant number, but not a control for CDS length. As I understand it, gene length is defined as the entire genomic distance between the TSS and TES. Presumably genes with larger coding sequence have more potential for disease variants (though number of disease variants discovered is highly biased toward genes with high interest). CDS length would be helpful to correct for things that pS does not correct for, since pS is a rate (controlling for CDS length) and does not account for the coding footprint (hence pS is similar across gene categories).

      Based on our response to the previous point, it is clear that a high density of coding sequences, or of conserved, constrained sequence in general, is not enough to explain our results. Furthermore, we want to remind the reviewer that we already control for coding sequence length by controlling for coding density, since we use windows of constant size.

      The authors point out that it is crucial to get the control set right. This group has spent a lot of time thinking about how to define a control set of genes in several previous papers. But it is not clear if complex disease genes and infectious disease genes are specifically excluded or not. Number of virus interactions was included as a confounding factor, so VIPs were presumably not excluded. It is clear that the control set includes genes not yet associated with Mendelian disease, but the focus is primarily on the distance away from known Mendelian disease genes.

      We are sorry that we were not more explicit from the start of the manuscript. We now make it clearer throughout the entire manuscript what the set of disease genes does and does not include, by repeating that we focus specifically on mendelian, non-infectious disease genes. By non-infectious, we mean that we excluded genes with known infectious disease-associated variants. This does not exclude most virus-interacting genes, since most of them are not associated at the genetic variant level with infectious diseases. It is also important to note that the effect of virus interactions is accounted for by matching the number of interacting viruses between mendelian disease genes and controls.

      We write P29L818: “By non-infectious, we mean that we excluded genes with known infectious disease-associated variants. This does not exclude most VIPs since most of them are not associated at the genetic variant level with infectious diseases. It is important to note that the effect of virus interactions is accounted for by matching the number of interacting viruses between mendelian disease genes and controls.”

      Minor comments:

      On page 13, the authors say "This artifact is also very unlikely due to the fact that recombination rates are similar between disease and non-disease genes (Figure 1)." However, Figure 1 shows that "deCode recombination 50kb" is clearly higher in disease genes and comparable at 500kb. The increased recombination rate locally around disease genes seems to contradict the argument formulated in this paragraph.

      We apologize for the lack of precision in this sentence. What we meant is that the recombination rates are not different enough for the mentioned hypothetical artifact to be able to explain our results. We also neglected to mention at this point in the manuscript that we match recombination rates between disease genes and controls. We now use more precise language:

      P28L772 “The recombination rate at disease genes is also only slightly different from the recombination rate at non-disease genes (Figure 1), and we match the recombination rate between disease genes and controls.”.

      Reviewer #3 (Public Review):

      In this paper, the authors ask whether selective sweeps (as measured by the iHS and nSL statistics) are more or less likely to occur in or near genes associated with Mendelian diseases ("disease genes") than those that are not ("non-disease genes"). The main result put forward by the authors is that genes associated with Mendelian diseases are depleted for sweep signatures, as measured by the iHS and nSL statistics, relative to those which are not.

      The evidence for this comes from an empirical randomization scheme to assess whether genes with signatures of a selective sweep are more likely to be Mendelian disease genes than not. The analysis relies on a somewhat complicated sliding threshold scheme that effectively acts to incorporate evidence from both genes with very large iHS/nSL values and those with weaker signals, while upweighting the signal from the genes with the strongest iHS/nSL values. Although I think the analysis could be presented more clearly, it does seem like a better analysis than a simple outlier test, if for no other reason than that the sliding threshold scheme can be seen as a way of averaging over uncertainty in where one should set the threshold in an outlier test (along with some further averaging across the two different sweep statistics, and the size of the window around disease-associated genes over which the sweep statistics are averaged). That said, the particular approach to doing so is somewhat arbitrary, but it's not clear that there's a good way to avoid that.

      In addition to reporting that extreme values of iHS/nSL are generally less likely at Mendelian disease genes, the authors also report that this depletion is strongest in genes from low recombination regions, or which have >5 specific variants associated with disease.

      Drawing on this result, the authors read this evidence to imply that sweeps are generally impeded or slowed in the vicinity of genes associated with Mendelian diseases due to linkage to recessive deleterious variants, which hitchhike to high enough frequencies that selection against homozygotes becomes an important form of interference. This phenomenon was theoretically characterized by Assaf et al 2015, who the authors point to for support. That such a phenomenon may be acting systematically to shape the process of adaptation is an interesting suggestion. It's a bit unclear to me why the authors specifically invoke recessive deleterious mutations as an explanation, though. Presumably any form of interference could create the patterns they observe? This part of the paper is, as the authors acknowledge, speculative at this point.

      We thank the reviewer for their comments. We are sorry that we did not provide a clear explanation of why recessive deleterious mutations specifically are expected to interfere more than other types of deleterious variants. This was shown by Assaf et al. (2015), and we should have stated it explicitly. The reason recessive deleterious variants interfere more than additive or dominant ones is that they can hitchhike together with an adaptive variant to substantial frequencies before negative selection kicks in, which happens only once a significant number of individuals homozygous for the deleterious mutation start appearing in the population. In contrast, dominant mutations never reach comparably high frequencies in linkage with an adaptive variant, because they are selected against as soon as they appear in the population.

      We now write P18L496: “In diploid species including humans, recessive deleterious mutations specifically have been shown to have the ability to slow down, or even stop the frequency increase of advantageous mutations that they are linked with (Assaf et al., 2015). Dominant variants do not have the same interfering ability, because they do not increase in frequency in linkage with advantageous variants as much as recessive deleterious variants do, before the latter can be “seen” by purifying selection when enough homozygous individuals emerge in a population (Assaf et al., 2015).”

      We have also confirmed with SLiM forward simulations that recessive deleterious variants interfere with adaptive variants much more than dominant ones (Table 1).
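      The dynamic can also be illustrated with a deterministic toy model (this is a didactic sketch, not the SLiM simulations used in the revision; the selection parameters are illustrative): a haplotype carrying an advantageous allele fully linked to a deleterious allele rises to substantial frequency when the deleterious allele is recessive and then stalls once homozygotes become common, whereas a dominant deleterious allele suppresses the haplotype from the start.

```python
def linked_haplotype_trajectory(s_adv, s_del, h, p0=0.01, gens=500):
    """Deterministic frequency trajectory of a haplotype carrying an
    advantageous allele (additive benefit s_adv) fully linked to a
    deleterious allele (cost s_del, dominance h).

    h = 0: deleterious cost paid only by haplotype homozygotes (recessive).
    h = 1: deleterious cost already paid by heterozygotes (dominant).
    """
    w_hom = (1 + 2 * s_adv) * (1 - s_del)   # hap/hap: cost always expressed
    w_het = (1 + s_adv) * (1 - h * s_del)   # hap/wt: cost expressed only if dominant
    p, traj = p0, [p0]
    for _ in range(gens):
        q = 1 - p
        w_bar = p * p * w_hom + 2 * p * q * w_het + q * q
        p = (p * p * w_hom + p * q * w_het) / w_bar
        traj.append(p)
    return traj

recessive = linked_haplotype_trajectory(0.05, 0.2, h=0.0)
dominant = linked_haplotype_trajectory(0.05, 0.2, h=1.0)
```

With these illustrative parameters, the recessive case rises from 1% toward an interior equilibrium near (w_het − 1)/(2·w_het − w_hom − 1) ≈ 0.23 and never fixes, while the dominant case declines immediately: the deleterious variant can interfere with the sweep only when it first hitchhikes unseen by selection.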

      I'm also a bit concerned by the fact that the signal is only present in the African samples studied. The authors suggest that this is simply due to stronger drift in the history of European and Asian samples. This could be, but as a reader it's a bit frustrating to have to take this on faith.

      We thank the reviewer for pointing out this issue with our manuscript. We have now shown, as detailed above in our response to main point 1 and reviewer 1 weakness 1, that a weaker sweep deficit at disease genes in Europe and East Asia is an expected feature under the interference explanation, due to the weakened interference of recessive deleterious variants during bottlenecks of the magnitude observed in Europe and East Asia. We therefore believe that these new results strengthen our previous claim regarding the role of interference between deleterious and advantageous variants. We want to thank the reviewer for prompting us to examine the difference between results in Africa and out of Africa, as the manuscript is now more consistent and our results substantially better explained.

      There are other analyses that I don't find terribly convincing. For example, the analysis showing that iHS signals are no less depleted at genes associated with >5 diseases than at genes associated with 1 does little to convince me of anything. It's not particularly clear that the number of associated diseases for a given gene should predict the degree of pleiotropy experienced by a variant emerging in that gene with some kind of adaptive function. Failure to find any association here might just mean that this is not a particularly good measure of the relevant pleiotropy.

      We agree with the reviewer that the number of associated diseases may not be a good measure of pleiotropy. Unfortunately, to our knowledge there is currently no good measure of gene pleiotropy in human genomes. Given that the evidence in favor of interference of deleterious variants is now strengthened, we have chosen to remove this analysis from the manuscript. As we now explain throughout the manuscript, pleiotropy is an unlikely explanation in the first place, given that disease genes have not experienced less long-term adaptation (see the details on our new MK test results in the response to main point 2).

      P16L447: “We find that overall, disease and control non-disease genes have experienced similar rates of protein adaptation during millions of years of human evolution, as shown by very similar estimated proportions of amino acid changes that were adaptive (Figure 5A,B,C,D,E). This result suggests that disease genes do not have constitutively less adaptive mutations. This implies that processes stable over evolutionary time such as pleiotropy, or a tendency to overshoot the fitness optimum, are unlikely to explain the sweep deficit at disease genes.”.

      A last parting thought is that it's not clear to me that the authors have excluded the hypothesis that adaptive variants simply arise less often near genes associated with disease. The fact that the signal is strongest in regions of low recombination is meant to be evidence in favor of selective interference as the explanation, but it is also the regime in which sweeps should be easiest to detect, so it may be just that the analysis is best powered to detect a difference in sweep initiation, independent of possible interference dynamics, in that regime.

      We thank the reviewer for stating these important alternative explanations, which needed more attention in our manuscript. In our response to main point 2 above, we explain that higher statistical power in low recombination regions alone is unlikely to explain our results, because the sweep deficit is not restricted to low recombination regions and also requires the presence of a higher number of disease variants. We also describe in our response to main point 2 how our new MK-test results on long-term adaptation make it very unlikely that mendelian disease genes experience constitutively less adaptation. We want to thank the reviewer again for pointing out this issue with our manuscript, since it was indeed an important missing piece.

    1. Author Response

      Reviewer #2 (Public Review):

      (1) Much of the cited literature that is used to make the case for their hypothesis is very old and actually refers to active HIV infection and patient studies prior to ART. Also, the literature they cite regarding the role of H2S as an antimicrobial agent seems to be limited to tuberculosis infection.

      We have revised the cited literature and included more relevant post-ART references. The antimicrobial role of H2S has recently been comprehensively examined in the context of tuberculosis. Given the close association of TB with HIV, we think our study is timely and essential. However, we would like to point out that references showing the effect of H2S on infections caused by respiratory viruses are included in the manuscript (7-9). Further, recent findings showing the influence of H2S in the context of SARS-CoV-2 infection are also included in the revised manuscript.

      (2) The choice of the latently infected model cell lines is rather unfortunate. There are much better defined models out there these days than J1.1 or U1 cells, such as the J-LAT cells from the Verdin lab or the various reporter cell lines generated by Levy and co-workers. In particular, U1 cells should not be considered latently infected, as the virus has a defect in the Tat/TAR axis and is mostly just transcriptionally attenuated. It is unclear why the authors only use J-LAT cells for one of the last experiments.

      As suggested by the reviewer, we have generated new data using J-LAT cells in the revised manuscript. First, we confirmed that PMA-mediated HIV-1 reactivation in J-LAT cells is associated with the down-regulation of cbs, cth, and mpst transcripts (Figure 1-figure supplement 1C-D in the revised manuscript). Additionally, we have performed several other mechanistic experiments in J-LAT cells to validate the data generated in U1 (see the response to #3 below).

      (3) It is further unclear why the authors perform most of the experiments using U1 cells, which are considered promonocytic, but in the end seek to demonstrate the influence of H2S on latent HIV-1 infection in CD4 T cells. Performing all experiments in J1.1 or better J-LAT cells would have seemed more intuitive.

      The choice of U1 was based on our earlier studies showing that U1 cells uniformly recapitulate the association of redox-based mechanisms and mitochondrial bioenergetics with HIV-latency and reactivation (10-12). We have validated key findings of U1 cells in J1.1 and J-Lat cell lines. We genetically and chemically silenced the expression of CTH in J-Lat cells and examined the effect on HIV-1 reactivation. Consistent with U1 and J1.1, genetic silencing of CTH using CTH-specific shRNA (shCTH) reactivated HIV-1 in J-Lat (Figure 2-figure supplement 1F-G in the revised manuscript). Supporting this, pre-treatment of J-Lat with non-toxic concentrations of a well-established CTH inhibitor, propargylglycine (PAG) further stimulated PMA-induced HIV-1 reactivation (Figure 2-figure supplement 1H-I in the revised manuscript). Altogether, using various cell line models of HIV-1 latency, we confirmed that endogenous H2S biogenesis counteracts HIV-1 reactivation.

      (4) The authors suggest that H2S production would control latent HIV-1 infection and reactivation. Regarding the idea that CBS, CTH or possibly MPST would control latent infection as a function of their ability to produce H2S from different sources, there are several questions. First, if H2S is the primary factor, why would the presence of e.g. MPST not compensate for the reduction of CTH? Second, why would J1.1 and U1 cells both host latent HIV-1 infection events, however, their CBS/CTH/MPST composition is completely different? Third, natural variations in CTH expression caused by culture over time are larger than variations caused by PMA activation.

      These questions are important and complex. CBS, CTH, and MPST produce H2S in the sulfur network. CBS and CTH reside in the cytoplasm, whereas MPST is mainly involved in cysteine catabolism and localizes to mitochondria. The lack of compensation of CTH by MPST could be due to the compartmentalization of their activities. Furthermore, CTH and CBS activities are regulated by diverse metabolites, including heme, S-adenosyl methionine (SAM), and nitric oxide/carbon monoxide (NO/CO). In contrast, MPST activity responds to cysteine availability. How substrate/cofactor availability and enzyme choices are regulated in the cellular milieu of J1.1 and U1 is an interesting question for future experimentation.

      Moreover, the tissue-specific expression/activity of CBS and CTH dictates their relative contributions in H2S biogenesis and cellular physiology (13). Some of these factors are likely responsible for differential expression of CBS, CTH, and MPST in J1.1 and U1 cells. Regardless of these concerns, viral reactivation uniformly reduces the expression of CTH in U1, J1.1, and J-Lat. While we cannot completely rule out natural variations in CTH expression over prolonged culturing, in our experimental setup CTH remained stably expressed and consistently showed down-regulation upon PMA treatment as compared to untreated conditions.

      (5) Also, the statement that H2S production as exerted per loss of CTH would control reactivation is not supported by the kinetic data. In latently HIV-1 infected T cell lines or monocytic cell lines, PMA-mediated HIV-1 reactivation at the protein level is usually almost complete after 24 hours, but at this time point the difference between e.g. CTH levels only begins to appear in U1 cells. The data for J1.1. are even less convincing.

      We have performed the kinetics of p24 production and CTH in U1 cells. We showed that the levels of p24 gradually increased from 6 h and continued to increase until the last time point, i.e., 36 h post-PMA treatment (Fig. 2D in the revised manuscript). The p24 ELISA detected similar kinetics of p24 increase in the cell supernatant (Fig. 2E in the revised manuscript). The CTH levels show a reduction at 24 h and 36 h. Based on these data, we report that HIV-1 reactivation is associated with diminished biogenesis of endogenous H2S. We have not made any claims that depletion of CTH precedes HIV reactivation. However, our CTH knockdown data clearly showed that diminished expression of CTH reactivates HIV-1 in the absence of PMA, which is consistent with our hypothesis that H2S production is likely a critical host component for maintaining viral latency.

      (6) Figure 2F. PMA is known to induce an oxidative stress response; however, in the experiments the data suggest that PMA results in a downregulated oxidative stress response. Maybe the authors could explain this discrepancy with the literature. In fact, both shRNA transductions, scr and CTH-specific, seem to result in a lower PMA response.

      In our experiment, PMA treatment for 24 h results in down-regulation of oxidative stress genes. However, the effect of PMA on oxidative stress-responsive genes is time-dependent. In our earlier publication, we showed that 12 h of PMA treatment induces oxidative stress-responsive genes in U1 cells (12), whereas at 24 h, the expression of these genes is down-regulated (10). Genetic silencing of CTH resulted in elevated mitochondrial ROS and GSH imbalance, which is in line with a further decrease in the expression of oxidative stress-responsive genes as compared to PMA alone. As a consequence, PMA treatment of U1-shCTH induced HIV-1 reactivation that exceeds that stimulated by PMA or shCTH alone.

      (7) Given that the authors in subsequent experiments use GYY4137, which is supposed to mimic the increased release of H2S, the authors should have definitely included experiments in which they would overexpress CTH, e.g. by retroviral transduction. Specifically in U1 cells, which seemingly do not express CBS, overexpression of CBS should also result in a suppressed phenotype.

      We have explored the role of elevated H2S levels using GYY4137. Treatment with GYY4137 suppressed HIV reactivation in multiple cell lines and in primary CD4+ T cells. As suggested by the reviewer, overexpression of CTH could be another strategy to validate these findings. However, since the transsulfuration pathway and the active methyl cycle are interconnected and share metabolic intermediates (e.g., homocysteine), overexpression of CTH could disturb this balance and may lead to metabolic paralysis. Owing to these potential limitations, we used a slow-releasing H2S donor (GYY4137) to chemically complement CTH deficiency during HIV reactivation. We thank the reviewer for this comment.

      (8) Figure 4F: The authors need to explain how they can measure a 4-fold gag RNA expression change in untreated cells. Also, according to Figure 4A, 300 µM GYY produces much less H2S than 5mM, yet the suppressive effect of 300 µM GYY is much higher?

      The four-fold expression in untreated cells is likely due to leaky control of viral transcription in J1.1 cells (14-16). However, to avoid confusion, we have replotted the results by normalizing the data generated upon PMA-mediated HIV reactivation to the PMA-untreated cells in the revised manuscript (Figure 4F in the revised manuscript). The suppressive effect of GYY4137 at the lower concentration is intriguing but consistent with the findings that high and low concentrations of H2S have profound and distinct effects on cellular physiology (3,17). One possibility is that a high concentration of H2S induces the mitochondrial sulfide oxidation pathway to avert toxicity. This might modulate mitochondrial activity and ROS, resulting in suppression of the GYY4137 effect. Consistent with this, higher concentrations of H2S have been shown to cause pro-oxidant effects, DNA damage, and genotoxicity (3,18). We have discussed these possibilities in the revised manuscript.

      (9) Initially, the authors argue "that the depletion of CTH could contribute to redox imbalance and mitochondrial dysfunction to promote HIV-1 reactivation"(p. 9). Less CTH would suggest less produced H2S. However, later on in the manuscript they demonstrate that addition of a H2S source (GYY4137) results in the suppression of HIV-1 replication and supposedly HIV-1 reactivation. This is somewhat confusing.

      We show that depletion of endogenous H2S by diminished expression of CTH (U1-shCTH) resulted in higher mitochondrial ROS and GSH/GSSG imbalance. Both of these alterations are known to reactivate HIV-1 and promote replication (10,11,19). The addition of GYY4137 chemically compensated for the diminished expression of CTH, and prevented HIV-1 reactivation in U1-shCTH. These events are expected to suppress HIV-1 replication and reactivation. We have made this distinction clear in the revised manuscript.

      (10) CTH, or for that matter CBS or MPST do not only produce H2S, however, they also are part of other metabolic pathways. It would have been interesting and important to study how these metabolic pathways were affected by the genetic manipulations and also how the increased presence of H2S (GYY4137) would affect the metabolic activity of these enzymes or their expression.

      We fully agree with the reviewer. In fact, our NanoString data show that upon CTH knockdown (U1-shCTH), MPST levels were down-regulated and CBS remained undetectable (Fig. 2F in the revised manuscript). Additionally, GYY4137 treatment induced the expression of CTH but not MPST upon PMA addition (Fig. 5A in the revised manuscript). We have incorporated these findings in the revised manuscript. Given that CBS and CTH catalyze at least eight H2S-generating steps and two cysteine-producing reactions, the modulation of CTH by HIV is likely to have a widespread influence on the transsulfuration pathway and active methyl cycle intermediates. Our future goal is to generate a comprehensive understanding of sulfur metabolism underlying HIV latency and reactivation. These experiments require multiple biochemical and genetic technologies with appropriate controls. We hope the reviewer agrees that they should be part of a future investigation. We thank the reviewer for this comment.

      (11) H2S has been reported to cause NFkB inhibition by sulfhydration of p65; as such, the findings here are not particularly novel or surprising. Also, H2S induced sulfhydration is rather not targeted to a specific protein, let alone a HIV protein, making this approach a very unlikely alternative to current ART forms.

      We believe that NF-kB inhibition is not the only mechanism by which H2S exerts its influence on HIV latency. Recent studies point towards the importance of the Nrf2-Keap1 axis in sustaining HIV latency (20). Our data suggest an important role for Nrf2-Keap1 signaling in mediating the influence of H2S on HIV latency. Additionally, recruitment of the epigenetic silencer YY1 is also affected by H2S. Interestingly, YY1 activity is modulated by redox signaling (21), suggesting H2S could be an important regulator of YY1 activity in HIV-infected cells. So far, we have no evidence for viral proteins targeted by H2S; however, experiments to examine global S-persulfidation of host and HIV proteins are ongoing in the laboratory to fill this knowledge gap. Lastly, our findings raise the possibility of exploring H2S donors alongside current ART (not as an alternative to ART) for reducing virus reactivation. We have toned down the clinical relevance of our findings.

      (12) The description of the primary T cell model used to generate the data in Figure 6 is slightly misleading. Also, the idea of this model was originally to demonstrate that "block and lock" by didehydro-cortistatin is possible. In this application, the authors did not investigate whether GYY4137 would actually induce a HIV "block and lock" over an extended period of time.

      As suggested by the reviewer, we have cited the didehydro-cortistatin studies as the basis of our strategy. Our idea was to adapt the primary T cell model to begin understanding the role of H2S in blocking HIV rebound. Our results indicate the future possibility of investigating GYY4137 to lock HIV in deep latency for an extended period of time. However, comprehensive investigation would require long-term experiments and samples from multiple HIV subjects. In the current pandemic times with overburdened Indian clinical settings, we cannot plan these experiments. However, we hope our data form a solid foundation for HIV researchers to perform extended “block and lock” studies using H2S donors.

      (13) However, the authors never provide evidence that endogenous H2S is altered in latently HIV-1 infected cells (which may actually be an impossible task). By the end of the manuscript, the authors have not provided clear evidence that the effects of e.g. CTH deletion would be mediated by the production of H2S, and not by another function of the enzyme. Similarly, the inability of stimuli to trigger efficient HIV-1 reactivation following the provision of unnaturally high levels of H2S is not surprising given reports on the effect of GYY4137 as anti-inflammatory agent and suppressor NF-kB activation. Unless the authors were to demonstrate a true "block and lock" effect by GYY4137 the data will likely have limited impact on the HIV cure field.

      It is difficult to measure H2S levels in latently infected primary cells due to the assay's sensitivity and the insufficient number of cells latently infected with HIV-1. However, in the revised manuscript we have clearly shown that cysteine levels are not affected by CTH depletion and that cysteine deprivation does not reactivate HIV-1. These results indicate that the effects of CTH depletion are likely mediated by H2S. This is consistent with our data showing that GYY4137 specifically complements CTH deficiency and blocks HIV-1 reactivation in U1-shCTH. Further, we carried out an in-depth investigation to show that the effect of GYY4137 is not due to impaired activation of CD4+ T cells.

      Lastly, since CTH catalyzes multiple reactions during H2S production, we cannot rule out the effect of other metabolites in this process. However, we think that this is outside the scope of the present study. Our study focuses on understanding how H2S modulates redox balance, mitochondrial bioenergetics, and gene expression in the context of HIV latency. This understanding is likely to positively impact future studies exploring the role of H2S in an HIV cure.

  4. Local file
    1. Unlike dictation, this technique is ideally suited for individual practice

      This means this activity can be done at home.

      It's important to make sure that students do it in the right way, instead of just copying without delay.


    1. Ethereal is a fake SMTP service, mostly aimed at Nodemailer and EmailEngine users (but not limited to). It's a completely free anti-transactional email service where messages never get delivered. Instead, you can generate a vanity email account right from Nodemailer, send an email using that account just as you would with any other SMTP provider and finally preview the sent message here as no emails are actually delivered. So far Ethereal has caught 46 654 emails sent using Ethereal testing accounts.

    1. I mean, I’m quite sure I still want to become an auror, but how will reality ever measure up to my expectations now? When I’ve imagined my future with the DMLE, I’ve always envisioned us doing it together, him and me, come rain or shine. Now what? What if I’m partnered with a jerk, or a total bore? Or even worse, a fawning fan… But it’s not like I have that many options, do I? I mean, what else is there for me to do? Fighting dark wixen is the only thing I’m good for, the only career I’ve ever considered. I don’t know, I guess I just need some time to adjust to the idea of doing it without him.

      just… yeah. same.

    1. For a Luhmannian Zettelkasten (Antinet), and for its output, we can turn to Luhmann's books. Also, there's my writing pieces from my book (which I've shared here and there). Everything I've put out started as notes in my Antinet. I think a lot of people in this community are still in the early stages. Until very recently with the introduction of my YouTube videos, there weren't any good resources for building an analog Zettelkasten. Right now people are in the incipient stages of developing knowledge with it. I think it will take some time (another 8-12 months) before people can provide links to their output (their books). Heck even myself, I can't provide a link to the Antinet Book yet because it's still being edited. The draft was finished around May. Soon I think there will be less hand-waving and more examples of output (books/dissertations) using the Antinet. You're spot on in your main point: output is the goal. The Antinet Zettelkasten is the airplane, the destination is the output. Apart from this, this community has some fantastic practitioners. Each person seems to be applying the fundamental component and then innovating on top of that in their own way.

      Scott, I'm not looking for outputs themselves (there are many of these floating about, though they're infrequently seen or talked about in our spaces), but more the unseen work between having a deck of cards and how one pulls them out, potentially orders them around, and physically manufactures the text itself. I'm looking for the (likely) droll videos of the enthusiastic zettelmacher(in) crawling around on the floor moving cards about to actually form the content. Or photos or video of their living room covered with several hundred cards ordering them into the form of the ultimate output which they've already written down, but just need to put into a reasonable logical linear form. What do these look like in digital and analog form?

    1. Well, this was a true early morning treat! You reeeeally botched that one. Like 180 degrees misinterpreted it. That thread is about how Luhmann developed a personal approach that worked for him (as we all do and should), and that there is no one way to work/do a zettelkasten. Ie. We all must (and inevitably will) interpret Luhmann's take on zettelkasten method (and any other tools/method/etc we encounter) in light of what our needs are. What's super dope, is that my whole jam in this ZK world is about showing the thread/lineage of these techniques and helping people specifically wrestle with some of the principles and practices Luhmann employed so that in the end they can apply them in whatever way they see fit. And yet, somehow.... you actually miss that? Also, this.... (you) "We approach these methods from such a top down manner, in part, because our culture has broadly lost the thread of how these note taking practices were done historically. Instead of working with something that has always existed and been taught in our culture, and then using it to suit our needs, we're looking at it like a new shiny toy or app and then trying to modify it to make it suit our needs." ... Is this.... (me) "We're coming at [zettelkasten] top-down. We're appropriating something and trying to retrofit it in a desire to "be better." In doing so, we're trying "clean it up a bit." I'm critiquing this approach 😂 I'm saying we come at it top-down bc we see it as a reified object (which is incorrect) that is set in stone, when in fact those who present the "one true way" are actually presenting a "cleaned up version" of Luhmann's very personal approach and calling it "official." Again, I'm critiquing that! I am, by design and punk ethos, kinda against "official." Silly, dude. The whole thread is about not looking at it as a "shiny new toy" and seeing it as a more fluid aspect of note-taking and personal practice.
    It's about recognizing that the way to recreate Luhmann is to be flexible, interpret these methods for yourself. Why? Bc that's exactly what Luhmann did. "Let the principles and practices guide your zettelkasten work. Throw them in a box with your defined workflow issues. Let them hash it out. Shake the box and let them tell you the "kind" of zk you should be working with." (thread the day before the above mentioned) Also, and you're gonna love this.... Here's you above.... "People have been using zettelkasten, commonplace books, florilegium, and other similar methods for centuries, and no one version is the "correct" one." And here's me.... "The most well-known slip-boxes in the world have been employed by writers in service of their writing. Variations of the system date back to the 17th c., [3] and modern writers such as, Umberto Eco, Arno Schmidt, and Hans Blumenberg are all known for employing some version of the slip-box to capture, collect, organize, and transform notes into published work. Of course, today, the most famous zettelkasten is the one used...." Sound familiar? It's me citing you, ya dum dum 😂 Footnote numero tres.... https://writing.bobdoto.computer/zettelkasten-linking-your-thinking-and-nick-milos-search-for-ground/ Such a funny thing to see this fine Friday morning! ☀

      Sadly I think we're talking past each other somehow; I broadly agree with all of your original thread. Perhaps there's also some context collapse amidst our conversations across multiple platforms which doesn't help.

      Maybe my error was in placing my comment on your original thread rather than as a sub-branch under one of the top several comments? I didn't want to target anyone in particular, as the "invented by Luhmann" myth is incredibly widespread and is unlikely ever to go away. It's obvious from some of the responses I've seen to your thread here in r/antinet that folks without the explicit context of the history default to the misconception that Luhmann invented it. This misconception tends to reinforce the idea that there's "one true way" (the often canonically presented "perfect" Luhmann zettelkasten, rather than the messier method that he obviously practiced in reality) when, instead, there are lots of methods, many of which share some general principles or building blocks, but which can have dramatically different uses and outcomes. My hope in highlighting the history was specifically to give your point more power, not to take the opposite stance. Not having direct evidence to the contrary, you'll notice I hedged my statement with the word "seems" in the opening sentence. I apologize that I apparently wasn't more clear.

      I love your comparison of LYT and zettelkasten by the way. It's reminiscent of the sort of comparison I'm hoping to bring forth in an upcoming review of Tiago Forte's recent book. His method, ostensibly a folder-based digital commonplace book similar to Milo's LYT, can be useful, but he doesn't seem to have the broader experience of history or the various use cases to be able to advise a general audience which method(s) they may want to try or for which ends. I worry that while he's got a useful method for potentially many people, too many may see it and his platform as a recipe they need to follow rather than having a set of choices for various outcomes they may wish to have. Too many "thought leaders" are trying to "own" portions of the space rather than presenting choices or comparisons the way you have. Elizabeth Butler is one of the few others I've seen taking a broader approach. A lot of these explorations also mean there are multiple different words to describe each system's functionality, which I think only serves to muddy things up for potential users rather than make them clearer. (And doing this across multiple languages across time is even more confusing: is it zettelkasten, card index, or fichier boîte? Already the idea of zettelkasten (in English speaking areas) has taken on the semantic meaning "Luhmann's specific method of keeping a zettelkasten" rather than just a box with slips.)

    1. Henter had been driving on the right side of the road, just as he did back home. Instinctively, he swerved right. But the other driver, faithful to his own British instincts, swerved left

      It's so clear that this inconsistency is a public health hazard and incurs significant mental load - no different from the imperial system of measurement. Why is this okay?

    1. Author Response

      Reviewer 1

      Strengths:

      This manuscript combines experimental, exploratory, and observational methods to investigate the big questions in the innovation literature: why do some animals innovate over others, and how does information about innovations spread? By combining a variety of methods, the manuscript tackles these questions in a number of ways, and finds support for previous work showing that animals can learn about foods via social olfactory inspection (i.e., muzzle-to-muzzle contact), and also presents data intended to investigate the role of dispersing animals in innovation and information spread.

      Using data from a previously-published experiment, the manuscript illustrates how investigators can address numerous interesting questions while limiting the disturbances to wild animals. The manuscript's attempt at using exploratory analysis is also exciting, as exploratory analyses provide a useful tool for behavior research; indeed, Tinbergen insisted that behavior must first be described.

      Weaknesses:

      The manuscript's introduction is a bit unclear as to how the fact that dispersing males may be an important source of information ties to innovations in response to disruptions due to climate change, humans, or new predators, if at all. An introduction regarding the role of dispersed animals in introducing novel behaviors and social transmission would better prepare readers for the questions presented in the manuscript. As it stands now, the manuscript only provides one sentence discussing the theoretical relevance of investigating the role of dispersing animals in innovations.

      We have added some information about this to the introduction (lines 66 – 69 and 121-123) and maintain our discussion of it in the discussion.

      Additionally, while the manuscript attempts to use exploratory analysis, it does not provide enough theoretical background as to why certain questions were asked while the data were explored. While the discussion provides some background as to the role of dispersing males in innovation, the introduction provides little background, and thus does not properly frame the issue. It is unclear how dispersing males became of interest and why readers should be interested in them. As the manuscript reads now, it may be that dispersing males became interesting only as a result of the exploratory analysis, except that the predictions explicitly mention dispersing males. Thus, the manuscript at present makes it difficult to know whether the questions surrounding immigrant males resulted from the exploratory analysis or were questions the analyses were intended to answer from the beginning. If this question only came out after first reviewing the results, then this needs to be made clear in the introduction. I see no issue with reporting observations that were the result of investigations into earlier results, but they need to be reported in a way that can be replicated in future research: I need to know the decision process that took place during the data exploration.

      We hope this is clearer from our new research aims (lines 125-173)

      The manuscript never clearly defines what counts as an immigrant male; presumably, in this species, all adult males in the group should be immigrants, as females are the philopatric sex. Sometimes, the manuscript uses "recently" to modify immigrant males, but doesn't define exactly what counts as recent, except to say that the males that innovated were in their respective groups for fewer than 3 months, but never explains why three months should be an important distinction in adult male tenure.

      We realise that the way we wrote about this previously was unclear and perhaps misleading. We noticed that the males that innovated had been in their groups for less than three months. We do not know whether this is necessary for them to innovate. We also added to the discussion a description of the male in AK19 who had been in the group for four months and did not innovate, as he had many other traits that we would expect to exclude him from the criteria for innovation (e.g., he was very old, post-prime, and inactive, and died within months of the experiment).

      Due to the above weaknesses, the provided predictions are a bit murky. It is not clear how variation between groups in accordance with who innovated, or initiated eating a novel food, or demographics is related to the central issue. The manuscript does contribute to the literature by looking at changing rates of muzzle contact over exposure to a novel food source, and provides a good extension of previous findings; that, if muzzle contacts help animals learn about new foods, then rates of muzzle contacts involving novel foods should decrease as animals become familiar with the food. However, this point isn't explicit in the manuscript.

      This is now addressed in the new aims paragraph (lines 125-173)

      Finally, it is also unclear as to why changing rates of muzzle contact AND whether certain individual level variables like knowledge, sex, age, and/or rank might influence muzzle contacts during opportunities to innovate.

      We are not sure exactly what the reviewer means here, but hope that the substantial revisions we have made now address their concern.

      As for the methods, the manuscript doesn't provide enough details as to why certain decisions were made. For example, no reason is given as to why only the first four sessions after an animal ate were considered, why the first three months of tenure (but not four, as seen in one group that didn't innovate) was considered to be a critical time during which immigrant males may innovate, why (including the theoretical reasons) the structure of models for one analysis was changed (dropping one variable, adding interactions), or even how the beginning and end of a trial was decided, despite reporting that durations varied widely, from 5 minutes to two hours.

      Please see above regarding the male with a four-month tenure, and the top of this document for a description of our updated models.

      The discussion contains results that are never presented elsewhere in the manuscript (2a: individual variation in uptake of a novel food according to who ate first).

      It was just an error in the sub-title in the discussion – this is now amended. But all the other corresponding details were already there, in the list of research aims in the introduction and in the results as well.

      Finally, the largest issue with the manuscript is that its results are not as convincing as the conclusions made. An issue with all the analyses is that grouping variables appear in some analyses but not others, despite the fact that all of the analyses contain multiple groups (necessitating group as a grouping variable) and multiple observations of the same individuals (i.e., immigrant males tested in multiple groups, necessitating animal identity as a random effect), and that individual exposure to the experiment is not accounted for when considering whether animals ate the food in the allotted period (an important consideration given the massive differences in trial times), making these results difficult to interpret in their current forms. As for the results regarding muzzle contact, the analyses have a number of issues that make it difficult to determine if the claims are supported. These issues include not explaining why rank calculated a year before the experiments took place was valid, whether rank was calculated among all group members or within age and sex classes, how rank was normalized, and not conducting any kind of formal model comparisons before deciding on the best model.

      Mostly addressed at the top of this document. Regarding rank calculations: rank was not calculated a year before the experiments; it was calculated using a year's worth of data up to the beginning of the experiments, and ranks were calculated among all group members. We have made this clearer in the methods. We also explained our method of normalisation, and noted that it was an error to include non-normalised rank in one of the models; this has now been rectified.

      As for the results regarding immigrant males and innovation, little is done to help the fact that these results are from very few observations and no direct analyses. It is possible that something that occurs relatively often but in small sample sizes, like dispersing animals, could have immense power in influencing foraging traditions, and observation is a necessary step in understanding behavior. However, the manuscript doesn't consider any alternative hypotheses as to why it found what it found. No other possible difference between the groups was considered (for example, the groups that rapidly innovated appear to be quite a bit smaller than the groups that did not), making the claim that immigrant males were what allowed groups to innovate unconvincing. This is particularly true given that some groups in this study population have experimental histories (though this goes unmentioned in the current manuscript), which likely influenced neophobia, especially given work by the same research group showing that these animals are more curious compared to their unhabituated counterparts.

      We have added more discussion of alternative hypotheses to the discussion (line numbers mentioned above).

      Regarding the comment about rapid innovation in smaller groups, we are not sure what the reviewer means here: all groups except BD were of similar size. The second largest group, NH, had one of the quickest innovations, and a smaller group (KB) innovated only at the third exposure. Unless the reviewer instead refers to the spread of the innovation here? This is also not quite what we see in the data: BD is the largest group and one of the fastest to spread, and KB is the smallest group and the slowest to spread. Regarding the groups' experimental histories, all five studied groups have previously been used in field experiments. The group (LT) with the least experimental history had the greatest proportion of individuals eating the novel food at the first exposure and over the four exposures (see Fig. 2), while one of the groups with the most experimental history (NH) had a smaller proportion of individuals eating the food across the experiment. This is discussed in the discussion (lines 370-380).

      Reviewer 2

      I have separated my issues with the manuscript into three sub-headings (Conceptual Clarity, Observational Detail and Analysis) below.

      1) Conceptual clarity

      There are a number of areas where it would greatly benefit the manuscript if the authors were to revisit the text and be more specific in their intentions. At present, the research questions are not always well-defined, making it difficult to determine what the data is intended to communicate. I am confident all of these issues could be fixed with relatively minor changes to the manuscript.

      For example, Line 104: Question 1 is not really a question, the authors only state that they will "investigate innovation and extraction of eating the food", which could mean almost anything.

      We re-wrote the research questions paragraph and the results with this advice in mind and hope they are clearer now. We have kept the innovation part purely descriptive and hope it is less problematic now.

      Question 2a (line 98) is also very vague in its wording, and I'm left unclear as to what the authors were really interested in or why. This is not helped by line 104, which refuses to make predictions about this research question because it is "exploratory". Empirical predictions are not simply placing a bet on what we think the results of the study will be, but rather laying out how the results could be for the benefit of the reader. For instance, if testing the effects of 10 different teaching methods on language acquisition rate: even if we have no a priori idea of which method will be most effective, we can nevertheless generate competing hypotheses and describe their corresponding predictions. This is a helpful way to justify and set expectations for the specific parameters that will be examined by the methods of the study. In fact, in the current paper, the authors had some very clear a priori expectations going into this study that immigrant males would be vectors of behavioural transmission (clear, that is, from the rest of the introduction and the parameters used in their analysis, which were not chosen at random).

      We have now updated the whole research aims (lines 125-173).

      The multiple references to 'long-lived' species in the abstract (line 16) and introduction (lines 39, 56) are a bit confusing given the focus of this study. Although such categorisations are arbitrary by nature (a vervet is certainly long-lived compared to a dragonfly), I would not typically put vervet monkeys (or marmosets, line 62) in the same category as apes (references 8 and 9) or humans (line 62) in this regard.

      When we use "long-lived" in the introduction, we explain that we mean animals with slow generational turnover, for whom genetic adaptation is relatively slow: too slow to adapt to very rapid environmental change. Within the distinctions the reviewer makes here, we feel that vervets and marmosets are much more similar to apes than to dragonflies etc. in this respect, and we think the comparisons we make are valid in this context (though we do agree that for other reasons they would not be appropriate). We have modified the sentence in the introduction (lines 40-42) and hope this is clearer now. The study in reference 9 is about crop-raiding, which is something vervets can learn to do within one generation too. In addition, reference 8 is cited because it provides one of the earlier and long-standing definitions of innovation, which we use here; we are not comparing vervets to apes directly, but we do not think a different definition of innovation is required.

      This contributes a little towards the lack of overall conceptual focus for the manuscript: beginning in this fashion suggests the authors are building a "comparative evolutionary origins" story, hinting perhaps at the phylogenetic relevance of the work to understanding human behaviour, but the final paragraph of the study contextualises the findings only in terms of their relevance to feeding ecology and conservation efforts. I would recommend that the authors think carefully about their intended audience and tailor the text accordingly. This is not to say that readers interested in human evolution will not be interested in conservation efforts, but rather that each of these aspects should be represented in each stage of the manuscript (otherwise - conservationists may not read far into the Introduction, and cultural evolution fans will be left adrift in the Conclusion).

      We agree that the line running through the whole paper needed to be clearer and have tried to improve this.

      2) Observational detail

      There are a number of areas of the manuscript which I found to be lacking in sufficient detail to accurately determine what occurred in these experimental sessions, making the data difficult to interpret overall. All of this additional information ought to be readily available from the methods used (the experiments were observed by 3-5 researchers with video cameras (line 341)) and is all of direct relevance to the research questions set out by the authors.

      We added more details about the experiment in the method section.

      While I appreciate that it will take quite a bit of work to extract this information, I am certain that it would greatly improve the robustness and explanatory power of this study to do so.

      The data on who was first to innovate/demonstrate successful extraction of the food in each group (Question 1) and subsequent uptake (Question 2), as well as the actual mechanism by which that uptake occurred (the authors strongly imply social learning in their Discussion, but this is never directly examined), are difficult to interpret based on the information presented. Some key gaps in the story were:

      We did not intend to claim that muzzle contact was the specific mechanism by which individuals learned to extract and eat peanuts – we rather use this experiment to evaluate the function of muzzle contact in the presence of a novel food.

      We did not record observation networks in all groups during experiments and cannot obtain accurate ones from all our videos – we hope it is clearer in our text now. Our group’s previous study (Canteloup et al., 2021) already shows social transmission of the opening techniques using data of two of our groups (NH and KB).

      • Which/how many individuals encountered the food and in what order? I.e., were migrants/innovators simply the first to notice the food?

      No, and we have now added some information about other individuals approaching the box and inspecting the peanuts before innovation took place.

      • Did any individuals try and fail to extract the food before an "innovator" successfully demonstrated?
      • How many tried and failed to extract the nuts before and after observing effective demonstrators?

      We have added the number of individuals that inspected the peanuts (visually and with contact).

      • Were individuals who observed others interact with the food more likely to approach and/or extract it themselves?
      • Did group-members use the same methods of extraction as their 'innovators'?

      Yes – this is the topic of Canteloup et al. (2021) – and these data are not presented again here. That study was conducted on two of the groups presented here (KB and NH), with up to 10 exposures in each of those groups, and presents a fine-grained analysis of the peanut-opening techniques used by the monkeys. We hope this is clearer now in the text where we refer to this paper.

      • How many tried and succeeded without having directly observed another individual do so (i.e. 'reinvention' as per Tennie et al.)?

      For this, and the above points: We did not record an observation network for the groups added in this study and are not able to answer this – it is not the focus of this study. For this reason, we do not make claims along these lines in the present study, and are cautious with our social-learning-related language. Whilst we examine the role of muzzle contact in acquiring information about a novel food, we do not expect this behaviour to be a necessary prerequisite for being able to extract and eat this food – indeed, many individuals who learned to eat did not perform muzzle contacts. This aspect of the study is about using this novel food situation to explore whether muzzle contact serves information acquisition – which our evidence suggests it does.

      Moreover, the processing of this food is not complex and is similar to natural foods in their environment, and we do expect individuals to be capable of reinventing it easily (this point, relating to Tennie’s hypothesis, is discussed in the Canteloup et al. 2021 paper) – but the point here is that their natural tendency is to be neophobic towards unknown food, and therefore they do not readily eat it until they see a conspecific doing so, after which they do. We also used this opportunity, though with a very small sample size, to investigate which individuals would overcome that neophobia and be the first to eat successfully.

      The connective tissue between the research questions set out by the authors is clearly social learning. In short: the thesis is that Migrants/Innovators bring a novel behaviour to the group, then there is 'uptake' (social learning), which may be influenced by demographic factors and muzzle-contact (biases + mechanisms). Given this focus (e.g. lines 224-264 of the Discussion), I would expect at least some of the details above to be addressed in order to provide robust support for these claims.

      See above – the reason we talk about ‘uptake’ rather than social learning is that we really see this as a case of social disinhibition of neophobia, rather than more detailed social learning such as copying or imitation, as it would be in a tool-use setting, for example (though in the Canteloup et al. 2021 paper, evidence is found that the specific methods to open peanuts are socially transmitted).

      Question 2a (Lines 136-146): This data is hard to interpret without knowing how much of the group was present and visible during these exposures.

      Please see response to reviewer 1 on this.

      For example: 9% uptake in NH group does not sound impressive, but if only 10% of the total group were present while the rest were elsewhere, then this is 90% of all present individuals. Meanwhile, if 100% of BD group were present and only experienced 31% uptake, then this is quite a striking difference between groups.

      Experiments were done at sunrise at monkeys’ sleeping site in AK, LT, NH and KB where most of the group was present in the area; we added more precision on this point in the Method section (lines 615-619).

      Of course, there is also an issue of how many individuals can physically engage with the novel food even if they want to - the presence of dominant individuals, steepness of hierarchy within that group, etc, will significantly influence this (and is all of interest with regards to the authors' research questions).

      We discuss this with respect to the result showing that higher-ranked individuals were more likely to extract and eat the food at the first exposure and over all four exposures.

      Muzzle-contact behaviour: The authors use their data to implicate muzzle-contact in social learning, but this seems a leap from the data presented (some more on this in the Analysis section).

      We hope our distinction between information acquisition and information use is clearer now.

      For example: - What is the role of kinship in these events?

      We did not analyse kinship here, but we see a lot of targeting towards adult males, and we do not have reliable kinship data for them. We also checked (see response to reviewer 3) the muzzle contacts initiated by knowledgeable adult females, and they are mostly towards adult males, not towards related juveniles (see new figure 4D and lines 497-500).

      • Did they occur when the juvenile had free access to the food (i.e. not likely to be chased off by a feeding adult)?

      We recorded muzzle contacts visible within 2m of the box, so individuals were not necessarily eating at the box at the time of engaging in muzzle contacts. However, the majority of muzzle contacts that we could record took place directly at the edge of the box – at the location where the food is accessed – so an individual would not likely be there if they were not able to access the food. It is possible they could be there and not eating, but they would not have been chased off, otherwise they would not have been able to engage in muzzle contacts there. However, it is not entirely clear to us what the reviewer’s point is here.

      • Did they primarily occur when adults had a mouthful of food? (i.e. could it simply be attempted pilfering/begging)

      This is not typical of this species. Very few specific individuals remove food from others’ mouths, and they do it with their hands, usually beginning by grooming their face and cheek pouches, before prising their mouth open and removing food from the victim’s cheek pouches.

      • What proportion of PRESENT (not total) individuals were naïve and knowledgeable in each group for each trial (if 90% present were knowledgeable, then it is not surprising that they would be targeted more often)?

      We agree somewhat with this statement, but given the multiple ways we show the effect of knowledge – both at the individual level and the group level (effect of exposure number i.e. overall group familiarity) – we feel we present enough evidence to establish the link between knowledge of the food and muzzle contacts. We find that the model showing the interaction between exposure number and number of monkeys eating on the overall rate of muzzle contacts actually addresses this issue, because we see that when many monkeys are eating during later exposures, when many were indeed knowledgeable, the rate of muzzle contacts is massively decreased. Moreover, if 90% of the individuals present are knowledgeable, then only 10% of the individuals present are naïve, and we show both that knowledgeable individuals are targeted, but also that naïve individuals are initiators.

      • Did these events ever lead to food-sharing (In other words, how likely are they to simply be begging events)?

      We do not observe food-sharing in vervets.

      • Did muzzle-contact quantifiably LEAD to successful extraction of the food? If the authors wish to implicate muzzle-contact in social learning, it is not sufficient to show that naïve individuals were more likely to make muzzle-contact, they must also show that naïve individuals who made more muzzle-contact were more likely to learn the target behaviour.

      We disagree here, because there is a distinction between information acquisition and information use – obtaining olfactory information about a novel resource that conspecifics are eating is not the same as learning a complex tool-use behaviour for which detailed observation of a model is required. We are not claiming that muzzle contact is THE mechanism by which the monkeys learn how to eat the food – but we do believe that the clear separation between naïve individuals initiating and knowledgeable individuals being targets, and the decrease in the rate of this behaviour as groups’ familiarity with the food increases, is good evidence that this behaviour functions to acquire information about a novel food.

      3) Analysis

      There are a number of issues with the current analysis which I strongly recommend be addressed before publication. Some of these are likely to simply require additional details inserted to the manuscript, whereas others would require more substantial changes. I begin with two general points (A & B), before addressing specific sections of the manuscript.

      A) My primary issue with each of the analyses in this manuscript is that the authors have fit complex statistical models for each of their analyses with no steps to ascertain whether these models are a good fit for the data. With a relatively small dataset and a very large number of fixed effects and interactions, there is a considerable risk of overfitting. This is likely to be especially problematic when predictor variables are likely to be intercorrelated (age, sex and rank in the case of this analysis).

      We have now checked for overfitting in our models.

      The most straightforward way to resolve this issue is to take a model-comparison approach: fitting either a) a full suite of models (including a 'null' model) with each possible permutation of fixed effects and interactions (since the authors argue their analysis is exploratory) or b) a smaller set of models which the authors find plausible based on their a priori understanding of the study system. These models could then be compared using information criteria to determine which structure provides the best out-of-sample predictive fit for the data, and the outputs of the best-fitting model interpreted. Alternatively, a model-averaging approach can be taken, where the effects of each individual predictor are averaged and weighted across all models in the set. Both of these approaches can be performed easily using the R package 'MuMIn'. There are also a number of tutorials that can be found online for understanding and carrying out these approaches.

      Please see our answer at the beginning of the document, detailing how we have updated our models.

      B) It does not seem that interobserver reliability testing was carried out on any of the data used in these analyses. This is a major oversight which should be addressed before publication (or indeed any re-analysis of the data).

      We have added this now and mention it above already.

      Line 444: Much more detail is needed here. What, precisely, was the outcome measure? Was collinearity of predictors assessed? (I would expect Age + Rank to be correlated, as well as Sex + Rank).

      This is now addressed (please see details above) – we use VIFs to assess multicollinearity of predictors in our models and find they are all satisfactory (see R code).

      Line 452. A few comments on this muzzle-contact analysis:

      The comments below are a little confusing as some seem to refer to the muzzle-contact rate model (previously line number 452), and some seem to refer to the initiator/receiver model. We have tried to figure out which comments refer to which, and answer accordingly.

      "We investigated muzzle contact behaviour in groups where large proportions of the groups started to extract and eat peanuts over the first four exposures"

      What was the criteria for "a large proportion"?

      All groups are now included in this analysis.

      The text for this muzzle-contact analysis would indicate that this model was not fit with any random effects, which would be extremely concerning. However, having checked the R code which the authors provided, I see that Individual has been fit as a random effect. This should be mentioned in the manuscript. I would also strongly recommend fitting Group (it was an RE in the previous models, oddly) and potentially exposure number as well.

      The model about muzzle contact rate never contained individual as a random effect because individuals are not relevant in this model – it is the number of muzzle contacts occurring during each exposure. However, the reviewer might refer here to the model that we forgot to provide the script for. Nonetheless, we have substantially revised this model, it now (Model 3) includes all groups, and has group as a random effect.

      Following on from this, if the model was fit with individual as a random effect, it becomes confusing that Figure 3, which represents these data, seemingly does not control for repeated measures (it contains many more datapoints than the study's actual sample size of 164 individuals). This needs to be corrected for this figure to be meaningfully interpretable.

      Figure 3 is not related to the model described in (original) line 452.

      The numbers were referring to the number of muzzle contacts, and this was written in the figure caption. However, we no longer present these details on the new figure (see Fig 4).

      Finally, would it make sense to somehow incorporate the number of individuals present for this analysis? Much like any other social or communicative behaviour, I would predict the frequency of occurrence to depend on how many opportunities (i.e. social partners) there are to engage in it.

      We have now included the number of monkeys eating in our muzzle-contact rate model (Model 3), as upon further thought we found that this was the issue leading us to want to exclude exposures and only include the groups where many monkeys were eating. We have resolved this by including all groups and not dropping exposures; instead, we include an interaction between the number of monkeys eating and exposure number. We feel this addresses our hypothesis here much more satisfactorily, and we hope these updates also address the reviewer’s concerns adequately.

      Line 460: "For BD and LT we excluded exposures 4 and 3, respectively, due to circumstances resulting in very small proportions of these groups present at these exposures"

      What was the criterion for a satisfactory proportion? Why was this chosen?

      See above – this is now addressed.

      Line 461: "We ran the same model including these outlier exposures and present these results in the supplementary material (SM3)."

      The results of this supplemental analysis should be briefly stated. Do they support the original analysis or not?

      We no longer present this in this way. We revised the model examining muzzle-contact rate substantially, and included the number of individuals eating in the model rather than excluding groups where this number was low. The results of the new model show good support for our hypothesis.

      Line 465: "Due to very low numbers of infants ever being targets of muzzle contacts, we merged the infant and juvenile age categories for this analysis."

      This strikes me as a rather large mistake. The research question being asked by the authors here is "How does age influence muzzle-contact behaviour?"

      Then, when one age group (infants) is very unlikely to be a target of muzzle-contact, the authors have erased this finding by merging them with another age category (juveniles). This really does not make sense, and seriously confounds any interpretation of either age category.

      Yes, we agree with this issue, and no longer do that. Rather, we remove the infant data from this model, which is now Model 6, because of the large amount of error they introduced into the model due to the small sample size. We show the process in the R code, and we describe our reasons in the text (lines 713-719). Since we are now only comparing within age and sex categories (see below), we do not find this decision introduces any bias.

      Lines 466-474: Why was rank removed for the second and third models? Why is Group no longer a random effect (as in the previous analysis)? The authors need to justify such steps to give the reader confidence in their approach.

      This is now addressed and discussed in descriptions of our new models.

      Furthermore - because of the way this model is designed, I do not think it can actually be used to infer that these groups are preferentially targeted, merely that adult females and adult males are LESS likely to target others than to be targeted themselves, which is a very different assertion.

      Because the specific outcome measure was not described here, this only became apparent to me after inspecting Figure 3, where outcome measure is described as "Probability of (an individual) being a target rather than initiator" - so, it can tell us that adults are more often targeted rather than initiating, but does not tell us if they are targeted more frequently than juveniles (who may get targeted very often, but initiate so often that this ratio is offset).

      We thank the reviewer for noticing this, as we had indeed chosen an inappropriate model for what we were intending to measure – this has now been addressed with two additional models (Models 4 and 5; see details at the top of this document). We nonetheless found aspects of this model to still be highly interesting, so have re-framed it to focus on them.

      Lines 467-473: "Our first simple model included individuals' knowledge of the novel food at the time of each muzzle contact (knowledgeable = previously succeeded to extract and eat peanuts; naïve = never previously succeeded to extract and eat peanuts) and age, sex and rank as fixed effects. Individual was included as a random effect. The second model was the same, but we removed rank and added interactions between: knowledge and age; and knowledge and sex. The third model was the same as the second, but we also added a three-way interaction between knowledge, age and sex."

      This is a good example of some of the issues I describe above. What is the justification for each of these model-structures? The addition and subtraction of variables and interactions seems arbitrary to the reader.

      For Model 6, we no longer include rank at all, because we had no hypothetical reason to (see lines 723-725). We now begin with the three-way interaction, and remove it only because it is not significant and the model had problems converging due to its complexity. We show this in the R script. We retain only the two separate interactions, and we do not include group as a random effect in this model due to the complexity AND because we do not think there is a theoretical requirement for it to be included here (this is explained in lines 730-735 of the manuscript; we report the results of the 3-way interaction in the supplementary material – SM3 Table S2).

      Reviewer 3

      In this study, the authors introduce a novel food that requires handling time to five vervet monkey groups, some of which had previous experience with the food. Through the natural dispersal of males in the population, they show that dispersing individuals transmit behavioral innovations between groups and are often also innovators. They also examine muzzle contact initiations and targets within the groups as a way to determine who is seeking social information on the new food source and who is the target of information seeking. The authors show that knowledgeable adults are more often the target of muzzle contacts compared to young individuals and those that are not knowledgeable.

      This is a very interesting study that provides some novel insights. The methods employed will be useful to others that are considering an experimental approach to their field research. The data set is good and analyzed appropriately and the conclusions are justified. However, there are several areas where the paper could be improved for readers in terms of its clarity.

      1) It wasn't until the Discussion that it became clear to me that the actual physiological and personality traits of dispersers were being linked with innovation. From the Title, Abstract, and Introduction, it seemed as though the focus was on dispersing males bringing their experience with a novel food to a new group to pass it on. I think it needs to be made clear much earlier in the manuscript that the authors are investigating not only the transmission of behavioural adaptation but also how the traits of dispersers might make them more likely to innovate.

      We have now addressed this above.

      2) Early in the paper on line 28, the authors state that continued initiation of muzzle contacts by adult females could have been an effort to seek social information. This is true, but another interpretation is that females were imparting or giving social information. It seems important here and elsewhere (lines 322-323) to consider and report the target of these initiations. If these were directed at more knowledgeable individuals, it supports the idea that this was social information seeking. If muzzle contacts were directed to younger or unknowledgeable individuals, it would imply a form of teaching, which is possible but perhaps unlikely, so I think the authors need to be totally clear here.

      We thank the reviewer for pointing this out. We looked into our data and now present Figure 4D, showing that almost all knowledgeable adult females’ muzzle contacts were targeted towards knowledgeable adult males, and we discuss this in the Discussion (lines 499-500).

      3) The argument made on lines 344-350 needs more fleshing out to be convincing or it should be deleted. The link between number of dispersers, social organization, and large geographic range seems a little muddled. There are many dispersing individuals in species that are not typically in large multi-male, multi-female social organizations. Indeed, in many species both sexes disperse. Think of pair living birds where both sexes disperse and geographic range can be enormous. There are also no data or references presented here to show that species in multi-male, multi-female social organizations do have larger geographic ranges than those that are not in these social organizations. It seems to me that, even if this is the case, niche is more important than social organization, for instance not being dependent on forests to constrain much of your range.

      We have removed this section.

    1. biggest insights

      It was surprising to see that stories do not need to initially fit into a clear theme as long as you are able to tie it all together in a coherent way at the end. Your intention does not need to be in your face obvious—it can slowly become apparent as the story progresses. In fact, it's more effective this way than if you just bludgeon the reader over the head with your message. I learned that essays that incorporate many stories and essays that incorporate only one can be equally effective, although I think at this stage I'm partial to essays that don't include more than, say, four or so stories.

    1. Create a new controller to override the original: app/controllers/active_storage/blobs_controller.rb

      Original comment:

      I've never seen monkey patching done quite like this.

      Usually you can't just "override" a class. You can only reopen it. You can't change its superclass. (If you needed to, you'd have to remove the old constant first.)

      Rails has already defined ActiveStorage::BlobsController!

      I believe the only reason this works:

      class ActiveStorage::BlobsController < ActiveStorage::BaseController

      is because it's reopening the existing class. We don't even need to specify the < Base class. (We can't change it, in any case.)

      They do the same thing here: - https://github.com/ackama/rails-template/pull/284/files#diff-2688f6f31a499b82cb87617d6643a0a5277dc14f35f15535fd27ef80a68da520
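      The reopening semantics described above can be sketched in plain Ruby. This is a minimal illustration with made-up class names (`BaseController` and `GemBlobsController` stand in for the real Rails constants):

      ```ruby
      # Minimal sketch of "reopening" a class in Ruby.
      # GemBlobsController stands in for ActiveStorage::BlobsController;
      # these are illustrative names, not the real Rails constants.

      class BaseController; end

      # First definition -- as the gem would define it.
      class GemBlobsController < BaseController
        def show
          "gem version"
        end
      end

      # A later `class` statement with the same name REOPENS the class:
      # the superclass is unchanged, and we may omit `< BaseController`.
      class GemBlobsController
        def show
          "app version"  # replaces the gem's method
        end

        def extra
          "added by the app"
        end
      end

      puts GemBlobsController.superclass  # => BaseController
      puts GemBlobsController.new.show    # => app version

      # Trying to change the superclass while reopening raises TypeError.
      begin
        class GemBlobsController < Object; end
      rescue TypeError => e
        puts e.message  # => superclass mismatch for class GemBlobsController
      end
      ```

      So writing `< ActiveStorage::BaseController` in the app's file is either redundant (if it matches the existing superclass) or a TypeError (if it doesn't); it never changes an already-defined class's parent.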

      Correction: I guess this doesn't actually monkey patch it. I guess it really does override the original from activestorage gem and prevent it from getting loaded. How does it do that? I'm guessing it's because activestorage relies on autoloading constants, and when the constant ActiveStorage::BlobsController is first encountered/referenced, autoloading looks in paths in a certain order, and finds the version in the app's app/controllers/active_storage/blobs_controller.rb before it ever gets a chance to look in the gem's paths for that same path/file.

      If instead of using autoloading, it had used require_relative (or even require?? but that might have still found the app-defined version earlier in the load path), then it would have loaded the controller from activestorage first, and then (possibly) loaded the one from our app, which (probably) would have reopened it, as I originally commented.
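      The load-order guess above can be demonstrated outside Rails with plain `require` and `$LOAD_PATH` (file and constant names here are invented for the sketch; Rails' autoloader resolves constants by walking its paths in a similar first-match-wins order):

      ```ruby
      # Sketch: a directory earlier in $LOAD_PATH "shadows" a same-named
      # file in a later directory -- the later copy is never evaluated.
      require "tmpdir"
      require "fileutils"

      Dir.mktmpdir do |root|
        app_dir = File.join(root, "app")
        gem_dir = File.join(root, "gem")
        FileUtils.mkdir_p(app_dir)
        FileUtils.mkdir_p(gem_dir)

        # Two files with the same feature name, defining the same constant.
        File.write(File.join(app_dir, "blobs_controller.rb"), "LOADED_FROM = 'app'")
        File.write(File.join(gem_dir, "blobs_controller.rb"), "LOADED_FROM = 'gem'")

        # App paths ahead of gem paths, mirroring a Rails app's setup.
        $LOAD_PATH.unshift(gem_dir)
        $LOAD_PATH.unshift(app_dir)

        require "blobs_controller"
        puts LOADED_FROM  # => app  (the "gem" copy never loads)
      end
      ```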

    1. Money fear #1: Paying attention will be painful

       Sometimes it feels easier to stick our head in the sand and ignore a (potential) problem. For example, most people know that it’s not financially prudent to spend all income on lifestyle expenses. They probably realise that they should be investing/saving some of their income. But to do that, they will have to admit to themselves (and maybe others) that they have been doing the wrong thing in the past. It feels less painful to ignore the issue and “get to it one day”. The problem with ignoring financial misbehaviours is that they don’t magically disappear. They compound. Just like good financial decisions compound, so do bad ones. The longer you ignore it, the worse the consequences will be.


    1. Somehow it was my fault that the police and media focused on me at Meredith’s expense. The result of this is that 14 years later, my name is the name associated with this tragic series of events I had no control over.

      It's really very sad that I've never heard her roommate's name until now, only hers; I guess the media found her involvement the most interesting and decided to take that and run with it. Her life was undoubtedly changed forever after this ordeal, and her hopes of returning to normalcy are nothing more than a dream that will never happen. How does someone come back from that? I understand she's trying to charge on regardless of the judgement, but if it were me, I'd completely change everything just to have some sense of peace.

    1. But it's not a trivial problem. I have compiled, at latest reckoning, 35,669 posts - my version of a Zettelkasten. But how to use them when writing a paper? It's not straightforward - and I find myself typically looking outside my own notes to do searches on Google and elsewhere. So how is my own Zettel useful? For me, the magic happens in the creation, not in the subsequent use. They become grist for pattern recognition. I don't find value in classifying them or categorizing them (except for historical purposes, to create a chronology of some concept over time), but by linking them intuitively to form overarching themes or concepts not actually contained in the resources themselves. But this my brain does, not my software. Then I write a paper (or an outline) based on those themes (usually at the prompt of an interview, speaking or paper invitation) and then I flesh out the paper by doing a much wider search, and not just my limited collection of resources.

      Stephen Downes describes some of his note-taking process for creation here. He doesn't actively reuse his notes (in this case blog posts, bookmarks, etc.), which number a sizeable 35,669, directly, at least not in the sort of cut-and-paste method suggested by Sönke Ahrens. Rather, he follows a sort of broad idea, outline creation, and search plan akin to that described by Cory Doctorow in 20 years a blogger

      Link to: - https://hyp.is/_XgTCm9GEeyn4Dv6eR9ypw/pluralistic.net/2021/01/13/two-decades/


      Downes suggests that the "magic happens in the creation" of his notes. He uses them as "grist for pattern recognition". He doesn't mention words like surprise or serendipity coming from his notes by linking them, though he does use them "intuitively to form overarching themes or concepts not actually contained in the resources themselves." This is closely akin to the broader ideas ensconced in inventio, Llullan Wheels, triangle thinking, ideas have sex, combinatorial creativity, serendipity (Luhmann), insight, etc. which have been described by others.


      Note that Downes indicates that his brain creates the links and he doesn't rely on his software to do this. The break is compounded by the fact that he doesn't find value in classifying or categorizing his notes.


      I appreciate that Downes uses the word "grist" to describe part of his note-taking practice. It evokes the idea of grinding up complex ideas (the grain) to sort out the portions of the whole and find simpler ideas (the flour), which one might later combine to make new ideas (bread, cake, etc.). Similar analogies might be drawn from the grain-harvesting space, including winnowing and threshing.

      One can compare this use of a grist mill analogy of thinking with the analogy of the crucible, which implies a chamber or space in which elements are brought together often with work or extreme conditions to create new products by their combination.

      Of course these also follow the older classical analogy of imitating the bees (apes).

    1. Rather, our knowledge that we're "just playing a game" works emotionally to intensify our feelings about the goal: exercise, sex, labor, etc. takes on an element of sacred seriousness through gamification, which separates it from the profane and thus increases its intensity via ambivalence.

      It seems like what you're arguing (re my earlier comment about professional poker) is that whether or not there are stakes/it's "just for fun" vs "deeply serious" doesn't actually matter at all to phenomenology? Cuz a gamified workplace is not "just a game" and mgmt I think puts real pressure/stakes on ("here are the consequences of not hitting scores..."). It's only a game in a very token sense of taking on some of the formal properties (and thereby channeling certain psychological dynamics) of games.

      I guess I'm slightly confused b/c sometimes it feels like you distinguish deadly seriousness from "just for fun," and sometimes they seem to coincide (eg in Huizinga's religion+play are same thing idea)—and then there're the levels of affect vs knowledge, and whether something "is" or "isn't" play on each of these levels.

      On which "level" (affect/knowledge) is gamified activity serious vs for fun, and what concretely would a player "somehow chang[ing] their relationship... in terms of knowledge or in terms of affect" look like w/r/t this?

      I think in general clarifying the relationships between these four possibilities (known seriousness, affective seriousness, known play, affective play) at the end of this piece, vis-a-vis these examples of dating and gamification, would help clarify a lot—I sorta expect the pieces to cohere but am left more confused by how they relate than when I started the bullet point

    2. Spinoza

      Have you noticed (I'm sure you have) it feels to me like nearly every non-analytic-inclined philosopher I read either cites Leibniz or Spinoza (but rarely both at the same time), and it's typically a metaphorical reading of their #ontological-flex frames applied to whatever the citing philosopher is talking about

      You reckon it's just baked deep into the canon, or that these guys are (almost like scifi writers, or the Bible) exceptionally good at producing resonant images, dynamics, etc?

    1. @52:20

      We know it will happen, barring some radical change in human psychology, because that is what we're living with now. Everyone is walking around with a smartphone in their pocket that not even the president of the United States could have gotten his hands on in, you know, the year 2000. It's pure science fiction. And yet now it's just a basic necessity of life [...] we reset to the new level, and again, we keep comparing ourselves to others

      There's a submarine counterargument to the overall point here (and to the last half of the sentence quoted here), and it lies in the words "necessity" and "new level". (I realize that when it was spoken, "necessity" was chosen for effect and meant as a slight exaggeration, but it's not as exaggerated as it would need to be in order to erase the force behind the counterargument.)

      In other discussions like these, you can often find people bringing up the argument that Keynes's remark about 15-hour work weeks wasn't wrong, provided that you're willing to accept the standards of living that existed at the time when Keynes was saying it. But that's not exactly true, because it doesn't really ever come down to a true choice of deciding whether you'd like to opt in or not.

      You could take the argument about smartphones and make the same one by swapping out automobiles instead. The problem is that even if you did desire to opt out of the higher standards, the difficulty lies in the fact that the society that exists around you will re-orient itself such that access to a car is a baked-in requirement—because it's taken as a given in others' own lives, it gets absorbed into their baseline of what affordances they expect to be available to people who are not them ("new level"). This continual creation of new requirements ("necessities") is the other culprit in the pair that never gets talked about in these conversations. Everyone focuses on the personal happiness and satisfaction component wrt comparison to others.

    1. It really slows down your test suite accessing the disk. So yes, in principle it slows down your tests. There is a "school of testing" where the developer should isolate the layer responsible for retrieving state and just set some state in memory and test functionality (as in the Repository pattern). The thing is, Rails is tightly coupled with the implementation logic of state retrieval at the core level and prefers a "school of testing" in which you couple logic with state retrieval to some degree. A good example of this is how models are tested in Rails. You could build an entire test suite calling `FactoryBot.build`, never ever use `FactoryBot.create`, stub methods all around, and your tests would be lightning fast (like 5s to run your entire test suite). This is highly unproductive to achieve, and I failed many times trying to achieve it because I was spending more time maintaining my tests than writing something productive for the business. Or you can take a more pragmatic route and save a database record where it is too difficult to just 'build' the factory (e.g. controller tests, association tests, etc.). Same, I would say, for saving a file to disk. Yes, you are right: you could just "not save the file to disk" and save a few milliseconds. But at the same time you will in future stumble upon scenarios where your tests are not passing because the file is not there (e.g. file processing validations). Is it really worth it? I never worked on a project where saving a file to disk would slow down tests significantly enough to be an issue (and I work for a company whose core business is related to file uploading). Especially now that we have SSD drives in every laptop/server it's blazing fast, so at best you would save 1 second for the entire test suite (given you call FactoryBot traits to set/store a file where it makes sense, not every time you build an object).
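      A minimal sketch of the build-vs-create tradeoff the comment describes. This is a toy stand-in, not FactoryBot itself: `DISK`, `User`, and the factory helpers are all hypothetical names. The point is that "build" constructs an object in memory only, while "create" also persists it to a datastore, which is where the extra test-suite time goes.

```ruby
# Toy illustration of FactoryBot's build vs. create distinction.
# DISK stands in for the database/filesystem that `create` touches.
DISK = []

User = Struct.new(:name, :persisted) do
  def save
    DISK << self          # the slow part: hitting the datastore
    self.persisted = true
    self
  end
end

def build_user(name: "alice")
  User.new(name, false)   # fast: memory only, like FactoryBot.build
end

def create_user(name: "alice")
  build_user(name: name).save  # slower: persists, like FactoryBot.create
end

u1 = build_user
u2 = create_user
```

      After this runs, `u1` never touched the datastore while `u2` did, which is the pragmatic tradeoff the comment lands on: build where you can, create where the test genuinely needs persisted state.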
    1. It makes it really hard often to reason about the impact of this kind of work, because there are no easy metrics. One of the takeaways that I take from it is that making tools easy to use, fast to use, and pleasant to use is really powerful. It’s really powerful in ways that are hard to predict until you’ve done it, and so you should just take it as axiomatic that it’s worth a little bit more time than your organization otherwise would spend investing in tool quality, because people will change how they relate to those tools.They’ll find new ways to use it. They’ll use them more often. It often leads to this productivity flywheel in somewhat non-obvious ways.

      Surprise! The point of technology is that it's supposed to make things easier. Why not make sure it's easy to make things easy while you're at it?

    2. it shows that any time that you make something easier or harder to do, either because it’s faster or slower, or just because you’ve reduced the number of steps, or you’ve made the steps more annoying, or you’ve added cognitive overhead, then people react by changing how they use your tool.
    1. a lot of the time i get a lot of questions these days

      @3:07:14:

      Blow: A lot of time I get a lot of questions these days by people who are asking, "How do you do a lot of work?", or, "How do you get started?" and all that. And very often these questions are themselves a procrastination, right? It's like, "Obviously, I'm in the state where I can't do a lot of work right now. So I need somebody to give me the answer before I can." And actually the secret is you sit down and decide to do it. That's all it is, right?

      Jaimungal: Seinfeld is like that. That's his famous advice to comics, to comedians.

      Blow: Mmm. Yeah. I mean—

      Jaimungal: Comedians always want to know what's the secret. He says, "Just work. Stop talking about it."

      Blow: Yeah. [...] Because that's an exc— it's like, "Oh, someday— I have permission to not actually do this work until someday somebody bestows upon me the magical[...] baton[...]"

    2. i mean i have a whole speech about that

      @03:06:54:

      Blow: I mean I have a whole speech about that that I can link you to as well.

      Should that be necessary? "Links" (URLs) are just a mechanical way to follow a citation to the source. So to "link you" to it is as easy as giving it a name and then saying that name. In this case, the names are URLs. Naming things is said to be hard, but it's (probably) not as hard as advertised. It turns out that the hard part is getting people to actually do it.

    1. The thing that bugs me when I listen to the Muse podcast—it's something that's present here along with the episode with gklitt—is that there's this overarching suggestion that the solution to this is elusive or that there are platform constraints (especially re the Web) that keep any of these things from being made. But lots of what gets talked about here is possible today, it's just that no one's doing it, because the software development practices that have captured the attention of e.g. GitHub and Programmer Twitter value things that go against the grain of these desires. This is especially obvious in the parts that mention dealing with files. You could write your Web app to do that. So go do it! Even where problems exist, like with mobile OSes (esp. iOS), there're things like remoteStorage. Think remoteStorage sucks? Fine! Go embrace and extend it and make it work. It's not actually a technical problem at this point.

    2. @18:52:

      I wanna also dig a little more into the kind of... dynamism, ease-of-making-changes thing, because I think there's actually two ways to look at the ease of making changes when you solve a problem with software. One way is to make the software sufficiently sophisticated so that you can swap any arbitrary part out and you can keep making changes. The other is to make the software so simple that it's easy to rewrite and you can just rewrite it when the constraints change.

    1. companion to the skills programme’s labs

      Maybe, if it's already known, you can call it the IMXXX Skills programme, just so that students know exactly which session you are talking about. They usually get very confused in the first week about what a module is, what skills are, etc.

    1. here exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

      It's important to first understand how something was done and why before removing or shifting away from it; if the point isn't understood, the rework or reconstruction is likely to be just as bad - if not worse - because it will surely have an overlapping set of flawed and exploitable issues.

    1. In general, the greater the productiveness of labour, the less is the labour time required for the production of an article, the less is the amount of labour crystallised in that article, and the less is its value; and vice versâ, the less the productiveness of labour, the greater is the labour time required for the production of an article, and the greater is its value

      A shirt might be more essential than an AirPod Pro, which is just a luxury item, but a shirt requires less time to produce than an AirPod, therefore AirPods are generally almost 5 times more expensive. It's funny how it's labour time that determines the price and not supply and demand.

    1. what happens um when we're thinking about our inner states one of the things that we need to recognize is that our introspection when 00:22:54 we we become aware of our beliefs our desires and our hopes and our fears and so forth is all done through language and on the model of language when i decide that i believe that john dunn 00:23:07 gave a great talk this morning when i believe that hal roth is a great scholar of zen and when i believe that alan wallace gave us a beautifully inspirational talk about the role of practice and contemplation in the 00:23:19 understanding of the self and i introspect that way i'm using those sentences alan gave that great talk john gave us a great talk about pramana and so forth as models for my inner states and i'm not 00:23:32 doing that because i looked inside and saw little english sentences in my brain i'm looking i'm doing that by using language as a kind of introspective model that's a matter of self-interpretation 00:23:44 it's easy to forget that because it feels so immediate so language gives us the concepts that we use to think about the world but it is also the model for the concepts of our propositional attitudes like belief 00:23:58 desire knowledge and so forth and as a model we have to recognize that the model the map isn't the reality to go back to what john uh reminded us of he reminded us of earlier introspection in 00:24:11 terms of language gives us an interpretation it doesn't give us an independent reality that is being interpreted and when we think about the madhyamaka 00:24:23 of nagarjuna and chandrakiri we remember that to be empty is to be empty of any intrinsic nature and if we follow chandra charity as i suggested earlier that means that it is to exist only 00:24:37 dependent on conceptual imputation and what i am suggesting now is that all of our inner cognitive states that we introspect we encounter only through a conceptual imputation only through 00:24:50 interpretation only through language and that is they exist conventionally not intrinsically even though they might appear to us to exist just as we see 00:25:02 them and to do so intrinsically

      Another key point:

      Language is the tool we use for introspection and, as Nagarjuna and Chandrakirti hold, it is empty of intrinsic nature. All inner cognitive states that we introspect are attained only through linguistic conceptual imputation, so they can only exist conventionally and not intrinsically.

      This underscores the importance of the symbolosphere, of symbols and language.

    2. another way to put this is um that if you think about using a telescope an example that alan offered us a little while ago to examine celestial objects as indeed did 00:20:37 galileo you can only interpret the output of that telescope the things you see in the telescope correctly if we actually know how it works that's really obviously true about 00:20:48 things like radio telescopes and infrared telescopes but it's true of optical telescopes as well as paul fire opened emphasized if you don't have a theory of optics then when you aim your telescope at jupiter and look at the 00:21:00 moons all you see are bits of light on a piece of glass you need to believe to know how the telescope works in order to understand those as moons orbiting a planet 00:21:12 so to put it crudely if we don't know how the instrument that we're using uh if we don't know how the instrument that we're using to mediate our access to the world works if we don't understand it we don't know whether we're looking through 00:21:24 a great telescope or a kaleidoscope and we don't know whether we're using a pre a properly constructed radio telescope or just playing a fantasy video game

      A good example of how astronomers must know the physical characteristics of the instrument they use to see the heavens (a telescope) before anything they observe can be useful. The same is true when peering into a microscope.

      The instrument of our body's faculties is just as important to understand if we are to understand the signals we experience.

    3. the illusion that pervades our sense perception is that what we experience is something external to us that somehow 00:20:10 we've got a world that exists as it is independent of us and that we simply happen to be perfect world detectors and we wander through it detecting things just as they are

      This is a key statement of our illusion. We sense that what we experience is the way the world actually is, not seeing that our bodies play a huge role in what we observe. We don't know what it's like to be a bat!

    4. these four philosophers in very different ways but 00:18:14 in ways that intersect in very intriguing ways um emphasized that with knowledge sensory knowledge perceptual knowledge isn't the direct access um to sensory properties or descent or 00:18:28 objects around us but rather is always the result of a complex cognitive and perceptual process mediated by our sense organs our sense faculties and our cognitive processes 00:18:41 and just another way to put that this is a way that paul churchill puts this but also um paul fire robin also sellers is that the objects we experience the 00:18:52 world that we inhabit the world in which we find ourselves embedded is not a world that we simply find it's a world that we construct and whose constituents we construct in our central nervous 00:19:06 systems in response to sensory stimulation transduced by neural impulses that is somehow our bodies interact with other bodies in the world um our and through various kinds of 00:19:19 cognitive processes

      These four philosophers articulate that we don't simply sense a world out there. Rather, our sensory and cognitive faculties CONSTRUCT what appears to us.

    5. he distinguishes three dimensions of dependent origination and this is in his commentary on the guardian of malama jamaica carica called clear words he talks about causal dependence that is every phenomenon depends upon causes and 00:16:19 conditions and gives rise to further causes and conditions um myriological dependence that is every phenomenon every composite phenomenon depends upon the parts that uh that it 00:16:31 comprises and every phenomenon is also dependent upon the holes or the systems in which it figures parts depend on holes holes depend on parts and that reciprocal meteorological dependence 00:16:44 characterizes all of reality and third often overlooked but most important is dependence on conceptual imputation that is things depend in order to be represented as the kinds of 00:16:57 things they are on our conceptual resources our affective resources and as john dunn emphasized our purposes in life this third one really means this um 00:17:09 everything that shows up for us in the world the way we carve the world up the way we um the way we experience the world is dependent not just on how the world is but on the conceptual resources 00:17:22 as well as the perceptual resources through which we understand the world and it's worth recognizing that um when we think about this there are a bunch of um contemporary majamakers majamikas we 00:17:34 might point to as well and so paul fireauben who's up there on on the left well really an austrian but he spent much of his life in america um willard van norman kwine um up on the right wilford sellers and paul churchland

      This is a key statement: how we experience the world depends on the perceptual and cognitive lenses through which we filter it.

      Francis Heylighen proposes a nondual system based on causal dependency relationships to serve as the foundation for distributed cognition (collective intelligence).

      https://hyp.is/go?url=https%3A%2F%2Fbafybeicho2xrqouoq4cvqev3l2p44rapi6vtmngfdt42emek5lyygbp3sy.ipfs.dweb.link%2FNon-dualism%2520-%2520Mind%2520outside%2520Brain%2520%2520a%2520radically%2520non-dualist%2520foundation%2520for%2520distributed%2520cognition.pdf&group=world

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      1. General Statements

      We would like to thank the reviewers for their insightful and useful comments about our manuscript. Based on these comments and as outlined in our revision plan, we plan to strengthen our findings by performing new experiments and quantitative analyses. This particularly applies to our nanoscale (dSTORM) imaging dataset which was discussed by multiple reviewers.

      We also appreciate the reviewers’ overall positive evaluation of the significance of our labeling method for the axon initial segment studies. With regards to this, we would like to highlight that this manuscript particularly addresses the labeling of “difficult-to-label” neuronal proteins, such as large ion channels and transmembrane proteins. Although we and another group have recently reported click labeling of neurofilament light chain (PMID: 35031604) and AMPAR regulatory proteins (PMID: 34795271) in primary neurons, both of these proteins have a small size between ~30-68 kDa and compared to larger ion channels/transmembrane proteins are “easier” to express in primary neurons. The novelty in the current manuscript is that we successfully applied this method for the labeling of large and spatially restricted AIS components, such as NF186 and Nav1.6 (186 and 260 kDa, respectively). As some of the reviewers also pointed out, the size and complexity of these proteins makes labeling of the AIS rather challenging. We also used our approach to study the localization of epilepsy-causing Nav1.6 variants and could exclude the retention in the cytoplasm as a possible cause of their loss of function. Finally, we improved the efficiency of genetic code expansion in primary neurons by developing AAV-based viral vectors. Although AAVs are routinely used for gene delivery to neurons, AAVs for click-based labeling need to encode multiple components of the orthogonal translational machinery for genetic code expansion. By trying different promoters and gene combinations, we developed several variants that enable high efficiency of the genetic code expansion in neurons. On their own, these findings will facilitate further genetic code expansion and click chemistry studies, beyond the labeling of the axon initial segment.

      2. Description of the planned revisions

      Reviewer #2

      • On lines 107 and 108, the sentence "The C-terminal HA-tag allowed us to detect the full-length NF-186 protein by immunostaining it with an anti-HA antibody" would have a better place just after lines 104-105 " [...] we modified the previously described plasmid (Zhang et al., 1998) by moving the hemagglutinin (HA) tag from the N terminus to the C terminus".

      We will modify the text as the reviewer suggested.

      • Fig.2b: the AnkG staining looks substantially longer than that shown in c. However, the results on AIS length show no significant changes between the groups. This is visually misleading; the authors should choose a picture for the WT construct that is representative of the data.

      We thank the reviewer for bringing this up. We will replace the panel in Fig.2b with a more representative image of NF186 WT construct in the revised version of the manuscript.

      • Line 238: what is the rationale behind choosing these cells? For example, have they been used in other studies for similar purposes? If so, please provide the reference.

      We initially probed neuroblastoma ND7/23 cells, which are commonly used for electrophysiological recordings of recombinant Nav1.6 (PMID: 30615093, 22623668, 25874799, 27375106). Although we were able to record Na+ currents in those cells, only a small portion of the channels was detected on the cell surface by microscopy (Suppl. Fig. 5a). As we discuss in the manuscript (lines 237-240), we then switched to N1E-115-1 cells, in which we obtained a higher level of expression of the recombinant NaV1.6 channels on the cell surface (Suppl. Fig. 5b). These cells have also previously been used for electrophysiological studies of voltage-gated sodium channels, including Nav1.6 (PMID: 8822380, 24077057). We will modify the text and include these references in the revised manuscript.

      • Figure 3c, the authors omitted the comparison with the WT construct this time, as opposed to the neurofascin experiments. What is the reason?

      As shown by others (PMID: 31900387) and by us in this manuscript, one of the main issues with the expression of recombinant NF186 in neurons was that overexpression led to mislocalization of NF186 in the neuronal soma and processes. This was particularly true for the WT construct and certain amber mutants (e.g. K809TAG). Based on previous reports (PMID: 31900387), we then tested a weak human neuron-specific enolase promoter. This reduced the expression level and improved the localization of NF186. However, since we still observed some neurons with mislocalized NF186 WT even with the enolase promoter, we found it important to quantitatively compare the AIS length of the WT construct and amber mutants to surrounding untransfected cells. On the other hand, since we did not have overexpression and mislocalization problems with the Nav1.6 WT construct (all observed neurons had signal localizing to the AIS), we measured only the AIS length of the amber mutants. However, to avoid any confusion, we will also measure the AIS size of neurons expressing the Nav1.6 WT construct and compare it to surrounding cells and amber mutants. For this, we will need to perform new experiments and acquire new images. We will include the data in the revised manuscript.

      • Fig. 4: why did the authors choose these cells for electrophysiology experiments and not neurons? Explain the rationale in the text or, alternatively, cite similar studies using the same tool.

      Due to the branched neuronal processes, which cause the space clamp problem in voltage clamp experiments with neurons, round and non-branching cells are frequently used to examine the biophysical properties of ion channels, including Nav1.6. By far, most studies investigating the biophysical properties of NaV1.6 channels were performed in neuroblastoma cells, e.g. ND7/23 and N1E-115-1 cells (PMID: 25874799; 25242737). We tested these two cell types and found that N1E-115-1 cells supported a higher expression level of the recombinant NaV1.6 channels on the cell surface than ND7/23 cells (Suppl. Fig. 5). Hence, N1E-115-1 cells were more suitable for obtaining robust and reliable recordings (as we also discuss above in the response to the reviewer's comment). We will clarify this in the revised manuscript.

      • Fig.4, biophysical properties: did the authors find differences in passive properties? Measures of resting potential, membrane resistance and cell capacitance should be reported.

      Passive properties such as resting membrane potential and membrane resistance are important functional features of neurons measured in current clamp experiments, but they are not applicable to the ND7/23 and N1E-115-1 cells used in our voltage clamp experiments. To measure the Na+ current mediated by WT or mutant NaV1.6 channels expressed in N1E-115-1 cells, the endogenous Na+ channels were blocked by tetrodotoxin and the endogenous K+ channels were blocked by tetraethylammonium chloride, CsCl and CsF in the extracellular and intracellular solutions. Under these conditions, resting potential and membrane resistance are not relevant to the experiments. Cell capacitance reflects the size of the cell surface area, which can affect the number of channels expressed on the cell surface. To eliminate the effect of different cell sizes, Na+ current densities normalized by cell capacitance were used in our experiments. We will report these values in the revised manuscript.

      • Fig 4, STORM images. The periodic distribution of the dots should be enhanced with some sort of arrows or lines, for the non-specialist audience.

      Based on the comments from multiple reviewers, we plan to obtain additional dSTORM images of the neurons expressing recombinant Nav1.6 WT or amber mutants. We also intend to improve the visualization of these results by updating/modifying existing figures and including quantitative data.

      • Line 374: rat or mouse primary neurons?

      We are referring to both rat and mouse neurons. The images shown in Fig. 6 and Suppl. Fig. 8 were obtained from rat cortical neurons expressing Nav1.6 or a fluorescent reporter. However, we were also able to successfully transduce mouse neurons with AAV92A carrying the orthogonal translational machinery (data not shown). We will clarify this in the revised manuscript.

      **Referees cross-commenting**

      I fully agree with the following remarks from Reviewers #3, #4 and #5. This is a point that I have raised in my report too. The authors need better images to show the periodicity via visualization, and a quantification would be of great benefit to support the claim with numbers (and how these compare to similar studies in the literature):

      R3: 2. For the dSTORM analysis of the tagged Nav1.6 protein, I also cannot tell there is periodic organization from the image directly. Some analysis is needed there.

      R4: 2. "As there was no obvious difference in the nanoscale organization of the NaV1.6WT-HA or NaV1.6TAG-HA channels (Fig. 4 e-g), these experiments confirmed that the NaV1.6 overexpression, TCO*A-Lys incorporation, and click labeling did not affect the nanoscale periodic organization of the sodium channels in the AIS." It is clearly noticeable that for WT, the spot density is higher compared to the other two mutants. Why is that so? Using cluster analysis, one can quantify spot density and discuss nanoscale organization quantitatively. The authors should quantify the periodicity and compare it among different variants and with previous reports.

      R5: 3. The authors claim that there was no obvious difference in the nanoscale organization of the NaV1.6WT-HA or NaV1.6TAG-HA channels (Fig. 4 e-g), but it is hard to conclude this without any quantification and statistical analysis. Sodium channels have been shown to be associated with the membrane-associated periodic skeleton structures in neurons, and average autocorrelation analysis has been developed to quantify the degree of periodicity of such structural organizations (Han et al., PNAS 114(32): E6678-E6685, 2017). The authors should use this approach to quantify and compare the average autocorrelation amplitudes.

      As we outlined in our responses to the individual reviewers’ comments below, we will address these questions by performing new experiments and quantifications.

      I also agree with these comments from Reviewers #3 and #5:

      R3: 4. It is unclear, for all the presented data, whether all the cells are collected from a single biological replicate or from multiple replicates. At least 2-3 replicates are needed to see the reproducibility in terms of labeling efficiency, and other related conclusions.

      R5: 1. The authors should indicate how many replicates were performed and how many cells were analyzed for each experiment.

      We thank the reviewers for bringing this up. By mistake, we omitted this important information. We will include it in the revised manuscript, but we would like to highlight here that each experiment was repeated at least 3 times.

      Reviewer #3

1. There is some patch-like background in the 488 channel from the click reaction, some of which has as strong a signal as the staining on the neurons. What is the potential cause for this? With immunostaining on HA, the background doesn't affect the interpretation of the image data too much. However, the major goal of this method development is to use it in live cells without immunostaining. Without another reference, the high background might cause issues in data interpretation. Can the authors also suggest ways to avoid or lower this in the discussion?

We thank the reviewer for bringing this up. We have occasionally observed patch-like background in what appears to be cell debris. Such dead cells do not have an intact cell membrane and can therefore absorb the cell-impermeable ATTO488-tetrazine dye during click labeling. This kind of background is also present in control neurons transfected with the WT Nav1.6, which suggests that it originates from UAA and tetrazine-dye accumulations. Additionally, since these patches are not visible with immunostaining, they do not contain our protein of interest, which further confirms that they contain only dye and UAA accumulations. Depending on the quality of the neuronal preparation before and after transfection, these patches are more or less obvious. However, despite the background, we did not have problems identifying the AIS during live-cell imaging. Especially when overall neuronal health after transfection is optimal, the AIS can easily be distinguished from patches positioned outside of labeled neurons. We will investigate this further and discuss it in the revised manuscript.

2. For the dSTORM analysis of the tagged Nav1.6 protein, I also cannot tell there is periodic organization from the image directly. Some analysis is needed there.

      We will address this in the revised manuscript by performing additional experiments and quantifications. We also wrote a detailed answer below, in the response to the other reviewers.

3. The authors use the AIS length as a parameter to evaluate the function of the clickable mutant of NF186, and patch clamp for functional validation of the clickable mutant of Nav1.6. In both cases, the comparison is done between the mutant and the WT construct, but in both cases in transfected cells with exogenous expression. It is also worth comparing with untransfected cells as the true native situation.

We agree with the reviewer that it is important to compare transfected cells with untransfected cells. As the reviewer points out, we have already performed some of these comparisons. For NF186, we used AIS length as a parameter to estimate whether expression of the clickable mutant affected the AIS structure. As we show in Fig. 02, we co-immunostained neurons transfected with the NF186-HA WT or TAG constructs. We used an HA antibody to detect neurons expressing NF186, while ankG was used as a marker of AIS length. To check whether the AIS length of transfected cells is affected, we compared the length in transfected cells (expressing NF186, HA+) to surrounding untransfected cells (HA-). For Nav1.6, we likewise compared the AIS length of cells expressing Nav1.6 (HA+) to surrounding untransfected cells (HA-). As in the experiments with NF186, this allowed us to check whether expression of the recombinant Nav1.6 affects the AIS structure. What is missing is the comparison with fully untransfected conditions (i.e. neurons that are simply stained with ankG); we assume this is what the reviewer is referring to. We will also include these data in the revised manuscript. Furthermore, since we introduced a labeling modification in NaV1.6, we wanted to check whether such a modification would affect its function. To do so, as routinely done in the field (PMID: 25874799), we rendered the WT and TAG channels TTX-resistant and recorded only recombinant Na+ currents in neuroblastoma cells in the presence of TTX. Perhaps we misunderstand the reviewer's comment, but in this regard measurements of untransfected cells are not relevant, since they would not allow us to compare the WT and TAG mutants.

4. It is unclear, for all the presented data, whether all the cells are collected from a single biological replicate or from multiple replicates. At least 2-3 replicates are needed to see the reproducibility in terms of labeling efficiency, and other related conclusions.

We thank the reviewer for this observation. By mistake, we omitted this important information. We will include it in the revised version of the manuscript. We would like to highlight here that each experiment was repeated at least 3 times.

      Reviewer #4

1. "Confocal microscopy revealed that the hNSE promoter lowered the WT and clickable NF186-HA expression levels and consequently improved the localization of these proteins." Is the lower expression level a measure of localization improvement? How does the author conclude that the localization has improved?

A previous report (PMID: 31900387) suggested that overexpression of recombinant WT NF186 can affect its trafficking, leading to NF186 mislocalization. We observed the same in our experiments with CMV-driven NF186 (in particular for NF186 WT). Hence, based on PMID: 31900387, we tested the weak neuron-specific enolase promoter. Since the WT construct was the most problematic in terms of ectopic expression, we checked whether AIS localization was improved with the enolase promoter for this construct. To this aim, we counted the number of neurons with mislocalized signal, or with signal in the AIS, for both the CMV and the enolase promoter. The number of neurons with mislocalized signal was lower with the enolase promoter. Since there were more neurons with AIS-specific signal when NF186 was expressed from the enolase promoter compared to CMV, we concluded that the enolase promoter lowered expression and improved localization of NF186. Therefore, we used the enolase promoter for click labeling of the NF186 amber mutants. We will include the results of this analysis in the revised version of the manuscript.

2. "As there was no obvious difference in the nanoscale organization of the NaV1.6WT-HA or NaV1.6TAG-HA channels (Fig. 4e-g), these experiments confirmed that the NaV1.6 overexpression, TCO*A-Lys incorporation, and click labeling did not affect the nanoscale periodic organization of the sodium channels in the AIS." It is clearly noticeable that for WT, the spot density is higher than for the other two mutants. Why is that so? Using cluster analysis, one can quantify spot density and discuss nanoscale organization quantitatively. The authors should quantify the periodicity and compare it among the different variants and with previous reports.

We thank the reviewer for these suggestions. We will address these remarks by performing additional experiments and quantifications. The difference in the expression level of the recombinant Nav1.6 might explain the differences in spot density for the WT vs. TAG clickable mutants. However, as the reviewer suggested, quantitative analysis is needed to address these concerns. We also intend to quantify the periodicity and compare it among the different variants and with previous reports. It is just important to note that in the current version of the manuscript we looked at the nanoscale organization of a subset of Nav1.6 channels, because we used an anti-HA antibody that detects only the recombinant protein incorporated into the AIS and not the endogenous Nav1.6.

      Minor comments

1. "Although NF186K809TAG-HA (Supplementary Fig. 4) showed bright click labeling, we excluded it from the analysis due to its frequent ectopic expression along the distal axon." How frequently is this bright click labeling observed for this mutation? Is it not observed for other mutations at all? The authors should state this point clearly with some statistics.

We are not sure what the exact question from the reviewer is. If we understand it correctly, the reviewer is asking us to quantify how frequent the ectopic expression of this amber mutant was compared to the other mutants, and not the click labeling (as written in their original comment), since click labeling was observed for all the mutants independently of their ectopic expression?

2. "Immunostaining with anti-HA antibody revealed that the expression of NaV1.6WT-HA on the membrane of the N1E-115-1 cells was higher than on the ND7/23 cells (Supplementary Fig. 5a-c). However, click labeling of both NaV1.6K1425TAG-HA and NaV1.6K1546TAG-HA with ATTO488-tz was not successful (Supplementary Fig. 5d), indicating insufficient expression of the clickable constructs." Is this due to insufficient expression level or accessibility? The authors should make this statement clear.

We thank the reviewer for bringing this up. We will clarify this in the revised version of the manuscript. We believe that click labeling of the K1546TAG mutant in N1E-115-1 cells is absent due to insufficient expression of the channels on the membrane, since this mutant was successfully labeled in primary neurons, which represent a more native environment and where Nav1.6 forms high-density clusters. The K1425TAG mutant is likewise not labeled in N1E-115-1 cells due to insufficient expression on the membrane. However, since this mutant is also poorly labeled in primary neurons, we can speculate that the K1425TAG position might be less accessible for the tetrazine-dye than K1546TAG. To further support our claim that click labeling is low or absent in neuronal cells due to insufficient expression, we can use NF186 as an additional example. When NF186 was expressed from the strong CMV promoter, we observed click labeling for all the mutants in ND7/23 cells (Suppl. Fig. 01). However, when CMV was replaced with the neuron-specific enolase promoter, the expression of NF186 was substantially lower in ND7/23 cells and click labeling was absent (data not shown). We will clarify this in the revised manuscript.

3. Authors should clearly state the drift correction procedure of 3D STORM data. What are the localization precision and photon count for 3D STORM experiments?

      We processed 3D dSTORM data in NIS-elements AR software. We used the automatic drift correction from the NIS-elements software that is based on the autocorrelation. We will provide further and updated information in the revised manuscript, including the localization precision and photon count for the new dSTORM images.
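For reference, the core principle behind correlation-based drift correction can be sketched in a few lines. This is an illustration of the general idea only, not the proprietary NIS-elements implementation; the function name and the synthetic images are our own assumptions. The shift between reconstructions of successive time bins is estimated from the peak of their FFT-based cross-correlation:

```python
import numpy as np

def estimate_drift(img_a, img_b):
    """Estimate the integer-pixel (dy, dx) shift of img_a relative to img_b
    from the peak of their circular cross-correlation (computed via FFT)."""
    f = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    corr = np.fft.ifft2(f).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the upper half of each axis around to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Synthetic check: displace an image by (3, -2) pixels and recover the shift
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -2), axis=(0, 1))
print(estimate_drift(shifted, img))  # (3, -2)
```

In an actual dSTORM pipeline the same estimate would be computed per time bin (typically with sub-pixel refinement around the peak) and subtracted from the localization coordinates.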

4. "Click labeling of NaV1.6 channels in living primary neurons" What kind of primary neurons have been used for click labeling of NaV1.6 channels? Is there any specific reason why the authors have chosen cortical neurons for labeling NF186? Does this labeling strategy depend on the primary neuron type?

For the establishment and click labeling of Nav1.6 we used primary rat cortical neurons (Fig. 03, Fig. 06). The same neuronal type was used for click labeling of NF186 (Fig. 02). We established labeling of the AIS components in cortical neurons because we use those routinely in the laboratory. However, this labeling strategy does not depend on the neuronal type. As we show in Fig. 05, to study the localization of the loss-of-function pathogenic Nav1.6 variants we used mouse hippocampal neurons, because in a previous study the same neuronal type was used to characterize these two mutations (lines 361-362). This demonstrates nicely that the method can be easily transferred to any neuronal type. Furthermore, we were also able to label Nav1.6 and NF186 in mouse cortical neurons (data not shown in the manuscript). We will clarify this in the revised manuscript.

      Reviewer #5

1. Throughout the manuscript, only one representative image containing one AIS is shown for each condition, without statistics and quantifications, so the conclusions are not sufficiently convincing. For example, in Fig. 1b, c, e; Fig. 2b, c, d, e; Fig. 3b, c, d, e; Fig. 5c; and Supplementary Figs. 1-6, the authors should quantify the average fluorescence intensities both for HA immunostaining and ATTO488-tz labeling in the different conditions, as well as the labeling ratios (fluorescence intensity ratios between ATTO488 and AF647/AF555). Without statistics and quantifications, it is unclear whether there is any significant difference between the constructs with different TAG positions, or between different transfection methods (e.g., Lipofectamine 2000 vs. 3000).

We agree with the reviewer that quantitative analysis is important, and we will provide more quantitative data in the revised manuscript. At the same time, we are a bit confused by this comment, which seems to refer to missing quantifications in one of the schemes (Fig. 1) and overlooks existing quantifications (e.g. the quantitative analysis of the data set from Fig. 5c is shown in Fig. 5d). However, as suggested by the reviewer and to strengthen our data, in addition to the quantifications already provided in the manuscript (e.g. Fig. 2d: AIS length of NF186 TAG constructs; Fig. 3f: AIS length of Nav1.6 TAG constructs; Fig. 5d: click-labeling intensity of LOF mutants), we intend to quantify the differences between the labeling ratios of the different mutants and transfection methods. When it comes to the different transfection methods, some data are already provided in the manuscript (e.g. we counted the number of transfected versus transduced neurons), but we intend to clarify and expand on this in the revised manuscript.
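As an illustration of the labeling-ratio quantification the reviewer asks for, a minimal sketch follows. The function name, the ROI handling, and the synthetic two-channel images are hypothetical assumptions, not part of the manuscript's pipeline; a real analysis would segment the AIS, e.g. from the ankyrin-G channel:

```python
import numpy as np

def labeling_ratio(click_img, ha_img, roi_mask, background=0.0):
    """Mean ATTO488 (click) to AF647 (anti-HA) intensity ratio inside an
    AIS ROI, after subtracting a common background estimate."""
    click = click_img[roi_mask].mean() - background
    ha = ha_img[roi_mask].mean() - background
    return click / ha

# Synthetic example: click channel half as bright as the HA channel in the ROI
roi = np.zeros((32, 32), dtype=bool)
roi[10:20, 10:20] = True
ha = np.where(roi, 200.0, 0.0)
click = np.where(roi, 100.0, 0.0)
print(labeling_ratio(click, ha, roi))  # 0.5
```

Comparing this ratio across constructs (rather than raw intensities) normalizes away differences in expression level between cells.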

2. The only quantification done was for the average AIS length, but statistical tests should be performed between the different conditions and the corresponding P values should be provided. It seems that the transfected neurons generally have a longer AIS than the untransfected neurons (Fig. 2d and 3f). Could the authors provide an explanation for this?

We are a bit confused by the first part of this comment. We measured the AIS lengths of NF186 WT and NF186 TAG, as well as of the Nav1.6 TAG constructs, and compared them to the AIS lengths of surrounding untransfected cells (Fig. 2d and Fig. 3f). In addition, we compared the AIS lengths of the NF186 WT and TAG constructs to each other, and of the Nav1.6 TAG constructs to each other. To analyze the differences, we performed statistical tests and provided the corresponding p values in the figure legends (Fig. 02 and 03). Further details on the statistical analysis are provided in the supplementary tables (Suppl. tables 01 and 02). Regarding the second question, we have also noticed that the AIS of transfected neurons appears longer than that of untransfected cells. This seems to be more pronounced for NF186, which is expressed at a higher level than Nav1.6. The appearance of a slightly longer AIS is most likely a consequence of the recombinant constructs being overexpressed in neurons that already express endogenous NF186 and Nav1.6. However, this difference in AIS length is not significant compared to the controls. We will discuss this further in the revised manuscript.

3. The authors claim that there was no obvious difference in the nanoscale organization of the NaV1.6WT-HA or NaV1.6TAG-HA channels (Fig. 4e-g), but it is hard to conclude this without any quantification and statistical analysis. Sodium channels have been shown to be associated with the membrane-associated periodic skeleton structures in neurons, and average autocorrelation analysis has been developed to quantify the degree of periodicity of such structural organizations (Han et al., PNAS 114(32):E6678-E6685, 2017). The authors should use this approach to quantify and compare the average autocorrelation amplitudes.

      We are thankful to the reviewer for suggestions on how to quantify the periodicity of recombinant sodium channels and how to more accurately compare WT and TAG mutants at the nanoscale level. We will perform additional experiments and analysis in order to address the concerns of this and other reviewers.
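A minimal sketch of the autocorrelation analysis in the spirit of Han et al. follows. Everything here is illustrative (function names, bin size, and the synthetic profiles are our assumptions): the degree of periodicity of a 1D intensity profile along the AIS is read off as the normalized autocorrelation amplitude at the expected ~190 nm spacing:

```python
import numpy as np

def autocorrelation(profile):
    """Normalized autocorrelation of a mean-subtracted 1D intensity profile."""
    p = profile - profile.mean()
    ac = np.correlate(p, p, mode="full")[len(p) - 1:]  # lags 0..N-1
    return ac / ac[0]

def periodicity_amplitude(profile, period_bins):
    """Autocorrelation amplitude at the expected period (e.g. ~190 nm
    expressed in histogram bins); larger values mean stronger periodicity."""
    return autocorrelation(profile)[period_bins]

# Synthetic example at 10 nm/bin: a 190-nm-periodic profile vs. flat noise
x = np.arange(400)
periodic = 1.0 + np.sin(2 * np.pi * x / 19)       # period = 19 bins = 190 nm
rng = np.random.default_rng(0)
noisy_flat = 1.0 + 0.3 * rng.standard_normal(400)

print(periodicity_amplitude(periodic, 19) > periodicity_amplitude(noisy_flat, 19))  # True
```

Averaging this amplitude over many AIS segments per condition would give the per-variant statistic (WT vs. TAG mutants, and click-labeled vs. immunostained) that the reviewers request.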

4. The authors should also obtain dSTORM images for the click-labeled neurons to demonstrate whether the click labeling method provides sufficient labeling efficiency for dSTORM, compared to immunostaining (HA and ankyrin G immunostaining).

      We would like to thank the reviewer for this suggestion. We have already shown in our previous work that STED can be performed with click labeled neurons (PMID: 35031604). When it comes to this manuscript and AIS labeling, we have already obtained preliminary dSTORM images of click-labeled NF186. Since the expression of Nav1.6 is lower compared to NF186, the labeling is also less bright and dSTORM is a bit more challenging. To try to overcome this issue, in addition to dSTORM of click-labeled Nav1.6, we are planning to try click-PAINT (PMID: 27804198). Click-PAINT has been used for super-resolution imaging of less abundant targets in cells and could possibly allow super-resolution imaging of Nav1.6. We will report on these new experiments in the revised version of the manuscript.

5. It seems that the click labeling has an off-target/background labeling in the soma of the neuron (see Fig. 3c, d). Could the authors quantify and determine the sources of such off-target labeling?

      We thank the reviewer for pointing this out. We will clarify this in the revised manuscript, but by looking at the other examples from our dataset it appears to us that this background is present in WT constructs as well. In the current version of the manuscript, this is not clear since the WT image that is shown in the Fig. 03b is a single plane confocal image. Therefore, we will replace it in the revised manuscript with a z-stack in which the presence of the background is more obvious (due to the maximum intensity projection). In addition, we will conduct additional control experiments to clarify this.

      Minor comments:

      1. The authors should indicate how many replicates were performed and how many cells were analyzed for each experiment.

      We thank the reviewer for bringing this up. By mistake, we omitted this important information. We will include this information in the revised manuscript, but we would like to highlight here that each experiment was repeated at least 3 times.

      1. The display range (i.e., intensity scale bar) was indicated only for a small portion of the fluorescence images. It is better to be consistent and show the display range for all images presented.

      We will include intensity scale bars in all the images in the revised version of the manuscript.

      3. Description of the revisions that have already been incorporated in the transferred manuscript

      Not applicable.

      4. Description of analyses that authors prefer not to carry out

Reviewer #3, comment #5. One application presented in this manuscript is to evaluate the effect of epilepsy-causing mutations of Nav1.6. By comparing the intensity of ATTO488, the result suggests that there is no significant impact of these mutations on membrane trafficking. I am wondering if the authors should study the membrane trafficking by also looking at the diffusion in live cells with the labeling method. The comparison of intensity alone can be achieved by just immunostaining; it doesn't really demonstrate the benefit of live-cell labeling and imaging with the presented method.

Generally speaking, one of the advantages of click labeling is its compatibility with live-cell labeling. As the reviewer also points out, this is especially useful for live-cell imaging, but is not limited to it. In addition, click labeling allows selective labeling of the membrane population of Nav1.6 in living neurons. We took advantage of this and used cell-impermeable dyes to label unnatural amino acids incorporated into the extracellular part of Nav1.6 (Scheme 03a). In contrast, the HA tag that allows immunodetection of recombinant Nav1.6 is added to the intracellular C terminus; hence, anti-HA immunostaining detects the total (intra- and extracellular) population of epilepsy-causing Nav1.6 channels. That is why, in this case, live-cell click labeling was advantageous compared to conventional immunostaining. We will clarify this in the revised manuscript. In addition, we would like to note that when we started the experiments with the epilepsy-causing mutations, we wanted to a) check whether they are present on the membrane and b) depending on the outcome of those experiments, follow the trafficking of these LOF Nav1.6 mutants. Since patch-clamp recordings of pathogenic Nav1.6 showed a loss of Na+ currents, we at first assumed that they are not properly expressed on the membrane. However, our click labeling showed that the pathogenic channels were detected at the AIS membrane despite the loss of Na+ currents. This was somewhat surprising to us and we would love to investigate it further. We also appreciate the reviewer's suggestion in this regard, and we hope to be able to use all the advantages of our labeling approach in follow-up studies. However, keeping in mind time and resource limitations, a live-cell trafficking study might be beyond the scope of this revision.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

The manuscript by Stajkovic et al. describes the step-wise generation and validation of fluorescent labeling of NF186 and Nav1.6 in primary neurons by non-natural amino acids and click chemistry. For each protein of interest, the authors started by generating constructs carrying an amber codon at different positions, and then selected the best construct(s) by judging (1) the labeling efficiency, (2) whether the particular labeling position affects the function of the protein, and (3) whether the labeled protein shows any mislocalization. During the troubleshooting process, the authors also introduced adeno-associated viral (AAV) vectors for more efficient delivery of constructs into the cells. The method described in the manuscript could become a reference for researchers who aim to label similar neuronal proteins.

      Specific comments:

1. There is some patch-like background in the 488 channel from the click reaction, some of which has as strong a signal as the staining on the neurons. What is the potential cause for this? With immunostaining on HA, the background doesn't affect the interpretation of the image data too much. However, the major goal of this method development is to use it in live cells without immunostaining. Without another reference, the high background might cause issues in data interpretation. Can the authors also suggest ways to avoid or lower this in the discussion?
      2. For the dSTORM analysis of the tagged Nav1.6 protein, I also cannot tell there is periodic organization from the image directly. Some analysis is needed there.
3. The authors use the AIS length as a parameter to evaluate the function of the clickable mutant of NF186, and patch clamp for functional validation of the clickable mutant of Nav1.6. In both cases, the comparison is done between the mutant and the WT construct, but in both cases in transfected cells with exogenous expression. It is also worth comparing with untransfected cells as the true native situation.
      4. It is unclear, for all the presented data, whether all the cells are collected from a single biological replicate or from multiple replicates. At least 2-3 replicates are needed to see the reproducibility in terms of labeling efficiency, and other related conclusions.
5. One application presented in this manuscript is to evaluate the effect of epilepsy-causing mutations of Nav1.6. By comparing the intensity of ATTO488, the result suggests that there is no significant impact of these mutations on membrane trafficking. I am wondering if the authors should study the membrane trafficking by also looking at the diffusion in live cells with the labeling method. The comparison of intensity alone can be achieved by just immunostaining; it doesn't really demonstrate the benefit of live-cell labeling and imaging with the presented method.

      Significance

The data itself is mostly convincing; however, I do not see much novelty in this manuscript. Both the labeling method using non-natural amino acids and click chemistry and the AAV delivery are established. However, I can see that for research groups specifically interested in studying these two proteins, or closely related proteins, the results from this manuscript could be of direct help.

Some people will read Nagarjuna as allowing for the existence of true contradictions — that something can be both true and false at the same time — and Graham Priest is a philosopher who has a reading of Nagarjuna under his dialetheist logic, which allows certain contradictions to be true. I don't think that actually works in the case of Nagarjuna: I think Nagarjuna seems to presume the principle of non-contradiction in order to run these kinds of reductio ad absurdum arguments, by drawing out contradictions and incoherencies within a given concept under analysis and then showing how it leads to contradiction, so that we should reject that concept. Do you have any thoughts about this? Quantum physics is notorious for seeming to violate basic laws of logic, like the law of non-contradiction or the law of excluded middle. So, if there is no ultimate reality for Madhyamaka, or for your understanding of quantum physics, are the tools of classical logic — or what are the tools within conventional discourse, broadly speaking — adequate for capturing what Madhyamaka is saying, or what quantum physics as you understand it is saying?

So let me answer specifically. Nagarjuna, from one perspective, can be viewed as a logician; his way of presenting things is characteristic of somebody who uses logic. But at first impact this sounds strange, because his main tool is the tetralemma, which presents the impossibility of four alternatives: one being A — say, "time exists"; one being not-A — "time does not exist"; the third being neither A nor not-A; and the fourth being both A and not-A. So it seems — wait a moment — we were taught in Logic 101 that either A or not-A holds, and that is the beginning of logic, so there seems to be a clash here. My impression that there is no clash is that the "not" of not-A is not the same "not" as the Aristotelian "not", and we can think of innumerable everyday experiences in which this whole set of possibilities is exactly what we would consider, so the exhaustive set is the four. I don't want to go into the technical specifics, but there is no alternative logic here; it's just a different way of using "not". So I don't see any clash with what we call logic — it's an interesting articulation, but not a clash, not a magic logic. The same is true with quantum mechanics. People have been arguing that we can understand quantum mechanics by changing the logic; I don't find that particularly clarifying. It's true — the particle doesn't go here, nor does it go there — so if we think of these as two alternatives, quantum mechanics can be phrased in an alternative logic; but all the alternative logics that I have found can be rephrased in terms of ordinary logic with different definitions. So I don't think that this is the point. That is the answer to your question about logic.

You know, the Mūlamadhyamakakārikā, his main treatise, which we're talking about — Nagarjuna's text — is very short, as you mentioned, Carlo, and some of the things that are not there, that are not written but are implied, also make it such a difficult text to understand: he is refuting many different schools of understanding an essence in reality. So when he does the tetralemma, one of its usages is to be complete in terms of all the different traditions or schools that are claiming some essence in reality, in order to refute them — and some do say that there is nothing, neither alternative, and some say things both do exist and do not exist. So I think he's using it more pedagogically, if you will, to refute all possible understandings of an intrinsic existence. That is some of the beauty of his work, and some of the difficulty in understanding it, because unless you're really well read and fully understand all the different positions, it's hard to know what he's doing at any one time.

I could comment on this, because it could be interesting. There is this sense, which Barry explained, in which Nagarjuna is somehow answering twelve possible counter-arguments at the same time, and there is also a very simple way to see that this is not really about a different logic. Take the double-slit experiment in quantum mechanics. What's the point there? You try to explain a certain set of experimental data by asking: where does the particle go? Does it go through slit A? Through slit B? Through both? Through neither? And none of these four possibilities explains what you are seeing on the screen. So what do you do? It's not that you've reached the conclusion that everything is wrong; it's that you throw away the presupposition. What was the presupposition? That the particle is somewhere. This is a straightforward use of logic; I don't see any weird logic going on there. — But then you also throw away the notion of a particle, if particles are that which have to be somewhere? — No, you throw away the idea that there is an intrinsic reality; that's what Nagarjuna does. If you continue doing that, then you throw away everything — which, personally, if you ask me, I don't agree with. I agree that there is no intrinsic reality, in the sense that whenever you assume such a thing you're going to fall into contradictions.

      This question regards the use of logic by Nagarjuna in his tetralemma and parallels in quantum mechanics.

      Jay L. Garfield has some interesting and insightful observations about how Nagarjuna's logic works, and it relates to the different types of experiences where such statements could make sense.

      https://hyp.is/go?url=http%3A%2F%2Fdocdrop.org%2Fvideo%2FHRuOEfnqV6g%2F&group=world

I was particularly struck by the fact that Barry didn't say "the mind is this and that." Barry said, well, the mind is many things — there's this and this, and this and this — and there are layers, in some sense, in which we can talk about it, or have some partial understanding, some wisdom, about it. And this layering I find absolutely brilliant from my perspective, because it dissolves the wrong question, which is: what is the mind, period? What is the thing which is the mind — let's just define it, characterize it, and understand what it is. That is a wrong way of thinking about it. When we think about our mind, of course we think of something — we somehow unite the set of processes that happen in me; it's about my thinking, my emotions — but it's not one thing. It's a complicated layering; there are many layers of discussion possible about it. I don't want to enter into the specifics, but I found this fascinating. And let me go to time immediately, because it's deeply related: I wrote the book about time, in which — Carlo, this is very timely, because we're also running low on time. — Absolutely, absolutely. And in the book I try to collect everything we have learned about time from science — from special relativity, from general relativity, from statistical mechanics, from other pieces — and what we tentatively learn about time from quantum gravity, which is my specific field. Once again, you have to put your hands on the notion of time, and the main message of the book — in fact, the single message of the book — is that the question "what is time?" is a wrong question, because when we think about time we think about a single thing. We think we have a totally clear idea about time: time is a single thing that flows from the past to the future; the past influences the present and the present the future; the present of the entire universe is real right now. And we learn from science that this way of viewing time is wrong — it's factually wrong. It's not true that we all proceed together from moment A to moment B, and that the amount of time that lapses between A and B is the same for everybody, and so on and so forth, because we learned otherwise from special relativity, general relativity, statistical mechanics and other things. So the way to think about time is that it's a very layered thing — this thing we call time is made of layers, conceptually — and when we look at the larger domain of our usual experience, some layers are lost. Some aspects, some properties, of what we call time are only appropriate for describing the temporal experience we have if we don't move too fast, if we don't look too far away, if we don't look at the atoms in detail as single degrees of freedom, and so on. So the notion of time opens up into a set of layers which become increasingly general only as you go down to the bottom level. Some aspects of time, like the universality of time, only make sense if we don't go to too-high velocities, for instance. So this is a similarity, and that's why the opening up of what the mind is into layers seems to me the right direction to go. If I ask, does a cat have a mind, or does a fly have a mind, it seems to me that the only answer is to get out of the idea that the answer is either yes or no. I suppose that certainly a cat has, you know, a sleepy feeling in the morning and a moment of joy when he sees his fellow cats, but I suppose a cat doesn't go through
a complicated intellectual game of trying to understanding what is reality and debating about that so there is some aspect in common uh either not break up 01:23:56 this this notion in in pieces once again uh i mean the the topic is what is real uh 01:24:08 if we start by saying time is real it's a beautiful chapter why you cannot say that time is an intrinsic existence uh we just get it wrong if we think well then atoms are real or the mind is real 01:24:21 all these answers we got it wrong we can say that things are real in a uh in a conventional sense within a context within a within a um 01:24:37 and and then we when we try to realize what you mean by uh something is real this is certainly real in a conventional sense but we realize that um reality the reality of this object 01:24:49 itself it gets sort of broken up into interdependence between this object and else and its different layers 01:25:02 and and that's the reality that as a scientist i can deal with not the ultimate reality the the conventional reality of course conventional reality is real as uh perry 01:25:15 was saying this is not a negation of reality uh it's a it's a it it's a freedom from the idea of the ultimate reality uh 01:25:27 the ultimate uh sort of intrinsic inherent reality being there on which in terms of which building progress

      Carlo resonates with Barry's layered explanation of mind from the Buddhist perspective: the mind is not some simple, unitary entity. Carlo wrote a book on time in which he applied the same layered thinking. Time behaves differently in different regimes: one way at the quantum level, another at the microscopic level, another at the human scale, and another at the galactic scale.

      In a sense, we tend to make the same type of category error whether it concerns our experience of time, of space, or of experience in general: we overgeneralize from an anthropocentric perspective. A large part of Jay L. Garfield's argument about cognitive illusions and the immediacy of experience rests on this point.

      https://hyp.is/go?url=http%3A%2F%2Fdocdrop.org%2Fvideo%2FHRuOEfnqV6g%2F&group=world

      Opaque mechanisms operate in both our sense organs and our mental machinery to give us this illusory feeling of immediacy of the sensed or cognized object.

      Uexküll's umwelt experiments on the snail, as explained by Cummins, are consistent with Carlo's perspective on time.

      https://hyp.is/go?url=http%3A%2F%2Fdocdrop.org%2Fvideo%2FG_0jJfliUvQ%2F&group=world

    3. [01:16:07] When we die, we go through eight stages, according to the Buddhist understanding. In the first four of those stages the elements dissolve: the solidity elements (we know they're not solid, but from a conventional perspective), the liquidity elements, the thermodynamic elements, and the movement or kinetic elements. When that fourth one happens, there's no more circulation of blood or of air: we don't breathe, we have no blood pressure, so we're declared clinically dead. But there are four more stages we go through, and those are when the mind becomes successively subtler; those are when we get into the non-dual minds, the most subtle minds. The last, eighth stage is called ösel in Tibetan, which we translate as luminosity or clear light. It's not light, but it is the most utterly clear mind. And if that mind goes on (if we don't die, if we meditate on that luminosity and sustain it through our meditation indefinitely), we can become a buddha. That's why an enlightened buddha is sometimes said to be in a deathless state: you don't actually die. Those would be the non-conceptual and non-dual minds. Just for completeness, those last four minds are called (these are technical terms, so they won't give you much understanding) white appearance, red increase, black near-attainment, and then this ösel, this luminosity. So that's the road map, if you will, for mind, and it's not the brain. On the gross level of thinking and the sensory minds, there is a very close connection with the brain. But when you die, the brain is supposed to be dead, and you're still alive; so these more subtle minds are not actually related to the brain. So we could really say that mind is experience; it's awareness; it's knowing: not knowing something, but the act of knowing. The most important qualities of mind are awareness and clarity. That gives you a rough idea of the Buddhist understanding of mind, or consciousness.

      Barry explains the different levels of mind as the body undergoes death, particularly the last four of eight progressively subtler states of mind, which are nondual and therefore not considered to depend on the brain.

    4. [01:11:50] The question you were asking was: what is mind, or consciousness? Here we're using the words synonymously. From a Buddhist perspective there are six of what we call primary minds, and then a whole slew of secondary minds; some of the more common systems include 51 secondary minds. Now, please understand that mind, like essentially everything else that exists in the world (there are a few exceptions), does not exist permanently. It changes moment to moment; therefore everything exists as a continuum, including mind. That means there will be a moment of mind followed by a next moment of mind, and so on, and the next moment of mind is determined primarily, but not solely, by the previous moment of mind. From that we can extrapolate an infinite continuum, and mind is an infinite continuum from the perspective of Buddhism. That implies rebirth, and it suggests we've had infinite rebirths; there has been no beginning. (This then runs up against the notion of a beginning creator, a so-called god; there are some problems here to resolve.) So mind is a continuum; it's infinite. Now, each moment of mind is made up of a primary mind and a constellation of secondary minds. The six primary minds are the five sensory minds (as you read from Nagarjuna) of seeing, hearing, smelling, tasting, and touching, plus what's sometimes called the mental consciousness, which has different levels of subtlety. At the grossest level is thinking. A little subtler is the dream mind: it seems as if the senses are active, but when we're sleeping the senses are inactive, so the appearance of sense activity is coming from the sixth, mental, consciousness. That dream mind is a little more subtle than the awake thinking mind. Subtler still (talking again about the awake mind) is intuition: when we're in intuition we're not thinking; it's a non-conceptual mind in that sense. And deeper yet are the minds we call non-conceptual and non-dual, where there is no awareness of a subject or an object: subject-object non-duality. So that's the rough lay of the land.

      Barry provides a brief summary of what the word "mind" means from a Buddhist philosophy perspective and says that there are six primary minds and 51 secondary minds.

      The six primary minds are the five senses plus mental consciousness, which itself consists of the coarse thinking (conceptual) mind, the intuitive mind (these two map roughly onto Daniel Kahneman's slow and fast systems, respectively), as well as the dreaming mind.

      Barry also conveys an interpretation of reincarnation based on the idea that the mind is never the same from one moment to the next, but is rather an ever-changing continuum. The current moment of mind is GENERALLY most strongly influenced by the previous moments, but also by temporally distant ones. This interpretation of reincarnation makes sense, as consciousness is born anew in every moment. It also aligns with the nature of the Indyweb interpersonal computing ecosystem, in which access to one's own private data store, the so-called Indyhub, allows one to experience the flow of consciousness by seeing how one's digital experience, which is quite significant today, affects learning on a moment-to-moment basis. In other words, we can see, at a granular level, how one idea, feeling, or experience influences another idea, experience, or feeling.

    5. [01:08:55] There's a crucial distinction between what Barry called three and four; that's what captured me. If you take the mind as fundamental, as the only existing thing into which the movie of the world is reflected, I am not happy; my culture rejects that as a useless point of view for doing science. But there is an alternative, much more interesting and, I find, much deeper, which I read in Nagarjuna, and which is what Barry seems to be calling the fourth alternative: the mind is not the fundamental thing in which everything is reflected; it is just one part of this interdependence. That is, the answer is not that things lack intrinsic existence while mind has intrinsic existence. There is a more interesting answer, namely that mind itself has no intrinsic existence. It has an existence, of course; my mind exists, and I exist. And if I think in terms of groups, all sentient beings or all human beings together (an idea present in some Western philosophy too), it is collectively, through language, that we create a vision of the world. But I want to think of this as one aspect of the ensemble of things which is existence, where nothing has intrinsic existence. So I want to think of my mind (my brain, my sensations, the people loving me, the image people have of me) as a set of processes which are part of the world. And it seems to me that Nagarjuna allows me to think of myself as part of the world, on the same ground as the world being reflected in my consciousness, without having to choose one of the two perspectives as the true one, the intrinsically existent one. All perspectives are empty; they are all good, but none of them is the one on which the rest is grounded; each of them I can understand dependently on something else. So, Marios, you read a verse or two from the third chapter of Nagarjuna; let me comment on that.

      Carlo points out the view he now holds, influenced by Nagarjuna's philosophy, that the mind exists, but does not intrinsically exist.

      So he argues that, on one (conventional) level, his mind and all other minds exist.

      He agrees with Barry's suggested fourth alternative: the mind is not the fundamental thing, but is just ONE PART of this interdependency. Each view, whether of a human or even of a non-human, is empty, yet conventionally exists in interdependence with many causes and conditions.

      From the Stop Reset Go perspective, the Indyweb, a web3 technology that can embody each individual's perspectival knowing through the establishment of a unique and privately owned data repository, can enhance the discovery of the process of emptiness. How? By theoretically holding all of one's (digital) interactions with the world, it lets one see in granular detail how one learns about the world and begin to sense the flow of the mind. Through repeated use of the Indyweb, and by witnessing how one forms new ideas or reforms old ones, the indyvidual becomes increasingly aware of oneself as a process, not a thing. Furthermore, one begins to see self-knowledge as hopelessly entangled with cultural and social learning, and to sense the 4Ps of propositional, perspectival, participatory and procedural knowing, themselves entangled with each other and with individual/social learning.

      https://docdrop.org/video/Gyx5tyFttfA/#annotations:vkOUgv8rEeypE39kg2ckCw

      Quick John Vervaeke interview on the 4Ps: https://hyp.is/go?url=http%3A%2F%2Fdocdrop.org%2Fvideo%2FERdJDVdbkcY%2F&group=world

      One especially begins to sense perspectival knowing and situatedness, and how causes and conditions unique to one's own worldview construct one's relative reality.

    6. [01:01:21] Let me comment on your quantum physics; I have only one objection. What you said about the two prototypical quantum puzzles, Schrödinger's cat and the double-slit experiment, is perfect. My only objection is that in my book, where of course I had a chapter about Schrödinger's cat, I don't use a situation in which the cat is dead or alive; I prefer a situation in which the cat is asleep or awake, just because I don't like killing cats, even in thought experiments. So, after replacing the dead cat with a sleeping cat, I completely agree. And let me come to the serious part of the answer. The passage you mentioned between the third and the fourth among the versions of Buddhist philosophy is exactly what I think is relevant for quantum mechanics, for the following reason. We read in quantum mechanics books that we should think not about a mechanical description of reality but about a description of reality with respect to the observer, and there is always this notion that there is an observer, or there are apparatuses that measure. But I am a scientist who views the world from the perspective of modern science, where one way of viewing the world is that there are billions and billions of galaxies, each with billions and billions of stars, probably with planets all around, and from that perspective the observer in any quantum mechanical experiment is just one piece in the big story. So I have found Berkeley's subjective idealism profoundly unconvincing from the point of view of a scientist, because there is an aspect of naturalism in which I grew up as a scientist which refuses to say that to understand quantum mechanics we have to bring in our mind. Quantum mechanics is not something that has directly to do with our mind, or with any observer or apparatus, because we use quantum mechanics to describe what happens inside the sun (the nuclear reactions there) or galaxy formation. So I think quantum mechanics is not about psychology, not about our mind, not about consciousness, not about anything like that; it is about the world. Then the question is what we mean by "real world," and that's fine, because science has repeatedly been forced to change its own ideas about the real world. But if, to make sense of quantum mechanics, I have to think that the cat is awake or asleep only when a conscious observer, a mind, interacts with it, I say no. There are interpretations of quantum mechanics that go in that direction.

       [Barry:] Am I correct to say the Copenhagen school does?

       [Carlo:] The Copenhagen school talks about the observer without saying who or what the observer is. The Copenhagen interpretation, which is the way most textbooks are written, describes any quantum mechanical situation in these terms: there is an observer making a measurement, and we talk about the outcomes of the measurements. So yes, it assumes an observer, but it is very vague about what an observer is. Some sharper interpretations, like QBism, take this notion of observer to be fundamental: an agent, somebody who thinks about and can compute the future; that is the starting point for doing the rest. I have always been unhappy with that, because things happen on the sun when nobody is there to be an observer of anything, and I want a way of thinking about the world in which things happen there independently of me, so to say. They might depend on one another, but why should they depend on me? And what should an observer be: a white Western scientist with a PhD? Should we include women? People without PhDs? Cats? Is the cat an observer? A fly? It's just not something I understand.

      Carlo goes on to address the fundamental question lying at the intersection of quantum mechanics and Buddhist philosophy: if a tree falls in the forest, does anyone hear it? Carlo rejects Berkeley's idealism and states that quantum mechanical laws describe the behavior of systems independently of whether an observer is present. He then invokes his own version of the Schrödinger's cat paradox to explain.

    7. [00:42:10] [Barry:] But before we do that, let me talk about something even more fundamental, which helps us understand the progression of thinking through those four schools to what is usually considered the most sophisticated, the Madhyamaka school: the distinction, which is really important, between existence and intrinsic existence, and between non-existence and no intrinsic existence. Without at least some idea of the Madhyamaka system, one is usually not able to make these distinctions, so let's talk about them for a moment. When we talk about existence, we talk about our ordinary understanding of what's real: things are objects; they may be in relationship, but what is in relationship are two distinct objects or entities. That is our normal understanding of existence. "Lacking inherent existence" or "intrinsic existence" begs the issue of understanding what intrinsic existence is, and that is the object of negation for the Buddha, for Nagarjuna, and for all those following in Nagarjuna's tradition, the Madhyamaka school. It is not so easy to wrap our heads around what intrinsic existence is; in a way it is so close that we miss it. It's a bit like staying in a new hotel room in a new city, waking up and looking for your glasses, unable to find them, and then realizing they're already on your face. Intrinsic existence is things existing independently: not existing through relationship, not existing dependently. And if we look at dependence, we can look at it at several levels. The more obvious level, which you've mentioned, Carlo, is cause and effect, causality. But there are also more subtle levels of dependence that the Buddha and Nagarjuna talk about, which are central to the philosophy. The second level is the relationship between whole and parts, and parts to whole; it goes both ways. And the third level, the subtlest, is really what we have to start to understand, because its opposite is this independent or intrinsic existence. This third level we call dependence through designation, or sometimes dependent designation; it is a type of naming or labeling. For example, my parents gave the name "Barry" based on a body, maybe a tiny infant body at that time, and also on behaviors, how they perceived this little baby's emotional structure: he's very calm, or he acts out a lot, he's very active, and so on. Upon all of that, a name is placed, in this case "Barry." That relationship of dependence through designation is really what Nagarjuna is talking about when we talk about dependence, and it is very important to understand. As for its opposite, this inherent or intrinsic existence, there are many English words we use synonymously: not existing intrinsically, or inherently, or independently, or "from its own side"; those are all synonyms for the Tibetan terminology I just mentioned. The second comparison is between non-existence and not inherently existent. When Nagarjuna says "no inherent existence," people often interpret that as "no existence at all," and they fall into a nihilism in which nothing exists at all. They haven't fully appreciated this notion of intrinsic existence: when we negate intrinsic existence, they take it to mean all of existence, and so they throw the baby out with the bathwater.

       [Carlo:] Can I interject something before you go ahead? You promised us the four schools, but can I make a comment here? We gave the title "What Is Real" to this event, and it seems to me that is exactly the distinction you made between existence and intrinsic or inherent existence. I found that idea central and essentially useful, for the following reason. The notions of reality and existence here are close: what exists is what is real. First, we make a distinction between illusory and real in our everyday life which is well founded. If I see a chair, and there is a mirror there, and I see a chair on the other side of the mirror, there is a precise sense in which the chair on the other side of the mirror is not real while this chair is real: I can sit on and touch this one but not that one. But then we realize that some aspects of what is illusory in the chair in the mirror are also shared by the chair I just called real, which is also illusory in some other sense; for instance, the fact of being a chair...

       [Barry:] You cut out for a moment, so I missed you. Could you repeat it? You were saying that the distinction between existence and inherent existence, and between non-existence and no inherent existence, is very helpful; after that I lost you.

       [Carlo:] I wanted to make a couple of points. One is that we use a distinction between illusory and real in everyday life, but then, through science, we realize that there are illusory aspects in the chair we just called real as well. One is then tempted to say: all right, there are many illusory aspects of that chair, but there is a more fundamental level at which there is a description of what is going on which is the real one. Eddington made this very vivid in his well-known distinction between the scientific table and the everyday table: look, I have two tables; there is the table at which I eat, which is solid, and there is the table I view with my scientific eyes, which is made of atoms and is not solid; there is a lot of emptiness in it (not emptiness in the negative sense; emptiness in a completely different sense).

       [Barry:] I've heard that the atom is 99.9 to the 12th power empty; is that right?

       [Carlo:] Yes, but that is of course not the negative emptiness; it is just the absence of atoms. And Eddington says, and people use this, that the chair I see as solid is illusory and the real chair is the atoms. This way of using the notions of real and existence, so that what exists is the atoms, is dangerously misleading, because it pushes us to try to resolve the relational and illusory aspects of the reality we see in terms of some basic, fundamental physical reality from which to derive them; or, in Western subjective idealism and its derivations, in terms of some sort of fundamental mind or fundamental subject which is a really existing entity: the Cartesian mind that is certain of its own existence, or the Kantian subject, or even the fundamentality of perception itself in Husserl and in phenomenology. So there is this Western need to anchor what we mean by "real" in something final: to admit that there is dependence, but then posit some basic ground on which everything builds up, on which to sit. And this is where I take the negative notion of emptiness to be useful: to get rid of this urge to find, beyond the illusory aspects of the world, a basic level which is real in the fundamental sense, the bottom line of the story, the solid terrain on which to anchor the ultimate, the end point of the line of dependence. What Nagarjuna says is that this is the wrong question. It is not only that the table is empty because I can understand it in terms of something else; that something else is also empty, because I can understand it in terms of something else again, down to the point at which emptiness itself is empty, because we should not take it as a fundamental metaphysical principle on which to ground all the rest.

       [Barry:] Putting this in slightly different terminology: emptiness is what allows functionality; emptiness is the lack of any kind of essence, even at the atomic level, and I think what you said is very true. When we look at the chair versus the reflection of the chair in the mirror, it gets a little more complicated, because both of them, of course, lack any independent existence; both are empty in terms of shunyata. Having said that, among the roughly ten metaphors the Buddha gave for something being illusory, one of the important ones was reflection: the reflection of the full moon in still water looks like the moon, but in fact, of course, it is a reflection. He also used such things as water in a mirage, the sound of an echo, and the like. Now let me mention two experiments, if I may, and you correct me where I'm wrong; I'm a pop physicist from the New York Times. One is the thought experiment of Erwin Schrödinger, the so-called Schrödinger's cat paradox. You have a sealed steel box containing a cat; no doors, no windows. You have a vial of very powerful acid connected to a radioisotope whose half-life equals the duration of the thought experiment, so there is a 50 percent chance the radioactive material decays. If it decays, it somehow pulls a lever and the acid spills, killing the cat; if it does not decay, the acid is not spilled and the cat remains alive. Quantum physicists call this superposition: the cat is both alive and dead. When you crack open the steel box and observe what's inside, the cat is either dead (if the radioisotope decayed and knocked over the acid) or alive (if it didn't); it is either/or, whereas when you can't observe it, it is both, a superposition. The second is the double slit: you shoot electrons or photons through two slits in a metal plate, with a screen behind, and you look at the pattern. If you have a little camera, an observation device, at the level of the slits, you find a pattern on the screen suggesting that what passed through the slits were particles; if you remove the observation device, you get an interference pattern suggesting that what went through were waves. So these two experiments, at least in my very superficial understanding, tell us that observer dependence is very important in terms of reality: whether or not there is an observer (and maybe what type of observer) very much influences and determines what's real. And that jumps into the four Buddhist schools of philosophy. Going up from the so-called least sophisticated, the third school is the one you alluded to, somewhat similar to Bishop Berkeley in the West and other idealists, which says that everything is consciousness, everything is mind, and things that seem to be solid out there in an external reality are nothing more than projections of our mind. That is actually a very sophisticated philosophy; one of the things it starts to do is break down this notion of a solid external reality. But its critique, as you also mentioned, is that it takes the mind to be somehow absolute or ultimate. So the highest, if you will most sophisticated, school, the Madhyamaka, says: what the Chittamatra, the mind-only school, says is correct up to a point, but the criticism is that there is no absoluteness about the mind either. You end up accepting an external reality and accepting a mind, but both (that is, every existent thing) exist in relationship, without any independence or objectivity. So that is, very roughly, the last two of the four Buddhist schools. The Madhyamaka is divided again into Prasangika Madhyamaka and Svatantrika Madhyamaka, using Tibetan terms borrowed from the Sanskrit. The Prasangika Madhyamaka is considered the most sophisticated: nothing at all has intrinsic existence. The Svatantrika Madhyamaka says that some conventional reality does exist from its own side, having some essence. So there is a little bit of a distinction and debate there. I just wanted to mention those things; I'd like you to comment.

      Kerzin differentiates between existence and intrinsic existence. Intrinsic existence is what the Buddha and what Nagarjuna is trying to negate.

      Rovelli makes a good point about a prevalent attitude that science offers a truer perspective than common sense, while Nagarjuna is pointing out that even the scientific explanation is not the final one. For one thing, it implicitly depends on the existence of a reified self who is the ultimate solidified existing agent and final authority, which Nagarjuna negates with his tetralemma.

8. And then the deepest level is what's maybe more relevant for our discussion. We call it all-pervasive; it's a conditioned kind of suffering, and it's conditioned by ignorance, the ignorance that does not understand reality correctly. And that's really the opposite of what we're talking about when we talk about shunyata or emptiness; this ignorance is 180 degrees opposite. So in that last level of suffering are really the underpinnings of all the other sufferings. If we can address and remove that level of ignorance, then all the other sufferings fall away. And you could also see it as all of our attachments that get us into trouble, our aversions that can end in anger and hatred, and now we see so much violence in the world, and our selfishness and our greed, and so on. All that just falls away when we begin to get rid of this ignorance, and we begin to not only intellectually understand emptiness but put it into our lives. It percolates down and starts to be part of our attitude, the way we think, the way we feel; it permeates every aspect of our sleeping and waking life. And when that happens, it's like a revolution. I don't speak from personal experience, because I don't have that, but according to the great saints and masters who do, it's like a total revolution: it's full of joy, it's complete love and compassion, there's no moment where there isn't, and it's always bathed in this wisdom of emptiness, taking things, as you said so beautifully in the beginning of your remarks, Carlo: everything is in relation, there are no discrete entities at all, there's no independent existence. I think those were the words used by David Bohm, who was a very close friend of His Holiness the Dalai Lama's, and, as you know, a well-recognized quantum physicist. So those are just some contextual opening remarks to put us in the right ball field, if you will, with an American background a baseball field; right, could be a soccer field.

All-pervasive ignorance is the ignorance at the deepest level. When that is removed, all the other, more superficial levels of ignorance go away as well.

      There is no independent existence. Everything is in relation.

9. I read that book in the translation by Garfield, and it was a shock for me. It's just a fantastic book, so it really blew my mind, and I spent a while, a summer, immersed in that book, trying to read everything I could get on Nagarjuna and thinking about it. And I ended up with two ideas, which I would just put on the table to discuss: one smaller, one larger. The smaller is that in Nagarjuna there are some basic ideas which are helpful to make sense of quantum mechanics. Not because Nagarjuna knew anything about quantum physics, of course he didn't, but I think that to do science we need ideas, and philosophy is very useful; we get from philosophy conceptual structures, ways of thinking, that are useful for making sense of better ways of understanding the world. And what is useful in Nagarjuna for quantum physics is the idea that it's better to think of the world not as entities or substance or things that have their own properties, but only through the interdependence of things; you don't understand anything by itself if not connected to the others. In fact, it's even more than that: I think what Nagarjuna shows is that if you accept that things affect one another, that's the only way of thinking. So the idea of a thing by itself, of things existing independently of anything else, of a fundamental reality, is not useful, and, Nagarjuna argues, contradictory. That's the specific idea. The bigger idea, which I found wonderful and which completely captured me, is a, for me, fascinating philosophical perspective, because it starts from the idea of separating a conventional reality and an ultimate reality, which is a very common perspective also in science and Western philosophy. You can read some of the evolution of science and Western philosophy as a search for this ultimate reality: is it matter, is it God, is it spirit, is it the mind, is it language, is it phenomenology, whatever you want. And the book of Nagarjuna is not a positive construction; it's a negative destruction. Every chapter takes away something: look, this by itself doesn't stay together, this doesn't stay together; it takes something away. And so the suggestion here is that maybe the question is wrong. We should not look for the ultimate reality; the ultimate reality doesn't exist, in a sense; it's the same thing as the conventional reality. That I found fantastic. It's dissolving a fake problem, in a sense, and opening up a sudden coherence, as I read it, with all my superficiality. It's not denying reality, right? It is here; I mean, this pen is this pen. It denies that there is ultimate reality in this pen, or in something on which this pen is based, including the mind. There's a superficial view of Buddhism in the West which is just "everything is the mind," the world is a big cinema and everything is in the mind of Berkeley; you were born in Hollywood, so I wonder if you got the illusory aspect of the world from Buddhism or from Hollywood, I don't know. But there's a chapter in Nagarjuna which denies that completely, because the mind itself does not have an ultimate reality, so you cannot found anything on the mind, nor on the dharma, nor on anything, up to the point... and then I conclude. So the reason I'm throwing my fascination with Nagarjuna on the table is this passage about the view, the emptiness of emptiness, which was the real moment in which Nagarjuna captured me. It's a point at which, translated the way I read it, probably superficially: all right, so everything is empty in the sense of not having an intrinsic reality, so therefore this emptiness is the foundation of everything. And Nagarjuna says: no, no, wait, this is the view, which is itself empty, in the sense that it depends on something else. This is suddenly extremely liberating, I think, and it impacted me intellectually. Suddenly I have a way to take away from my intellectual search the anguish of finding the foundations, which I find liberating. And even personally, thinking about myself not as an entity but as a combination of other things has definitely an effect on me as a human being. And so I guess what finally fascinated me in Nagarjuna is this anti-foundational aspect, taking away the starting point, its absolute radicality, when he says that nirvana and samsara themselves are, in some sense, empty, devoid of intrinsic reality.

Rovelli explains what is so profound about Nagarjuna's teaching: that EVERYTHING, including all the statements Nagarjuna himself makes, is empty.

1. There is another solution, tested by me and safe to use: just add _ to the real folder (example: foo becomes foo_), then simply delete your symbolic link, then remove the _ from your true folder. (Answered Dec 6, 2013 by vcorp.) Comments: "yeah, this is 100% the safest solution after you know that PowerShell does not give a s**t about rmdir" (test30); "This is a clever precaution. +1" (Hanna); "Warning: I think this MIGHT not work on Win10 since it's fixing shortcuts upon renaming (at least classic shortcuts). Not tested though." (Hexo); "I did this just in case. After I renamed the target folder, the symbolic link failed when I tried to access it, so I could delete it without worrying." (Andrew)
      • SOLUTION!
      • ok, tested!
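The precaution above can be sanity-checked with a dry run in a scratch directory. A minimal POSIX-shell sketch (the folder names foo and bar are hypothetical examples; the original answer targets Windows, where deleting a link with rmdir is where the risk actually arises, but the steps translate directly):

```shell
# Rename-first precaution: make the symlink dangle before deleting it,
# so the real data cannot be touched by the delete step.
set -e
cd "$(mktemp -d)"          # work in a throwaway directory

mkdir foo
echo "important data" > foo/data.txt
ln -s foo bar              # bar is a symbolic link pointing at foo

mv foo foo_                # 1. rename the real folder; bar now dangles
rm bar                     # 2. delete the symbolic link (data out of reach)
mv foo_ foo                # 3. restore the original folder name

cat foo/data.txt           # the real data survives intact
```

The point of the rename is exactly what Andrew's comment describes: once the target is renamed, the link fails on access, so you can delete it without worrying that the delete will follow the link into the real folder.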
1. To bring it to the present: the news cycle is about Russia and Ukraine. There are propositional things there, about what's the capital of Russia, what's the capital of Ukraine, where's the border. Then you have the sort of procedural: how do you discuss this matter, or how do we make decisions about this, what are the things in play. Then you have the question of perspective: how does it look in Moscow, how does it look in Kyiv, how does it look at the U.N.? And then you have something about participation: what's it like to be at the border in the east of Ukraine, for example? What identities are you assigning, what identities are you assuming, what roles? I mean nothing immediately politicized; I just mean to bring it to bear that in any given context there are always these multiple ways of knowing. That's a big thing.

      Example of applying the 4 P's to something topical at the time of this video: The Russia / Ukraine war.

2. So that's me trying to do a synoptic integration of all of the 4E cognitive science, and trying to get it into a form that I think would help make sense to people of cognition, and also in a form that's helpful to get them to see what we're talking about when I talk about the meaning that's at stake in the meaning crisis, because it's not just semantic meaning.

John explains how the 4 P's originated as a way to summarize and present, in a palatable way, the cognitive science "4E" approach to cognition: that cognition does not occur solely in the head, but is also embodied, embedded, enacted, or extended by way of extra-cranial processes and structures.

    1. Branding specialists point to the practical benefits of what they call the ‘modern utility’ of sans serif typefaces. Cleaner and more legible, they are better suited to a variety of media and work particularly well online. The purity of these fonts allows the brands to be an empty vessel, ready to accommodate rapidly shifting trends.

Given the number of platforms these logos are made for, companies are optimising their logos and brand names to display clearly on each and every platform.

      The good: homogenizing logos emphasizes the words behind them. The bad: Everything looks the same! It's boring! Where is the expression of fashion design? Of design at all? What user testing told these people to look just like everyone else with a few letters' difference?

    1. China Police Database Was Left Open Online for Over a Year, Enabling Leak

WSJ: China's police database was left open to the public internet for more than a year, leading to the leak.

Source: The Wall Street Journal (https://www.wsj.com/articles/china-police-database-was-left-open-online-for-over-a-year-enabling-leak-11657119903?mod=djemalertNEWS)

China Police Database Was Left Open Online for Over a Year, Enabling Leak. Cybersecurity experts say the error allowed theft of records of nearly 1 billion people, leading to a $200,000 ransom note. By Karen Hao in Hong Kong and Rachel Liang in Singapore, July 6, 2022 11:05 am ET. What is likely one of history's largest heists of personal data—and the largest known cybersecurity breach in China—occurred because of a common vulnerability that left the data open for the taking on the internet, say cybersecurity experts who discovered the security flaw earlier this year.

      The Shanghai police records—containing the names, government ID numbers, phone numbers and incident reports of nearly 1 billion Chinese citizens—were stored securely, according to the cybersecurity experts. But a dashboard for managing and accessing the data was set up on a public web address and left open without a password, which allowed anyone with relatively basic technical knowledge to waltz in and copy or steal the trove of information, they said.

      “That they would leave this much data exposed is insane,” said Vinny Troia, founder of dark web intelligence firm Shadowbyte, which scans the web for unsecured databases and found the Shanghai police database in January.

The database stayed exposed for more than a year, from April 2021 through the middle of last month, when its data was suddenly wiped clean and replaced with a ransom note for the Shanghai police to discover, according to Bob Diachenko, owner of the cybersecurity research firm SecurityDiscovery, which similarly found the database—and later the note—through its periodic web scans earlier this year.

      “your_data_is_safe,” the ransom note read, according to screenshots provided by Mr. Diachenko. “contact_for_your_data…recovery10btc,” meaning the data would be returned for 10 bitcoin, roughly $200,000.

      The ransom amount matches the price that an anonymous user began asking for last Thursday on an online cybercrime forum in exchange for access to a database the user claimed contained billions of records of Chinese citizens’ information stolen from a Shanghai national police database.

      The post, which began circulating on social media over the weekend, alarmed cybersecurity experts not just for the leak’s size but also because of the sensitivity of the information contained in the government database.

      The Shanghai government and the Cyberspace Administration of China, the country’s internet regulator, didn’t respond to requests for comment.

      Cybersecurity experts have pieced together new evidence of the database’s authenticity and details of how so much private information could have fallen in the hands of cybercriminals.

      The dashboard acted like an open door to the data vault, they say, which wasn’t closed—even after all the data went missing—until the vulnerability began gaining widespread public attention. Whoever stole the data is likely the same entity that is peddling it, according to Mr. Troia.

      “What’s pretty common is if the ransom victim doesn’t pay the ransom, then they’ll try to sell the data off online,” he said.

      It couldn’t be determined whether the database was made publicly accessible by accident or on purpose, perhaps to share the data more easily among a few people. Such vulnerabilities are common, Mr. Troia and Mr. Diachenko said, though both said they were shocked to find an unsecured database of this size.

      Both said they also corroborated the anonymous leaker’s claims that it includes over 23 terabytes of data covering as many as a billion individuals. One file named “person_address_label_info_master”—which contains people’s names, birthdays, addresses, government IDs and ID photos—runs close to 970 million rows long, they said, which suggests it includes details on just as many people, assuming no duplicate entries.

      That file marks individuals who have a criminal history, and includes people with traffic violations, those considered fugitives and those who have been accused of rape or homicide. It also includes a label for “people who should be closely monitored,” a designation often used in China’s government surveillance systems to denote people seen as posing a threat to social order.

The data leak highlights what some policy researchers have described as the central contradiction in China's approach to information security.

      In recent years, Beijing has signaled that data security and privacy are a priority, passing a series of laws and regulations designed to restrict commercial collection of sensitive data, including personal information, and keep it within the country’s borders. At the same time, the government has itself continued to collect vast amounts of data through a nationwide digital surveillance apparatus to exert tighter control over Chinese society.

That the information was leaked from a government agency—and now has an unknown number of copies circulating outside of the country's borders—could undermine Beijing's argument that such a system protects national security, some China tech-policy experts say.

      “It’s unclear who holds who accountable,” Kendra Schaefer, the head of tech-policy research at Trivium China, a Beijing-based strategic advisory consulting firm, wrote on Twitter in response to the leak. She said it is typically the Ministry of Public Security, which oversees local police agencies such as the Shanghai police, that is responsible for cybercrime investigations.

      The Chinese government hasn’t commented on the data leak, and references to it on Chinese social media are quickly being scrubbed.

      Some Chinese-speaking users of Twitter, including the chief executive of cryptocurrency exchange Binance, speculated that the leak stemmed from a 2020 technical blog post published by a user on CSDN, a Chinese developer forum similar to Github, that appeared to inadvertently include the access credentials to a Shanghai police server.

      Mr. Troia and Mr. Diachenko said the database, based on its configuration, in fact didn’t need access credentials at all, making that theory unlikely. The fault was with the person who set up the dashboard, they said.

      Write to Karen Hao at karen.hao@wsj.com and Rachel Liang at rachel.liang@wsj.com

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved. Appeared in the July 7, 2022, print edition as 'Police Database in China Was Open Online for Over a Year.'


    1. Dogen can be very difficult to read or understand. That’s why we often need a commentary or teacher to introduce his way of writing and the underlying teaching. I often say he’s a thirteenth century cubist. Just like Picasso or in the writing world, Gertrude Stein, he tries to show all sides of the story in one paragraph or even one sentence. That is why he repeats himself and contradicts himself all in the same paragraph. If you are looking for the “right” understanding, you become confused and lost in his prism of various interpretations or views. Dogen’s “right” understanding is that there is none.   No one point of view is “right”. According to conditions, any view can be the right view in the right circumstance. Dogen really wants to take away our solid idea of a fixed ground of reality. It is not form or emptiness. It is not both or neither. There is no one right, fixed view. That is our “clinging”.

      Dogen contradicts himself because he tries to show "all sides of the story". His teaching is a "pointing out" instruction that ANY viewpoint is simply that, perspectival knowing.

An important question then is this: if Dogen (and Nagarjuna) are claiming that there is no objective reality in our constructed world of concepts and language, is science being denied? Is fake news OK? Is this a position that basically accepts postmodernism? No, I would say no to all of these. It's pointing out the LIMITATIONS of concepts and language. They are incomplete and always leave us with a sense of wanting more. And since postmodernism is also one point of view, it is also thrown out by Dogen and Nagarjuna. Remember, ALL points of view are points of view. Fake news is also a point of view, so those who practice it also cannot justify it.

What Dogen and Nagarjuna are saying is that as soon as one enters the world of concepts and language, any concept, and any side taken, is inherently one-sided. It is inherently perspectival and situated in an inherently incomplete conceptual space.

      As Tibetan doctor/monk Barry Kerzin points out in this conversation with physicist Carlo Rovelli, there is a critical difference between "existence" and "intrinsic existence". The first is not being denied by Nagarjuna, but the second, intrinsic existence, the existence of concepts and the words that represent them, is. If these two are confused, it can lead straight to nihilism.

      https://hyp.is/go?url=http%3A%2F%2Fdocdrop.org%2Fvideo%2FsPSMTNjwHZw%2F&group=world

      This also aligns with John Vervaeke's perspectival and propositional knowing in his 4 P ways of knowing about reality: Propositional, Perspectival, Participatory and Procedural. A good explanation of Vervaeke's 4Ps is here: https://hyp.is/go?url=http%3A%2F%2Fdocdrop.org%2Fvideo%2FGyx5tyFttfA%2F&group=world

1. Okay, so imagine a person learns a lot about hammers and becomes very skillful with using one. This is useful, but is every problem a nail? We might need another kind of knowing that sees the situation we find ourselves in and knows what skills are appropriate. John Vervaeke calls this perspectival knowing. It's not about knowing the truth or knowing how to enact something; it's knowing how to perceive the world, how to take it all in from our perspective. And with this we will know how things feel, and we can appreciate what really matters, or sense what is needed in the situation. It's not just thinking about our perspective but really inhabiting it, getting a taste for how things appear to us. The result of perspectival knowing is presence, and with presence we also appreciate the limits of our perspective and the value of other perspectives. We become more aware, we see how things fit together, we put things in perspective, we become perceptive.

Third P: Perspectival knowing: being aware of our situatedness, we can empathize, put ourselves in others' shoes.

    1. For Huh, when he is working, there’s something almost subconscious going on. In fact, he usually can’t trace how or when his ideas come to him. He doesn’t have sudden flashes of insight. Instead, “at some point, you just realize, oh, I know this,” he said. Maybe last week, he didn’t understand something, but now, without any additional input, the pieces have clicked into place without his realizing it. He likens it to the way your mind can surprise you and create unexpected connections when you’re dreaming. “It’s just amazing what human minds are capable of,” he said. “And it’s nice to admit that we don’t know what’s going on.”

      This sounds very similar to something I've noticed within myself. I can struggle with something, then when I come back to it a day or two later, I just know.

    1. the only remaining dispute is whether, given these facts, we should baptize the action of paying in the city with the word “rational,” or if we should instead call it “an irrational action, but one that follows from a disposition it’s rational to cultivate

      Doesn't this just depend on whether paying in the city is a juncture at which we apply free will, in the same way that when free will applies is central to Newcomb's problem?

    1. He can add marginal notes and comments, taking advantage of one possible type of dry photography, and it could even be arranged so that he can do this by a stylus scheme, such as is now employed in the telautograph seen in railroad waiting rooms, just as though he had the physical page before him.

      We have gotten away from written annotations for digital work and I'm not entirely sure it's a good thing. I want to think through the trade-offs of this.

    1. https://www.youtube.com/watch?v=7s4xx_muNcs

      Don't recommend unless you have 100 hours to follow up on everything here that goes beyond the surface.

      Be aware that this is a gateway for what I'm sure is a relatively sophisticated sales funnel.


      Motivational and a great start, but I wonder how many followed up on these techniques and methods, internalized them and used them every day? I've not read his book, but I suspect it's got the usual mnemonic methods that go back millennia. And yet, these things are still not commonplace. People just don't seem to want to put in the work.

      As a result, they become a sales tool with a get rich quick (get smart quick) hook/scheme. Great for Kwik's pocketbook, but what about actual outcomes for the hundreds who attended or the 34.6k people who've watched this video so far?

      These methods need to be instilled in youth as it's rare for adults to bother.


      Acronyms for remembering things are alright, but not incredibly effective as most people will have issues remembering the acronym itself much less what the letters stand for.


      There seems to be an over-fondness for acronyms for people selling systems like this. (See also Tiago Forte as another example.)

to do our own work, to develop our own teams, to grow our own networks. So based on that, we decided to organize a movement to build these kinds of new models, to arrive at much more sustainable public-goods funding: not just sustainable, ideally regenerative systems with positive externalities, not just sustaining themselves at some level but actually creating a lot more value around themselves. And we hope to also create structures for much better value alignment within these networks. So we decided to throw an event last year, so it's less than a year ago. There are probably a number of other people who helped put this on; in my memory yesterday I remembered a set of folks who are here, whom I want to thank for driving this and really creating this event. But it really takes a village to put this on, especially the PL events team and many others who have helped. And since then we've now had three events, two virtual and one in person, and we're scaling the community in the size of the conversations, the systems that we're reviewing, the mechanisms that we're exploring, the studies that we're doing, and so on. So in this conference we've gone from 11 to 18 talks and now 56. I really encourage you to attend all of them, simultaneously of course; of course you can do that later in time, they're all recorded. And we're also very fortunate to be working with a whole bunch of other folks in the ecosystem building out the broader public-goods movement in the blockchain space. Great thanks to the Gitcoin community and Schelling Point and many other groups that are very focused on building regenerative structures. So all of this leaves me very hopeful. Our impact so far has been to explore a set of funding mechanisms; here are a few that I pulled from the YouTube channel. A bunch of these mechanisms are explained and explored, and so on; some of them also have some experimental review. It's still early days, so a lot of it is still not very systematic, not very well experimented upon, and so on, but I'd love to crank that up and get to drastically better study, to the point where we can analyze these systems with the same level of rigor with which we analyze things like network protocols or hardware devices. We've also revived the impact-certificates idea and field, and we've gotten to explore a number of novel entity types; I know that a few of these are actually getting booted up now, which is really awesome impact for just a few months of talking about things. And we've talked about some coordination systems that could be extremely useful. I think this is a very promising area, but probably understudied, and an area that seems much more difficult to get traction on, so it doesn't get studied as much.

      Funding the Commons Event

    2. Just to give you a feel for how powerful these systems are, think of the Bitcoin energy consumption and realize that it drops out of just two components in Bitcoin: one, the block reward impact evaluator, and two, the price of Bitcoin. Those two things yield this tremendous energy-consuming system. This was kind of an accident; nobody quite intended this device to consume and waste this amount of energy. But it gives you a sense of the power of these systems. First off, we should fix this and get to better systems that actually make this energy use useful, but I use it as an example to convey the level of power that comes from these incentive structures and their operation at scale. In Filecoin we're very familiar with these kinds of structures; we use the same component, and we've gotten a feel for how powerful this stuff is. In just a couple of years we ended up organizing the build-out of a massive hardware infrastructure for providing storage to the world, again using just one core incentive structure, a block reward. So all of this makes me really hopeful that we'll be able to build incentive structures that can scale to solve extremely large planetary-scale problems: by designing incentive structures, warping the incentive fields, and getting us, little by little, problem by problem, scale by scale, to solve challenges. If you aren't already in this world, I greatly encourage you to try it out, to try creating some smart contracts and deploying them, to try working with other projects and so on, to get a feel for how powerful these systems are. I'm very hopeful that things like this will have a huge impact on planetary-scale problems like climate change. I've become very hopeful that these systems will let us coordinate massive action, again millions of people, billions of people, whole industries, by giving us the full power of law and economics and so on in a fully programmable environment. I'm also very hopeful that we can accelerate science and technology development by using these kinds of structures to create instruments that incentivize underserved areas of the innovation chasm, areas where it's extremely difficult to get funding for certain projects or to get long-term rewards or long-term success. Many of you have probably heard me talk about this science and technology translation problem and the lack of incentive structures in that period, in the chasm in the middle; I think a lot of that comes from the lack of reward structures there, which makes it impossible for groups building projects in that space to raise capital, because there's no good incentive for capital to deploy there. So what brought us here? Knowing all of this, knowing that this is a critical century, a critical decade and year, and knowing that crypto-economics is extremely powerful, why are we here, why are we at Funding the Commons? We thought about this problem last year, and we saw that the scale of blockchains, the rapid pace of development in the industry, the emergence of things like DeFi and DAOs and NFTs, and especially the broad adoption of these tools by hundreds of thousands or millions of people, gave us a very promising landscape in which to solve these kinds of problems. We have the potential to solve all these massive coordination problems, but we're lacking good mechanisms: we need way better governance structures, way better funding mechanisms, and so on, and we need to study these things with much deeper theory and much deeper experimental analysis.

      Bitcoin, in spite of its unintended consequences, does demonstrate the power and potential of these kinds of systems to scale.

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



      Reply to the reviewers

      Manuscript number: RC-2021-01016

      Corresponding author(s): Dennis Klug

      1. General Statements [optional]

      Dear editor, dear reviewers,

      Thank you very much for the quick review of our manuscript as well as for the constructive criticism and the interesting discussion of our results. Reading the comments, we realized that we may have put too much emphasis on the in vivo microscopy of sporozoites and their interaction with the salivary gland. We believe that the generated mosquito lines can be used to address different scientific questions, the in vivo microscopy of host-pathogen interactions being only one of them. Because of this imbalance, and to address some of the reviewers' comments, we have partially rewritten the manuscript (particularly the introduction). At the same time, we have included additional data on the inducibility of the promoters used, as well as on the functionality of hGrx1-roGFP2 in the salivary glands. Furthermore, we created an additional figure to better present the expression patterns of trio and saglin promoters within the median lobe, and we expanded the section on in vivo microscopy of sporozoites. We hope that these results further highlight the significance of our study. Accordingly, we have also changed the title of the manuscript to "A toolbox of engineered mosquito lines to study salivary gland biology and malaria transmission" to indicate the broad applicability of the generated mosquito lines, and we have included an additional co-author, Raquel Mela-Lopez, who conducted the redox analysis. We hope that these changes adequately answer the questions of the reviewers and address any concerns they may have had. We look forward to hearing from you.

      With our kind regards,

      Dennis Klug

      Katharina Arnold

      Raquel Mela-Lopez

      Eric Marois

      Stéphanie Blandin

      2. Point-by-point description of the revisions

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      **Summary**

      This manuscript reports the generation and characterization of transgenic lines in the African malaria mosquito Anopheles coluzzii that express fluorescent proteins in the salivary glands, and their potential use for in vivo imaging of Plasmodium sporozoites. The authors tested three salivary gland-specific promoters from the genes encoding anopheline antiplatelet protein (AAPP), the triple functional domain protein (TRIO) and saglin (SAG), to drive expression of DsRed and roGFP2 fluorescent reporters. The authors also generated a SAG knockout line where SAG open reading frame was replaced by GFP. The reporter expression pattern revealed lobe-specific activity of the promoters within the salivary glands, restricted either to the distal lobes (aapp) or the middle lobe (trio and sag). One of the lines, expressing hGrx1-roGFP2 under control of aapp promoter, displayed abnormal morphology of the salivary glands, while other lines looked normal. The data show that expression of fluorescent reporters does not impair Plasmodium berghei development in the mosquito, with oocyst densities and salivary gland sporozoite numbers not different from wild type mosquitoes. Salivary gland reporter lines were crossed with a pigmentation deficient yellow(-) mosquito line to provide proof of concept of in vivo imaging of GFP-expressing P. berghei sporozoites in live infected mosquitoes.

      **Major comments**

      Overall the manuscript is very well written with a clear narrative. The data are very well presented. The generation of the transgenic mosquito lines is elegant and state-of-the art, and the new reporter lines are thoroughly characterized.

      This is a nice piece of work that is suitable for publication, although the in vivo imaging of sporozoites is somewhat preliminary and would benefit from additional experiments to increase the study impact.

      We would like to thank the reviewer for his/her appreciation of our manuscript. In the revised version, we have included additional experiments on in vivo imaging of sporozoites, which allowed us to quantify moving and non-moving sporozoites imaged under the cuticle of live mosquitoes. Although this is still a proof of concept, we believe that these new data are interesting in their own right and better illustrate potential applications.

      The reporter mosquito lines express fluorescent salivary gland lobes, yet the authors only provide imaging of parasites outside the glands. It would be relevant to provide images of the parasite inside the fluorescent glands.

      We have now included images showing sporozoites inside the salivary glands in vivo in Figure 8C and discuss possible ways to further improve resolution and efficiency of the imaging procedure in lines 563-586.

      The advantage of the pigmentation-deficient line over simple reporter lines is not clear, essentially due to the background GFP fluorescence in figure 5C. Imaging of GFP-expressing parasites should be performed in mosquitoes after excision of the GFP cassette under control of the 3xP3 promoter. This would probably allow the value of the reporter lines to be documented more convincingly.

      Indeed, by incorporating two lox sites in the transgenesis cassette, we designed the yellow(-)KI line to permit removal of the fluorescence cassette and thereby completely exclude expression of the transgenesis reporter EGFP. Still, EGFP expression in yellow(-)KI adults is restricted to the eye and ovary, as we now show in Figure 7 supplement 1D; no EGFP fluorescence was observed in the thorax area (Figure 7 supplement 1D). Therefore, we believe that the benefit of removing the fluorescence cassette for this study is limited. Moreover, the generation of such a line would take at least 3-4 months before experiments could be performed. Nevertheless, we agree with the reviewer that removal of the fluorescence cassette would be instrumental for follow-up studies. To draw the reader's attention to this issue, we now discuss background fluorescence in lines 378-387.

      Along the same line, it is unclear if the DsRed spillover signal in the GFP channel is inherent to the high expression level or to a non-optimal microscope setting. This is a limitation for the use of the reporter lines to image GFP-expressing parasites.

      We have discussed this problem with the head of the imaging platform at our institute, and we believe that it is not caused by incorrect settings. Rather, it seems to be due to the large difference in expression level between the two fluorescence reporters used. We agree with the reviewer that this is a limitation and now discuss the problem in lines 412-416 and 565-567.

      The authors should fully exploit the SAG(-) line, which is knockout for saglin and provides a unique opportunity to determine the role of this protein during invasion of the salivary glands. This would considerably augment the impact of the study. In this regard, line 131 and Fig S3E: why is there persistence of a PCR band for non-excised in the sag(-)EX DNA?

      We definitely share the reviewer's enthusiasm about saglin and its role in parasite development in mosquitoes. We have thoroughly characterized the phenotype of sag(-) lines with respect to fitness and Plasmodium infection. These results are described in a separate manuscript currently in peer review and available as a preprint on bioRxiv (https://doi.org/10.1101/2022.04.25.489337). Furthermore, in the revised manuscript, we have included additional data on the transcriptional activity of the saglin promoter with respect to the onset of expression and blood meal inducibility (Figure 2). In addition, we have included a completely new Figure 3 to highlight the spatial differences in transcriptional activity of the saglin promoter compared with the trio promoter. These new data are commented in lines 206-276.

      There might be a misunderstanding in the interpretation of the genotyping PCR. The PCR shown in Figure 1 – figure supplement 3 displays PCR products for different genomic DNAs (sag(-)EX, sag(-)KI and wild type) using the same primer pair. "Excised" refers to sag(-)EX, "non-excised" to sag(-)KI, and "control" to wild type. Primers were chosen to yield a PCR product whenever the transgene has integrated; only the shift in size between "excised" and "non-excised" indicates the loss of the 3xP3-lox fragment. We have now changed the labeling of the respective gel in Figure 1 – figure supplement 3 to make this clearer.

      Did the authors search for alternative integration of the construct to explain the trioDsRed variability?

      We validated trio-DsRed cassette insertion in the X1 locus by PCR. The only way to rule out an additional integration of the transgene would be whole genome sequencing, which we did not perform. Still, we believe that the observed expression patterns are due to locus-specific effects of the X1 locus. Indeed, several lines of evidence point in this direction: (1) transgenesis was realized using the phage ΦC31 integrase, which promotes site-specific integration (attP is 38 bp long and very unlikely to occur as such in the mosquito genome), and for which we have never detected insertion at other sites in the genome for other constructs inserted in X1 and other docking lines; (2) additional unlinked insertions would have been easily detected during the first backcrosses to WT mosquitoes that we perform in order to isolate the transgenic line and homozygotise it; (3) we have often observed variegated expression patterns for other transgenes located in the X1 locus in the past, leading us to believe that this locus is subject to variegation influencing the expression of the inserted promoters. Usually, the variation we observe is simpler (e.g. strong and weak expression of the fluorescent reporter placed under the control of the 3xP3 promoter in the same tissues where it is normally expressed), but some promoters are more sensitive to the nearby genomic environment than others, which we believe is the case for trio. Finally, should there be additional insertions of the transgenesis cassette in the genome, they would all have to be linked to the X1 locus, as we would otherwise have detected them in the first crosses as mentioned above; this is unlikely. Thus, although very unlikely, we cannot exclude a single additional, linked insertion possibly explaining the high/low DsRed patterns, but variegation would still be required to explain other patterns. We have mentioned this alternative explanation in the manuscript in lines 522-524.

      Line 254-255. Does the abnormal morphology of SG from aapp-hGrx1-roGFP2 result in reduced sporozoite transmission?

      This is an interesting question. For future experiments, it could indeed be important to test whether transmission of sporozoites by the generated salivary gland reporter lines is impaired. However, quantification of sporozoite numbers in aapp-hGrx1-roGFP2-expressing salivary glands did not reveal any significant differences from the wild type (Figure 5 – figure supplement 1B), and these numbers would certainly be sufficient to infect mice. As we have no evidence for reduced invasion of sporozoites into the salivary glands of aapp-hGrx1-roGFP2 or of the DsRed reporter lines, no good reason to believe that the expression of fluorescent proteins would interfere with parasite transmission, and as we produced these lines as tools to follow sporozoite interaction with salivary glands, we have not performed transmission experiments.

      Of note, we have now included images of highly infected salivary glands of all reporter lines in Figure 5 – figure supplement 1D to confirm that expression of the respective fluorescence reporter does not interfere with sporozoite invasion. We also did not observe any exclusion of sporozoites from salivary gland areas displaying high levels of hGrx1-roGFP2.

      **Minor comments**

      -Line 51: sporogony rather than schizogony

      Schizogony was replaced with sporogony.

      -Line 56: sporozoites are not really deformable as they keep their shape during motility

      This sentence was removed.

      -In the result section, it is not clearly explained where constructs were integrated.

      We have now included the sentence "...with an attP site on chromosome 2L..." (line 173) and the respective reference (PMID: 25869647) to give more information about the integration site.

      Line 106 and 434-435: for the non-expert reader, it is not clear what X1 refers to, strain or locus for integration?

      X1 refers to both the locus and the docking line. We have rephrased the beginning of the result section (previously line 106) to give more information about the integration site, as mentioned above.

      -Line 112-115: the rationale of integrating GFP instead of SAG is not clearly explained here, but becomes clearer in the discussion (line

      We have slightly rephrased the sentence to better explain the reasoning for this procedure (lines 182-184).

      -Line 140: FigS2A instead of S3A

      This mistake was corrected in the revised manuscript.

      -Perhaps mention that GFP reporters (SG) might be useful to image RFP-expressing parasites.

      We have now included an image of the aapp-hGrx1-roGFP2 line infected with an mCherry-expressing P. berghei strain in Fig. 7D.

      -Line 236: the authors cannot exclude integration of an additional copy (as mentioned in the discussion line 367-368).

      As discussed above, we removed "...as a single copy..." and introduced the possibility of an additional integration linked to X1 (lines 522-524).

      -Line 257-258. The title of this section should be modified as SG invasion was not captured.

      The title was rephrased. It now reads "Salivary gland reporter lines as a tool to investigate sporozoite interactions with salivary glands" (lines 356-357).

      -Line 287: remove "considerable number" since there is no quantification.

      This was removed. In addition, we included new data in this section of the manuscript and rephrased the results accordingly (lines 406-427).

      -Line 400-402: Klug and Frischknecht have shown that motility precedes egress from oocysts (PMID 28115054), so the statement should be modified.

      Thank you for this suggestion. The passage was modified accordingly.

      -Line 404: remove "significant number" since there is no quantification.

      This section was rephrased and the phrase "significant number" was removed (lines 406-427).

      -Line 497: typo "transgenesis"

      The typo was corrected in the revised manuscript.

      -FigS1: add sag-DsRed in the title

      Thank you for spotting this inconsistency, we corrected this mistake (line 1134).

      -Stats: Mann Whitney is adequate for analysis in fig 2C but not 2B, where ANOVA should be used (more than 2 groups).

      We have now performed a one-way ANOVA and adapted the figure and figure legend accordingly.
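      For reference, the reviewer's point is that a one-way ANOVA generalizes the two-group comparison to k groups by comparing between-group to within-group variance. A minimal pure-Python sketch of the F statistic (the function name and sample data are ours; in practice a statistics package such as scipy.stats.f_oneway would be used):

```python
def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA across k groups of measurements."""
    k = len(groups)                                   # number of groups
    n = sum(len(g) for g in groups)                   # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical example: three groups with identical values have no
# between-group variance, so F is exactly zero.
print(one_way_anova_f([1, 2, 3], [1, 2, 3], [1, 2, 3]))  # 0.0
```

      A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that at least one group mean differs, which is why ANOVA, rather than a pairwise Mann-Whitney test, is appropriate when more than two groups are compared.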

      Reviewer #1 (Significance (Required)):

      This work describes a technical advance that will mainly benefit researchers interested in vector-Plasmodium interactions. Invasion of salivary glands by Plasmodium sporozoites is an essential step for transmission of the malaria parasite, yet remains poorly understood as it is not easily accessible to experimentation. The development of transgenic mosquitoes expressing fluorescent salivary glands and with decreased pigmentation provides novel tools to allow for the first time in vivo imaging in live mosquitos of the interactions between sporozoites and salivary glands.

      Reviewer's expertise: malaria, Plasmodium berghei, genetic manipulation, host-parasite interactions

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      The first achievements of the Klug et al. study are (i) the genetic engineering of insectary-reared Anopheles coluzzii mosquitoes that stably express distinct fluorescent reporters (DsRed, hGrx1-roGFP2 and EGFP) under the putative "promoters" of genes reported to encode proteins expressed differentially in the multi-lobed salivary glands (Sg) of anthropophilic blood-feeding adult females, and (ii) the analysis of the promoter activity, based on the selected fluorescent reporter, with a primary focus on the salivary glands (including at the Sg lobe level) of the adult female but also considering preimaginal development with larva and pupa samples. Of note, some data confirm the time-dependent and blood meal-dependent promoter activity already reported for related Anopheles species. The last part presents a preliminary dataset on live imaging of Plasmodium berghei sporozoites with the aim of highlighting the usefulness of these A. coluzzii transgenic lines to better understand how rodent Plasmodium sporozoites first colonize and then settle as packed cells in Sg acinar host cells.

      **Major comments**

      The first two objectives presented by the authors have been convincingly achieved with (i) the challenging production of four different lines expressing different single or double reporters chosen by the authors (and appropriately presented in the result text and figure sections), and (ii) the careful analysis of the spatiotemporal expression of the DsRed reporter under the two "promoters" studied and with regard to the blood feeding event. However, if the reason why the authors have put so much effort into the production of their transgenic mosquitoes is, as mentioned, to provide a significantly improved setting enabling the behavioral analysis of sporozoites upon colonization of and survival in the Sg, this part seems rather limited. Likely related to this perception, I found the introductory section often confusing and not direct enough on the key points: in particular, distinguishing the rationale from the necessity to produce appropriate models, and clarifying the added value(s) offered by these new transgenic line models compared to what exists (in Anopheles stephensi), with specific evidence arguing for this knowledge gain. At this stage, it is unfortunately not clear to me what the bonus of imaging fluorescent Plasmodium sporozoites in hosts with fluorescent salivary gland lobes is, if one cannot monitor key events of the Sg-sporozoite interaction that were not reachable without the fluorescent mosquito lines. Furthermore, it should be better explained why the rodent Plasmodium species was chosen rather than Plasmodium falciparum (or other human species) for which A. coluzzii is a natural host; perhaps just mentioning that this study serves as a proof of concept, rather than bringing real biological insights, would be fine.

      We would like to thank the reviewer for his/her evaluation of our manuscript, which has helped us clarify our manuscript on several points. Our goal here was a proof of concept demonstrating potential applications for the fluorescent salivary gland reporter lines and for the low pigmented yellow(-) line we generated. In vivo imaging of sporozoites in salivary glands is one possible application that we intended to use as proof-of-concept, but we tailored the manuscript too restrictively with this aim in mind and neglected other applications as well as characterization of the biology of salivary glands in general. To improve this, we have included further data on the blood inducibility of the promoters tested (Figure 2), the functionality of roGFP2 in the salivary glands (Figure 5), and the use of the generated lines in the examination and definition of expression patterns of salivary gland proteins in vivo (Figure 6). Accordingly, we have adjusted the entire manuscript to adequately describe all the results presented. We have also rephrased major parts of the abstract and the introduction to better describe the impact of salivary gland biology on the transmission of pathogens, and to explain the anatomy of salivary glands in more detail.

      We agree with the reviewer that it would be desirable to show direct salivary gland-sporozoite interactions in vivo. Still we believe that having mosquito lines expressing a fluorescent marker in the salivary gland as well as weakly pigmented mosquitoes are a first step to make this visualization possible, although we cannot provide a lot of quantitative data about this interaction yet.

      1- The three genes and gene products selected by the authors should definitely be more systematically explained, which means, for example, that the authors need to introduce the different mosquito species and the parasite-mosquito host pairs they then refer to for the promoters/encoded proteins of their interest. In the same vein, I did not find any information on the choice of the mosquito species (A. coluzzii) for the current work. I was curious to know what the advantage is, since better knowledge was available in Anopheles stephensi with respect to (i) saglin and its promoter activity, (ii) aapp-driven DsRed expression (lines already existing) and (iii) the sporozoite-gland interaction.

      We have largely reworded the introduction to clarify the rationale for selecting these three promoters while providing a better understanding of salivary gland biology in general.

      The choice of the mosquito species depends, in our opinion, strongly on the perspective and on the experiments to be performed. We agree with the reviewer that the malaria mosquito A. stephensi is a widely used model, based on its robustness in breeding and its high susceptibility to P. berghei and P. falciparum infections. However, both of these vector-parasite pairs are to some extent artificial. Indeed, although it is also a vector of P. falciparum in some regions, A. stephensi mostly transmits P. vivax, which cannot be cultured in vitro; research efforts on this vector-parasite pair are therefore limited. Also, owing to the growing number of observed differences between Anopheles species in their susceptibility to Plasmodium infection and transmission, more research has recently been conducted on African mosquito species. This effect is reinforced by the fact that P. falciparum causes far more deaths than all other Plasmodium species infecting humans, making control strategies for species from the A. gambiae complex, such as A. coluzzii, particularly important. As a result, the number of available genetic tools in A. coluzzii/A. gambiae has outpaced that in A. stephensi. These include mosquito lines with germline-specific expression of Cas9 for site-directed transgenesis, lines expressing Cre for lox-mediated recombination, and several docking lines. Such tools are, as far as we know, not available in A. stephensi and were essential in reaching our objectives. Docking lines are of particular interest because they allow reliable integration into a characterized locus, which is an advantage over random transposon-mediated integration. Random insertion sites have generally not been characterized in the past, which can cause problems since integrations regularly occur in coding sequences. Docking lines also enable comparison of different transgenes, as they are all integrated in the same genetic environment, although this does not preclude some expression variation, as illustrated in our manuscript. For all these reasons, we chose to work with A. coluzzii.

      Concerning the use of the murine malaria parasite P. berghei instead of the human one P. falciparum, there are two reasons that motivated our choice. (1) For in vivo imaging of sporozoites, we needed a parasite line that is strongly fluorescent at this stage, and there is no such line existing for P. falciparum. Actually, there is no fluorescent P. falciparum line able to efficiently infect A. coluzzii reported thus far, as reporter genes have all been inserted in the Pfs47 locus that is required by P. falciparum for A. coluzzii colonization. (2) Imaging P. falciparum infected mosquitoes, especially with sporozoites in their salivary glands, requires to have access to a confocal microscope in a biosafety level 3 laboratory. Hence our objective here was indeed to provide a proof of principle of in vivo imaging of sporozoites in the vicinity or inside salivary glands using our engineered mosquitoes, and to provide a first analysis of this process using P. berghei as a model of infection. Nevertheless, we agree with the reviewer that the goal should be to work as close as possible to the human pathogen.

      Despite the wide range of topics that this study touches on, we want to try and keep the manuscript as concise as possible. Therefore, we have not discussed the advantages and disadvantages of the different vector-parasite pairs and ask the reviewer to indulge us in this.

      2- To help clarifying the added value of the present study, introducing the species names of the mosquito and the Plasmodium that serve as a model would be appreciated.

      We have now included the name of the Plasmodium species used in line 361, where we also give more details about the transgene this line carries. The mosquito species A. coluzzii is now mentioned at several positions in the manuscript (e.g. lines 52, 162 and 177).

      3- Since a focus is the salivary gland of the blood feeding female Anopheles sp., a rapid description of the glands with different lobes and subdomains the results and figure 1 nicely refer to, would help in the introduction.

      We now explain the anatomy of female and male mosquito salivary glands in the introduction (lines 119-123). The different lobes are now also indicated in the salivary gland images shown in several figures, including Figure 1.

      4- That description could logically introduce the few proteins actually identified with lobe-specific or cell domain-specific expression profiles (apical versus basal side, intracellular or surface-exposed, vacuole, duct...). The context with regard to sporozoite biology would then easily validate the "promoter choice". As a minor remark, I miss the reason why the authors wrote "the astonishing degree of order of the structures" (referring to the packing of sporozoites within the Sg acini) "raises the question whether sporozoites can recognize each other". Please clarify, since packing/accumulation can be passive due to cell mechanical constraints, and explain what this point has to do with the question and experimental work proposed here.

      We thank you for this suggestion. We have reworded key parts of the introduction to make the reasons for using the three selected promoters clearer. We also now mention other proteins expressed in the salivary glands that have been characterized in more detail because of their effect on blood homeostasis (e.g. anticoagulants) (lines 136-139).

      The mention of stack formation by salivary gland sporozoites served only to clarify that almost nothing is known about the behavior of sporozoites within the salivary glands in vivo, and to explain why new methods are needed to make these processes visible. We have now reworded this passage to make this clearer, and we also mention that stack formation could occur due to mechanical constraints, as suggested by the reviewer (lines 101-102, 106-110).

      5- The selection of hGrx1-roGFP2 is quite interesting and justified but there is then no use of this reporter property in the preliminary characterization of the Sg and Sg-sporozoite interaction. Could the authors provide such characterization?

      We have now included data testing the functionality of hGrx1-roGFP2 in the salivary glands. We also show qualitatively that the redox state of glutathione does not change upon infection with P. berghei sporozoites (Figure 6). We describe and discuss these new data in lines 337-354.

      6- Figure 1: it would be nice to add in the legend at what time the dissection/imaging has been made (age, blood feeding timing?). I would also omit the double mutant trio-Dsred/aapDsred in the main figure (may be supplemental) since the two single mutants Dsred separately together with the double mutant (with different fluorescence) already provide the information. I would suggest to regroup the phenotypic presentation of the transgenic line made in the KI mosquitoes (current figure 5) in the main figure 1.

      We have now added the missing information about the age of dissected mosquitoes and their feeding status in the legend of Figure 1. We also thank the reviewer for the suggestion to replace one image displaying aapp and trio promoter activity in trans-heterozygous mosquitoes with an image of the pigment-deficient mutant yellow(-)KI. Still, due to the changes made to the manuscript based on the reviewers' comments in general, we have now included new data highlighting the functionality of the generated salivary gland reporter lines, investigating the redox state of glutathione as well as the expression pattern of the saglin and trio promoters at the single-cell level (see Figures 3 and 6). It would therefore no longer seem logical to introduce the yellow(-)KI mutant in Figure 1 while further data on this mutant are provided in the last two figures of the manuscript and discussed later (Figures 7 and 8). In addition, we believe that co-expression of different transgenes (carrying fluorescent reporters) in the median and the distal lobes could be of interest for certain applications. We believe that readers who might be interested in combining both transgenes in a cross would like to see the outcome, to better evaluate the usefulness before experiments are planned and performed. This is especially true because localization as well as expression strength may differ between fluorescent reporters driven by the same promoter (e.g. the hGrx1-roGFP2 construct appears less bright and more localized to the apex of the distal-lateral lobes than DsRed, although expression of both reporters is driven by the aapp promoter in aapp-hGrx1-roGFP2 and aapp-DsRed, respectively).

      7- Figure 2:

      1. a) Is there anything known on the Sgs' size change over time? It seems that between day 1 and 2 there is an increase in size and volume, as far as I can evaluate the volume (Fig S4). Could that mean that there is an increase in cell number in the lobes and therefore more cells expressing the transgene, which would account for the signal intensity increase rather than more transcripts per cell?

      Thank you for this interesting question. The changes in the morphology of the salivary glands in Anopheles gambiae following eclosion have been studied in detail by Wells et al., 2017 (PMID: 28377572), which we now cite in the introduction (lines 122-123). According to this reference, cell counts of the salivary gland do not change upon emergence of the adult mosquito. However, we agree with the reviewer that the glands appear smaller and differ in morphology directly after eclosion. We noted that glands of freshly emerged females are more "fragile" during dissections and lack secretory cavities, as reported by Wells et al., 2017. We believe that the increase in size occurs through the formation and filling of the secretory cavities, which has been reported to take place within the first 4 days after emergence (Wells et al., 2017). This observation is in accordance with our observations that the promoters of the saliva proteins AAPP and Saglin display only weak activity after hatching or, in the case of TRIO, are not yet active directly after emergence. The timing of the formation of the secretory cavities is also in agreement with our time course experiment (Figure 2), which shows a strong increase in fluorescence intensity in dissected glands within the first 4 days after emergence.

      2. b) Why choose 24 h after the blood meal to assess promoter activity in the Sgs? Do we have any information on how the blood meal impacts the Sgs' development? At this time the sporozoites are anyway far from being made. Yoshida and Watanabe 2006 mentioned a significant decrease of Sg proteins post-blood feeding. Could the authors detail their rationale based on the questions they wish to address?

      Thank you for this question. Unfortunately, the data available in the literature on this topic are very sparse, so we could only refer to a few previous publications. The decision to quantify the fluorescence signals as early as 24 hours after blood feeding was based on Yoshida et al., Insect Mol. Biol., 2006, PMID: 16907827. The authors of this study generated the first salivary gland reporter line in A. stephensi by using the aapp promoter sequence to drive DsRed expression, and showed by qRT-PCR that DsRed transcripts increase 1-2 days after blood feeding compared to controls. Consistent with this observation, and because we were concerned that putative changes in protein levels would only be visible for a short period of time, we began quantification one day after feeding. Since we observed significant changes in fluorescence intensity for the aapp-DsRed and sag(-)KI lines 24 hours after blood feeding, we retained the experimental setup and did not change it further. Nevertheless, we agree with the reviewer that different time points could help determine how long the effect lasts, and whether trio expression might also be regulated by blood feeding, but at a later time point. Still, our main objective here was to validate that the ectopic expression of DsRed driven by the aapp promoter in the aapp-DsRed line was indeed induced upon blood feeding, as previously reported (PMID: 16907827).
This experiment allowed us to confirm the inducibility of aapp in a different way and to show for the first time that saglin, but not trio, is induced one day after blood feeding. Our transgenic lines could be used for follow-up studies investigating the inducibility of salivary gland-specific promoters by different stimuli, or after infection with Plasmodium sporozoites. For example, for trio, transcription has been shown to increase after infection of the salivary gland by Plasmodium (PMID: 29649443).

      8- Figure 3: The figure is quite informative in terms of subcellular localization. Concerning the section "Natural variation of DsRed expression in trio-DsRed mosquitoes", I think it could be shortened because it is a bit outside the focus of the study.

      We agree with the reviewer that this part of the manuscript stands out somewhat and is not perfectly in line with the remaining results because it does not deal with the salivary gland. Still, we would like to emphasize that in this work we particularly want to show possible applications of the generated mosquito lines to address unanswered questions in host-parasite interactions and salivary gland biology. As a result, this manuscript establishes potentially important tools. For this reason, we feel it is important to mention the natural variation in DsRed expression, as this variation can have a significant impact on crossing schemes (especially with lines carrying other DsRed-marked transgenes) and experiments (e.g. visualizing DsRed expression by western blot in larval and pupal stages). Furthermore, it is important for the use of the line to show that the transgene is inserted only once, at the expected location, which we emphasize with Figure 4 – figure supplement 1 and Figure 4 – figure supplement 2.

      We would also like to note that transgenesis in Anopheles is a relatively young field of research and altered expression patterns of ectopically used promoters have rarely been described so far, although this could have major implications e.g. in the case of gene drives. Therefore, we hope that the data shown will bring this previously neglected observation more into focus and highlight the importance of accurate characterization of generated transgenic mosquito lines.

      9- In contrast the last section of live imaging of P. berghei sporozoites in the vicinity and within salivary gland should be expanded. The 2 sentences summarizing the data are quite frustrating "We also observed single sporozoites moving actively through tissues in a back and forth gliding manner (Fig. 6B, Movie 3) or making contact with the salivary gland although no invasion event could be monitored"

      We have now included new data and extended Figure 8, showing the results of the in vivo imaging in a qualitative manner. We have rephrased the results and discussion sections accordingly.

      10- I am aware of the technical difficulties of performing live imaging of sporozoites in whole mosquitoes, even when the salivary gland lobe under observation is closely apposed to the cuticle, but that seems to be the final aim of the authors. I looked very carefully at the three movies and I am sorry, but at this stage I could not make a meaningful analysis out of them, and could not agree with the conclusions: for instance, the authors specify that sporozoites were undergoing back and forth movements (movie 3), but I do not see that and do not see the Sg contours in the available movies. The authors should also add scale bars and time stamps to their movies. Having an in-depth description with regards to the sub-domain marked by a relevant reporter would strengthen the study, even if images are not collected in the whole mosquito to get higher resolution.

      We thank the reviewer for this comment. We have to admit that parasite imaging in fluorescent salivary glands in vivo is an ambitious goal given the complex biological system we are working with. We believe that the system presented in our manuscript is a first and important step towards enabling the analysis of the interaction of sporozoites with salivary glands, although in-depth analysis will require further optimization and considerable time, especially to generate quantitative data. Therefore, we now tone down the significance of our results in this respect and have changed the title accordingly. Still, we also provide a more detailed analysis of the data we have already collected (Figure 8 and lines 406-427). Because we focus on the analysis of sporozoites in the thorax area in the revised manuscript, the outlines of the salivary gland are not necessarily visible in the images.

      I am not sure I understand the relevance of this quite condensed sentence in the text. Could the authors rephrase and expand if they wish to keep the issues they refer to: "The sporozoites' distinctive cell polarization and crescent shape, in combination with high motility, allows them to 'drill' through tissues". I would stress more the main unknowns in terms of sporozoite-Sg interactions and the need for the right models for applying informative approaches (i.e. here, imaging).

      We thank you for this suggestion. The sentence mentioned has been removed in its entirety. We have also adjusted the text accordingly and reworded most of the introduction to make the narrative clearer (lines 91-119).

      Of note, it could help to point out that the "Sgs are a niche in which the sporozoites that egress from the oocyst can mature and be fully competent when co-deposited with the saliva into the dermis of their intermediate hosts".

      We have now implemented a similar sentence in the introduction (lines 93-98).

      Reviewer #2 (Significance (Required)):

      1- Clear technical significance with the challenging molecular genetics achieved in the mosquito A. coluzzii.

      2- More limited biological significance: fair analysis and gain of knowledge of the spatio-temporal reporter expression under the selected promoters, but limited significance of the final goal analysis, which concerns Plasmodium sporozoite biology once egressed from oocysts.

      As stated above, we changed the title to place the focus on the engineered mosquito lines.

      3- Previous reports cited by the authors have used the DsRed reporter and the aapp promoter in another Anopheles species (i.e. A. stephensi; Yoshida and Watanabe, Insect Mol Biol, 2006; Wells and Andrew, 2019), which is also a natural host and vector for human Plasmodium spp., with significantly higher-resolution 3D visualization of GFP-fluorescent P. berghei, but in dissected salivary glands and not in whole mosquitoes. The Wells and Andrew publication entitled "Salivary gland cellular architecture in the Asian malaria vector mosquito Anopheles stephensi" in Parasites & Vectors, 2015 would deserve to be referenced and described.

      Thank you very much for this suggestion. We considered citing Wells and Andrew (PMID: 26627194). However, this reference focuses very specifically on the subcellular localization of AAPP and shows only highly magnified sections of immunostained, dissected and fixed salivary glands. Working only with the AAPP promoter, we felt it important to refer to the previously observed expression pattern along the entire salivary gland, as shown in Yoshida and Watanabe (PMID: 16907827). Nevertheless, we have cited two other publications by Wells and Andrew (PMID: 31387905 and 28377572) at various points in the manuscript.

      4- Audience: I would say that this work should be of interest of mostly scientists investigating Plasmodium biology (basic and field research) or in entomology of Diptera.

      5- To describe my fields of expertise, I can refer to my extensive initial training in entomology including at one point in the genetic basis of mosquito-virus interaction. I have also been working for more than 20 years in the field of Apicomplexa biology (Plasmodium and Toxoplasma) and I have long-standing interest in live and static high-resolution imaging.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      Klug et al. generated salivary gland reporter lines in the African malaria mosquito Anopheles coluzzii using the salivary gland-specific promoters of three genes. Lobe-specific reporter activity from these promoters was observed within the salivary glands, restricted either to the distal lobes or the medial lobe. They characterized localization, expression strength and onset of expression in four mosquito lines. They also investigated possible influences of the expressed fluorescent reporters on infection with Plasmodium berghei and on salivary gland morphology. Using crosses with a pigmentation-deficient mosquito line, they demonstrated that their salivary gland reporter lines represent a valuable tool to study the process of salivary gland colonization by Plasmodium parasites in live mosquitoes. SG positioning close to the cuticle in 20% of females in this strain is another key finding of this study.

      The key findings from this study are largely quite convincing. The authors have created a suite of SG reporter strains using modern genetic techniques that aid in vivo imaging of Plasmodium sporozoites.

      Vesicular staining within salivary acinar cells should be stated as "vesicle-like" staining unless a co-stain experiment in fixed SGs is conducted using antisera against the marker protein(s) and antisera against a known vesicular marker (e.g. Rab11). It may also be possible to achieve this in vivo using perfusion of a lipid dye (e.g. Nile Red), but this is not necessary. As is, in Fig. 3A, there are images in which it appears that the vesicle-like staining is located both within acinar cells' cytoplasm and in the secretory cavities (e.g. Fig. 3A: aapp-DsRed bottom and middle), and this is fine, but should be more inclusively stated. Fixed staining of the reporter strain SGs would allow for clarification of this point. In previous work, other groups have observed vesicle-like structures in both locations (e.g. PMID: 33305876).

      Thank you very much for this suggestion. Indeed, when we observed the vesicle-like localization, we had similar ideas and considered investigating the identity of the observed particles in more detail. Ultimately, however, we concluded that the localization of DsRed does not play a critical role in the use of the lines as such and believe that a more detailed investigation of the trafficking of the fluorescent protein DsRed is beyond the scope of this study.

      We have thus followed the reviewer's suggestion and now use the phrase "vesicle-like" throughout the manuscript. In addition, we have extended the discussion of the different localizations observed and present some explanations that might account for this observation. We also included a new reference that investigated the localization of AAPP using immunofluorescence (PMID: 28377572).

      Morphological variation is extensive among individual mosquito SGs, thought to impact infectivity, and well documented in the literature. The manuscript should be edited to make it much clearer (e.g. n = ?) exactly how many SGs, especially in microscopy experiments, were imaged before a "representative" image was selected from each data point and in any additional experiment types where this information is not already presented. Figure S8 is an example where this was done well. Figure 3A-B is an example where this was not well done. All substantial variation (e.g. "we detected a strangulation..." - line 189) across individual SGs within a data point should be noted in the Results. Because of the genetics and labor involved, acceptable sample sizes for minor conclusions may be small (5-10), but should be larger for major conclusions when possible.

      Thank you for this comment. We have improved this point by specifying precisely the number of samples and of repetitions in the respective figure legends. For example, we have now quantified the proportion of moving sporozoites and report both the number of sporozoites evaluated and the number of microscopy sessions required (see Figure 8).

      Regarding Figure 3, fluorescence expression and localization in salivary gland reporter lines was actually very uniform in each line. We added the following sentence in the legend of revised Figures 3 and 5: “Between 54 and 71 images were acquired for each line in ≥3 independent preparation and imaging sessions. Representative images presented here were all acquired in the same session”.

      Sporozoite number within SGs has been shown to be quite variable across the infection timeline, by mosquito species, by parasite strain, in the wild vs. in the lab, and according to additional study conditions. The authors mention that the levels they observed are consistent with their prior studies and experience, but they did not utilize the reporter strains and in vivo imaging to support these conclusions, instead relying on dissected glands and a cell counter. It is important for these researchers to attempt to leverage their in vivo imaging of SG sporozoites for direct quantification, likely using the "Analyze Particles" function in Fiji. The added time investment for this additional analysis would be around two weeks for one person experienced in the use of the imaging software.

      Thank you for this interesting suggestion. Indeed, it would be beneficial to use an imaging-based approach to quantify the sporozoite load inside the salivary glands. We already used watershed segmentation in combination with the "Analyze Particles" function in Fiji on images of infected midguts to determine oocyst numbers. Still, we believe this analysis cannot be applied to images of infected salivary glands, mainly because of differences in shape and location of the oocyst and sporozoite stages. Sporozoites inside salivary glands form dense, often multi-layered stacks. Because of this close proximity, watershedding cannot resolve them as single particles that could subsequently be counted. This creates an unnecessary error by counting accumulations of sporozoites as one, likely leading to an underestimation of actual parasite numbers. Furthermore, even if the proximity issue could be resolved, e.g. by performing infections yielding lower sporozoite densities, another problem is that infected salivary glands prepared for imaging are often slightly damaged, leading to leakage of sporozoites from the gland into the surroundings. These leaked sporozoites are likely not included in the images that would then be used for analysis, potentially leading again to an underestimation of counts. Since these issues are circumvented by the use of a cell counter, we believe that this method remains the method of choice for acquiring sporozoite numbers.

      Nevertheless, we can understand the reviewer's concern that counts performed with a hemocytometer do not reflect the variability in the sporozoite load of individual mosquitoes. To highlight that all generated reporter lines can have high sporozoite counts, we have now included images of highly infected salivary glands for each line in Figure 7D.
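      The merging problem described above can be illustrated with a minimal, hypothetical sketch (Python with scipy's connected-component labelling as a stand-in for Fiji's "Analyze Particles"; the synthetic image and all variable names are illustrative, not part of our actual pipeline): two touching sporozoite-like blobs collapse into a single particle, yielding an undercount.

```python
import numpy as np
from scipy import ndimage

# Synthetic binary image: two isolated blobs plus a touching pair,
# mimicking sporozoites stacked inside a salivary gland cavity.
img = np.zeros((20, 20), dtype=bool)
img[2:5, 2:5] = True      # isolated blob 1
img[2:5, 10:13] = True    # isolated blob 2
img[12:15, 4:7] = True    # touching pair: blob 3 ...
img[12:15, 7:10] = True   # ... and blob 4, sharing an edge

# Connected-component labelling (the core of particle counting):
# adjacent objects merge into a single label, so the touching pair
# is counted as one particle -- 4 true objects yield only 3 counts.
labels, n_particles = ndimage.label(img)
print(n_particles)  # -> 3
```

A distance-transform watershed can sometimes split such a pair, but for dense, multi-layered sporozoite stacks the separating boundaries are not recoverable from a 2D projection, which is why we retained hemocytometer counts.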

      This manuscript is presented thoughtfully and such that the data and methods could likely be well-replicated, if desired, by other researchers with similar expertise.

      The statistical analysis is appropriate for the experiments conducted. It is currently unclear if some experiments were adequately replicated. That information should be added to the paper throughout where it is missing.

      We do appreciate your comments on our efforts to give all required information for other laboratories to replicate our experiments. We have added the missing information about the number of independent experiments in the respective figure legends wherever appropriate.

      Studies from multiple groups should be more thoroughly referenced when the authors are describing the "vesicle-like" staining patterns observed in SGs from reporter strains (e.g. Fig. 3A). Is this similar to the SG vesicle-like structures observed previously (e.g. PMIDs: 28377572, 33305876, and others)?

      Thank you for this comment. We did not discuss this observation in detail in the first version of our manuscript because the observed localization was rather unexpected, as DsRed was not fused to the AAPP leader/signal peptide. The observed localization is therefore difficult to explain; however, we have expanded the discussion on this (lines 465-482) and now cite one of the proposed references (PMID: 28377572, lines 468-469).

      There are minor grammar issues in the manuscript text (e.g. "Up to date" should be "To date"). The figures are primarily presented very clearly and accurately. One minor suggestion: In cases such as Fig. S2A images 3 and 6, where some of the staining labels are very difficult to read, please move all labels for the figure to boxes located directly above the image.

      We apologize for the grammatical errors we missed in the first version of our manuscript. We have now performed a grammar check over the whole manuscript. We have also increased the font size of the captions in the figures mentioned above and made them more readable by moving the captions above the images.

      The data and conclusions are presented well.

      Reviewer #3 (Significance (Required)):

      This report represents a significant technical advance (improved in vivo reporter strains and sporozoite imaging) and a minor conceptual advance (sporozoite active motility) for the field.

      This work builds on previous SG live imaging studies involving Plasmodium-infected mosquitoes (e.g. Sinnis lab, Frischknecht lab, etc.), addressing one of the major challenges from these studies (reliable in vivo imaging inside mosquito SGs).

      This work will appeal to a relatively small audience of vector biology researchers with an interest in SGs. Many in the field still see the SGs as intractable, instead choosing to focus on the midgut due to ease of manipulation. Perhaps work like this will spark new interest in tangential research areas.

      I have sufficient expertise to evaluate the entirety of this manuscript. Some descriptors of my perspective include: bioinformatics, SG molecular biology, mosquito salivary glands, microscopy, RNA interference, SG infection, and SG cell biology.

      Reviewer #4 (Evidence, reproducibility and clarity (Required)):

      Klug et al generated transgenic mosquito lines expressing fluorescent reporters regulated by salivary gland specific promoters and characterized fluorescent reporter expression level over the time, subcellular localization of fluorescent reporters, and impact on P. berghei oocyst and salivary gland sporozoite generation. In addition, by crossing one of the lines (aapp-DsRed) with yellow(-) KI mosquitoes, they open up the possibility to perform in vivo visualization of salivary glands and sporozoites.

      Overall the generation and characterization of these transgenic lines is well-done and will be helpful to the field. However, there are several concerns with the in vivo imaging data shown in Figure 6, which does not convincingly show fluorescent sporozoites in the lobe or secretory cavity of a fluorescent salivary gland lobe. This needs to be addressed. Points related to this concern are outlined below:

      (1) Although the authors mention that the DsRed signal was strong enough to see with GFP channel, it would be more appropriate to show that the DsRed signal from salivary glands and GFP channel image co-localize.

      We now show a merge of the GFP and DsRed signals in Figure 7 – figure supplement 2. The yellow appearance of the salivary gland in the merge likely indicates spillover of the DsRed signal into the GFP channel. In addition, we discuss the issue in lines 416-412 and 565-567.

      (2) Mosquitoes were pre-sorted using the GFP fluorescence of the sporozoites on day 17-21. From figure 4B, median salivary gland sporozoite number was about 10,000 sporozoites/mosquito on day 17-18. However, in Figure 6A there are no sporozoites in the secretory cavities. They should be able to see sporozoites in the cavities at this time. Can the authors confirm that they can visualize sporozoites in secretory cavities in vivo and perhaps show a picture of this.

      This is entirely correct. We also examined mosquitoes for the presence of sporozoites in the salivary glands and wing joints prior to imaging, as shown in Figure 7B and Figure 7 – figure supplement 2A, to increase the probability that sporozoites could be observed. Nevertheless, the area of the salivary gland that comes to the surface is often small and limited to a few cells that can be imaged with good resolution. Unfortunately, these same cells were often not infected although other regions of the salivary glands must have been very well infected based on the previously observed GFP screening (Figure 7B). In addition, with the confocal microscope available to us, we struggled to achieve the necessary depth to image sporozoites in the cavities of the salivary gland cells. For this reason, we were often able to detect a strong GFP signal in the background, but not always to resolve the sporozoites sufficiently well. Still, we have now included an image showing sporozoites in salivary glands (Figure 8C). However, we believe that the method can be further improved to be more efficient and provide better resolution. We discuss possible ways to further improve the imaging in lines 563-586.

      (3) There is no mention of the number of experiments performed (reproducibility) and no quantification of the imaging data. In the results (lines 287-288), the authors state that sporozoites are present in tissue close to the gland and sometimes perform active movement. How can this be? Do they believe these sporozoites are en route to entering? More relevant to this study would be a demonstration that they can see sporozoites in the secretory cavities of the salivary gland epithelial cells; this should be shown. If they have already performed a number of experiments, I would suggest quantifying the number of sporozoites observed in defined regions. The mention that sporozoites are moving is confounded by the flow of hemolymph. How do they know that the sporozoites are motile versus being carried by the hemolymph? Perhaps it's premature to jump to sporozoite motility in the mosquito when they haven't even shown sporozoite presence in the salivary glands.

      Thank you very much for this comment. We have followed the suggestions of the reviewer and have now quantified the behavior of sporozoites in the thorax area of the mosquito. For the analysis, we only considered sporozoites that could be observed for at least 5 minutes. This analysis revealed that 26% of persistent sporozoites performed active movements, which in most cases resembled patch gliding previously described in vitro. We adjusted the results section accordingly. In addition, we have changed the figure legend to accurately indicate the number of experiments performed. Likewise, we now also provide an image of sporozoites that we assume are located in the salivary gland (Figure 8C). Although we have not yet been able to image and quantify vector-sporozoite interactions extensively (further improvements would be required, as mentioned previously), we believe these results illustrate the potential of the transgenic lines.

      (4) In vivo imaging has been performed with the mosquito sideways. Was this the best orientation? Have you tried other orientations, like from the front (Figure 5B orientation)?

      It is true that in the abdominal view, as shown in Figure 7B, the fluorescence in the salivary glands is very well visible. This is mainly due to the fact that in this area the cuticle is almost transparent and therefore serves as a kind of "window". Nevertheless, the salivary glands are not close to the cuticle in this position, which makes good confocal imaging impossible. Imaging always worked best where the salivary gland was very close to the cuticle, and this was always a lateral position. However, there were differences in the position of the salivary glands in individual mosquitoes, which also led to slight differences in the imaging angle.

      Overall, the text is easy to follow and I have only few suggestions.

      Thank you for this comment.

      In the results section, the authors describe the DsRed expression during mosquito development (lines 194-236) after describing the subcellular localization of the fluorescent reporters. I felt the flow was disrupted. Thus, this part (lines 194-236) could be summarized and moved to line 135. In this way, the results section would flow according to the main figures.

      Thank you very much for this suggestion. We have considered your idea, but based on the changes made in response to the reviewers' comments and the new data presented in two new figures, we believe the current order of the results section is more appropriate. The rationale was primarily to first characterize the expression of the fluorescent reporters in the salivary glands of all lines before describing expression in other tissues of a single line in more detail. We then finish with potential applications, such as in vivo imaging of sporozoite interactions with the salivary glands.

      Also, and as mentioned previously (reviewer 2, point 8), we believe it is important to describe the variability of ectopic promoter expression at a given locus with sufficient details, as this has not been characterized thus far despite its importance.

      In the results section, text lines 186-190, the authors describe the morphological alteration of the salivary gland in aapp-hGrx1-roGFP2. I would suggest mentioning that this observation was made in only one of the lateral lobes. (I saw that it was mentioned in the figure legend but not in the main text.)

      We believe there has been a misunderstanding. The morphological alteration in salivary glands expressing aapp-hGrx1-roGFP2 was observed in all distal-lateral lobes to varying degrees (quantification in Figure 6E). To include as many salivary glands as possible in the quantification and because in some images only one distal-lateral lobe was in focus, only the diameter of one lobe per salivary gland was measured and evaluated. We have now revised the legend to prevent further misunderstandings.

      In the discussion section, the authors discuss the localization of the fluorescent reporters (lines 322-331). When I looked at the aapp-DsRed localization pattern (Figure 3A), it looked similar to that in the previous publication by Wells et al. 2017 (https://www.nature.com/articles/s41598-017-00672-0). This publication used an AAPP antibody and stained together with other markers (Figures 4-7). This publication could be worth citing in the discussion section.

      Thank you for this suggestion. According to the information available through VectorBase, we did not fuse DsRed to any coding sequence of AAPP that could potentially encode a trafficking signal. Therefore, it is rather unlikely that the DsRed localization observed in our aapp-DsRed line matches the localization observed by AAPP immunofluorescence staining in WT mosquitoes. This is further exemplified by the cytoplasmic localization of hGrx1-roGFP2 in the aapp-hGrx1-roGFP2 line, where the reporter gene was cloned under the control of the same promoter. For this reason, we had not mentioned this reference in the first version of the manuscript. In the revised manuscript, we have now included the suggested reference (lines 475-476) and extended the discussion of possible reasons for the observed localization pattern.

      In the text, the authors describe the salivary gland lobes as distal lobes and a middle lobe. It would be more accurate to refer to the lobes as the lateral and medial lobes. The lateral lobes can then be subdivided into proximal and distal portions. I would suggest using distal lateral lobes, proximal lateral lobes, and median lobe, as other references do (Wells M.B. and Andrew D.J., 2019).

      Thank you for this suggestion. We have corrected the nomenclature for the description of the salivary gland anatomy as suggested throughout the manuscript.

      Overall, the figures are easy to understand and I have following suggestions and questions.

      Figure 1C) It is hard to see the median lobe of the WT salivary gland. If the authors have a better image, please replace it so that it is easier to compare the WT and transgenic lines.

      We have replaced the wild-type images of salivary glands in this figure and labeled the median and distal-lateral lobes accordingly (see Figure 1).

      Figure 2) While it was interesting to observe the significant expression differences between day 3 and day 4, have you checked whether this expression is maintained, declines, or increases over time (especially on days 17-21, when the authors perform in vivo imaging)?

      Thank you for this interesting question. We have not quantified fluorescence intensities in older mosquitoes. Nevertheless, we regularly observed spillover of the DsRed signal into the GFP channel during sporozoite imaging, suggesting that expression levels, at least in aapp-DsRed-expressing mosquitoes, remain high even in mosquitoes >20 days of age (see Figure 8A). We also confirmed this observation by dissecting salivary glands from old mosquitoes, whose distal lateral lobes always showed a strong pink coloration even under normal transmission light (data not shown).

      Figure 3A) There is no description of "Nuc" in the figure legend. If "Nuc" refers to the nucleus, have you stained with a nuclear dye (e.g., DAPI)?

      Thank you for spotting this missing information in the legend. Initial images shown in this figure were not stained with a nuclear dye. To test whether the observed GFP expression pattern really colocalizes with DNA, we performed further experiments in which salivary glands from both aapp-hGrx1-roGFP2 and sag(-)KI mosquitoes were stained with Hoechst. We have now included these new data in Figure 3 - figure supplement 1. It appears that GFP is concentrated around the nuclei of the acinar cells, which makes the nuclei clearly visible even without DNA staining.

      Figure 4B) The number of biological replicates in the figure and the legend do not match (the figure shows 3-5 data points, while the legend states 3 biological replicates).

      Thank you for spotting this inconsistency. The number of biological replicates refers to the number of mosquito generations used for experiments. The difference is due to the fact that sometimes two experiments were performed with the same generation of mosquitoes using two different infected mice. We have clarified the legend accordingly to avoid misunderstandings.

      Figure 4C) The number of data points from (B) is 5. However, in (C) only 4 data points are presented.

      We have corrected this mistake. In the previous version, the results of two technical replicates were inadvertently plotted separately in (B) instead of the mean.

      Figure 5) I would suggest including a thorax image of a P. berghei-infected mosquito to show both the salivary glands and the parasites.

      Thank you for this suggestion. Images in Figure 7B (previously Figure 5) were replaced with an infected specimen to show salivary glands (DsRed) and sporozoites (GFP) together.

      Reviewer #4 (Significance (Required)):

      The transgenic lines the authors created have potential for in vivo imaging of salivary gland-sporozoite interactions. Since the aapp and trio lines have distinct fluorescence expression patterns, they could help elucidate why sporozoites are more likely to invade the distal lateral lobes compared to the median lobe.

      My areas of expertise are confocal microscope imaging, mosquito salivary gland and Plasmodium infection and sporozoite motility.

    1. the two questions that we hopefully would uh try to answer with with this r d program is and and one of this i already 00:56:53 mentioned but out of all conceivable designs for societal systems so so so this isn't about capitalism versus socialism or something like that there's like i would think there's an unlimited 00:57:05 potential we're creative we're creative people there would be a million varieties of of societal systems and integrated societal systems that we might come up with 00:57:17 and some of those probably would work very well and some of them probably would work very poorly um so among those what what might be among the best and not the the single best that's not the purpose either it's not just to find one thing that works is 00:57:30 to find like a you know more of a a variety a process of things a mix mishmash of things that community the communities can choose to implement that you know 00:57:43 works well for them and that suits them and that works well for their neighbors and works well forever it works well for the whole really

      Two questions to answer:

      1. out of all the conceivable societal systems possible, which are suited to a given community? This is not one-size-fits-all.

      This requires careful consideration. There cannot be complete autonomy, as a lack of shared standards would make any inter-community cooperation very challenging.

      The cosmolocal framework (https://clreader.net), as well as Indyweb interpersonal computing, could mediate discussion between different community nodes and help common ground emerge.

    2. so now finally we get to active inference all this discussion and we're finally getting to the point here right for his lab so um i had and i had already touched on 01:47:35 some of this before but um it would you know today if you're going to develop a really good ai system you're and you're going to have a you have a robot saying the robot has to act 01:47:47 in some environment it is pretty well understood that that if you program that robot to you give it a you give it a i mean traditionally you'll give it a a a fitness function or some kind of 01:47:59 valuation function and it's for example it's good if it it you know you lose points if you fall through a trap door and is and you get points if you uh you know whatever 01:48:11 find find the piece of cake or something well that's uh that's fine for extremely simple universes that your robot might work in but as soon as you get beyond you know as soon as you get to any kind of more realistic uh 01:48:24 universe that your robot has to work in that pre-programming pre-programming concept just kind of falls apart it is you you it would require the the the practitioner to think ahead of all the 01:48:37 things that the robot might encounter and then how to value certain you know value those situations in certain ways uh and that is really uh what active inference 01:48:49 offers is a is a kind of a cognitive understanding or a mechanism by which an organism will uh uh where its 01:49:00 fitness score is in a sense involves both uh you know achieving goals and exploring its world to for for for epistemic gain so 01:49:16 um that's what we would like the that's how we would like to program the robot in a sense so that it can learn from it can learn on the fly from its experiences it can it can alter its actions and 01:49:30 goals as it be as it becomes clear as it gathers more information from its universe as it as it meets new situations that were never never conceived of by the by the 01:49:42 programmer 
that it through through an active inference or an active inference like uh you know mechanism it can learn and explore and and critically balance exploration with 01:49:54 exploitation and then we come right back to that whole concept of criticality so you know what you would really like your robot to do is remain at that critical uh phase between 01:50:06 exploring what's out there and making use and gold directed behavior of what's in front of it and um and uh you know that's how you could program this world this robot to act in the world and be pretty good at 01:50:20 it you know if you if you build it well so that's what the systems of a society can help a society to do you you don't you it's worth talking about building new systems i think it would not be wise to say 01:50:32 this checklist of like we wanted this level of education we want to want this you know to react this way in this situation react this way in this situation and this level of uh you know whatever money and this level of this and this 01:50:45 level of that while those kinds of preferences can be a useful start society has to be alive in its moment you know in the moment as society is alive it's cognating it's 01:50:57 it's it's it's actively uh you know comparing what it's the result of its actions to the model that is in its head and uh so active inference offers this way 01:51:09 to uh to balance uh exploration and and uh and uh exploitation and remain critical and remain optimally cognitive right so that's part of it 01:51:24 uh and then part of it i mean and for me this the the the idea of the embodied uh you know the three four e's uh this is what i really am attracted to in 01:51:46 active inference is in a sense it's kind of a simple concept it's not really very complicated you know if you've studied bayesian uh theory it all it's kind of straight you know in a way it's kind of straightforward 01:51:58 but the the you know the way fristen has connected the dots and and and and uh 
extended that into the bigger picture of life kind of it it to me it is uh it is rich 01:52:11 there's a there's a lot yet to be learned and gained and explored in this umbrella of active inference

      Active inference is exemplified using a robot, but is really a model of how humans learn, process information and make decisions in the world.
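      The goal-directed versus epistemic balance described here can be sketched numerically. This is a minimal caricature of active inference, not Friston's formalism: beliefs about two options are Beta distributions, "pragmatic value" is the expected reward, and belief variance stands in for expected information gain. The two-option setup and the epistemic weight are illustrative assumptions.

```python
def belief_variance(a, b):
    # Variance of a Beta(a, b) belief: a simple proxy for remaining uncertainty.
    return a * b / ((a + b) ** 2 * (a + b + 1))

def score(a, b, epistemic_weight):
    pragmatic = a / (a + b)            # expected reward under the current belief
    epistemic = belief_variance(a, b)  # what exploring could still teach us
    return pragmatic + epistemic_weight * epistemic

known = (8, 2)   # tried 10 times, ~80% success: low uncertainty
novel = (1, 1)   # never tried: maximal uncertainty

# With little appetite for information the known option wins (exploitation);
# weight the epistemic term more heavily and the novel option wins (exploration).
print(score(*known, 1.0) > score(*novel, 1.0))   # True: exploit
print(score(*novel, 5.0) > score(*known, 5.0))   # True: explore
```

      The point of the sketch is only that a single scoring rule, rather than a hand-written checklist of goals, can shift an agent between exploiting what it knows and exploring what it does not.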

    3. we'll go into an example here with self-organized criticality so the idea 01:40:03 there is that was coined by back back bak in 87 the term self-organized criticality and it's it's really not a controversial that that living systems and 01:40:16 and many most systems in life complex systems organize in some way but the idea of self-organized criticality is that the organism itself is adjusting is is keeping some kind of adjustment 01:40:28 uh to uh to maintain a critical state and by critical state i mean a state on the ver like you can think of a saddle point so if you drop a model on us on a saddle it's going to not stay there it's going to you know 01:40:41 it's going to change it's going to change one way or the other right so a critical state is like that that threshold where things are about to change from one way to another way and uh 01:40:55 it turns out with you know work and information theory and other other fields of recent in recent years it turns out that uh processing uh whether it's we're talking about a computer or some other 01:41:07 you know machine or or a brain turns out that processing is kind of optimal in a sense when this when the system is at a this this this this critical state and 01:41:21 some people call it on the edge of chaos because things are things can easily change and sometimes it's you can think of that threshold as a 01:41:33 as a as a threshold of a critical state you can think of it as a threshold of the threshold we say between exploration and exploitation like should i should i go should i go 01:41:45 find a new planet for humans to live on or should i fix the planet that you know should i fix the systems on this planet first you know how do we balance exploration of the new versus using the information we have to improve 01:41:59 what we already have so you can think of that as exploration exploitation trade-offs stability agility trade-off do we do we remain stable and use ideas from the old in the past or do we 
are we more agile and we're more 01:42:13 flexible and we bring in new new ideas so it's like you can call it old new trade old new trade-off but whatever whatever trade-off you want to call it it's this sitting at the edge of going one way or the other 01:42:26 maximally flexible of going one way or the other and it's at that threshold that level that point the kind of that region of criticality that information processing seems to be 01:42:39 maximal so if uh it's no wonder then that the human brain is is organized in such a way to be living on this threshold between agility and stability 01:42:52 and uh now here's an example of that from like a real world example so a a system that is at a critical state is going to be maximally 01:43:03 sensitive to input so that means that there could you know when just when that marble is sitting on the saddle just a little bump to that saddle from one little corner of its universe and right like just one 01:43:16 little organism bumps it and maybe that marble rolls one way or the other right so that one one little input had a major impact on how the whole thing moves 01:43:29 its trajectory into the future right but isn't that what we isn't that kind of what we have in mind for democracy i mean don't we want everyone to have access of engaging into the decision-making processes 01:43:43 of a society and have every voice heard in at least in the sense that there's the possibility that just my voice just me doing my participation in this system might actually 01:43:56 ripple through the system and have a uh you know a real effect a useful effect i mean i think like maybe maybe self-organized criticality can help to inform us the concept of 01:44:10 self-organized criticality can help to inform us of what do we want from democracy or a decision-making process right you know that just makes me think about different like landslides 01:44:22 and that's something that criticality theory and catastrophe theory has been used to study 
and instead of cascading failure we can think about like cascading neighborhood cleanups so a bunch of people just say 01:44:34 today just for an hour i feel like doing a little cleanup and all of a sudden one person puts up the flag and then it's cascading locally in some just you know unspecified way but all of a sudden you're getting this this distribution with a ton of small 01:44:48 little meetups and then several really large sweeping changes but the total number of people cleaning up is higher because you offered the affordance and the ability for the affordance to sort of propagate 01:44:59 that's right that's right we're talking about a propagation of of a propagation of information a propagation of action and the possibility that even uh you know just one or a few individuals could start a 01:45:12 little chain chain reaction that actually does affect in a positive way society now it's a little too it's almost too bad that sand piles were the original uh you know topic of 01:45:23 of this of self-organized criticality because as you point out it's not really about things falling apart it's about it's about if you think of again if you think of a complex system as a system more capable of solving more 01:45:37 challenging problems then more often you can think of self-organized criticality as a way to propagate information when it is really needed when the system needs to change 01:45:51 uh then information is you know it ingests information from its world from its senses and can act accordingly we we just um submitted an abstract with criticality and active inference and one 01:46:04 of the points was actually the existence of self-organized criticality implies a far from equilibrium system that's actively pumping energy in that's because it's a passive system that's not 01:46:15 locked and loaded

      Example of self-organized criticality; it is a bit reminiscent of social tipping points. One variable not discussed here, which would enrich the account, is the idea of progress traps: the gap between finite human anticipatory models and the uncountable number of patterns and possible states of the universe.

    4. now we talk i talk about a few ideas good regulators requisite variety self-organized criticality and then the 01:35:04 free energy principle from active inference um and uh maybe i'll just try to briefly talk mention what's what those means for what those ideas mean for people who 01:35:15 aren't familiar so good regulator really came from the good regular theorem or whatever it's called really came from cybernetics ash ashby yeah a lot his law of requisite 01:35:33 variety and uh the it's the concept is that a organism or a you know a system must be must be a model of that which it but 01:35:47 that needs to control

      These are technical terms employed in this model:

      * Good regulators
      * Requisite variety
      * Self-organized criticality
      * Free energy principle
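      Ashby's good-regulator theorem and law of requisite variety can be made concrete with a toy calculation: a regulator can hold an essential variable steady only if it has at least as many distinct responses as there are disturbances. The modular arithmetic below is an invented illustration, not Ashby's own example.

```python
DISTURBANCES = (0, 1, 2)   # three distinct disturbances hit the system

def regulated_outcomes(actions):
    # The regulator observes each disturbance d and picks its best counter-move;
    # outcome (d + a) % 3 == 0 means the essential variable was held steady.
    return {min((d + a) % 3 for a in actions) for d in DISTURBANCES}

# Requisite variety met: one counter-move per disturbance, perfect regulation.
print(regulated_outcomes((0, 1, 2)))   # {0}
# Variety deficit: two moves against three disturbances, so disturbance leaks through.
print(regulated_outcomes((0, 1)))      # {0, 1}
```

      The regulator with full variety is, in effect, a model of its disturbances (one response per case), which is the good-regulator intuition in miniature.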

    5. there's a lot of discussion about complex systems you know we've been discussing complex systems and i just want to make a couple of points here because uh 01:31:28 commonly some it is not uncommon that someone will say a complex system well that just means that it's liable to fall apart at any moment you know it's just too complex it's going to crash uh but and that that obviously can 01:31:41 happen you know systems can collapse quite quite true but obviously life would not be doing very well if the if if the evolution builds complexity 01:31:53 in species and you know in organisms and ecosystems if life would be have a rough go of it if it was so fragile that uh complexity became a 01:32:07 burden and and uh you know come and then you know you reach a certain level of complexity and then you fall apart that's not really i don't think i mean that can happen but that's but but complex useful complexity 01:32:19 doesn't make you fall apart it actually just does the opposite it serves what we've been talking about all along and that's problem solving so we are anticipatory organisms we are problem 01:32:31 solving organisms it's our nature most of what the human brain does is to solve problems of one kind or another social problems physical problems whatever and maneuver in the world you 01:32:43 know in a useful way and complexity is what allows that there's a number of studies that i cite here that show that as an organism even as a robot you know 01:32:56 faces uh more difficult pressures from its environment it complexifies and complexifies by complexity then it's it's it implies 01:33:08 a greater number of parts coordinating or cooperating in some way uh to you know solve this new challenge and obviously as a human we're very complex we have 01:33:22 we have complex needs we have we can think not just what's going to happen in the next millisecond but what's going to happen we can think about what's going to happen in 100 years i mean part of this 
project is to think about what might be 01:33:36 happening over the next hundred years or even a thousand years so as an organism complexifies it become it at least potentially becomes a better adapted to solving more complex 01:33:49 problems so you could and from that sense you could almost ex equate complexity with problem-solving capacity you know at least in a uh you know in a 01:34:01 general sense and then i talked about well that just reminds me of in the free energy calculations that we um have gone over in various papers it's like accuracy is the modeling imperative and 01:34:14 then complexity is tolerated to the extent it facilitates accurate modeling so if you get the one parameter model and you got 99 and it's adequate and it's good then you're good to go and you're gonna go for simplicity 01:34:26 but then what you're saying is actually the um appearance and the hallmark of complexity in the world it means that that organism has the need to solve problems at a given 01:34:40 level of counterfactual depth or inference skill or temporal depth temporal thickness

      While complex systems these days have connotations of being fragile or challenging to fix, in evolutionary biology complexity has evolved in organisms to make them more adaptable and more fit. Human beings are complex organisms. Most of our brain is dedicated to solving one type of problem or another; we anticipate the world, and problem solving involves choosing the best option based on that anticipation and our models.
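      The accuracy/complexity point in this note comes from the standard variational free-energy decomposition, F = complexity - accuracy = KL(q(s) || p(s)) - E_q[log p(o|s)]. A small worked example (the two-hypothesis numbers are invented for illustration):

```python
import math

def free_energy(q, prior, likelihood):
    # F = KL(q(s) || p(s))           (complexity: how far belief moved from prior)
    #   - sum_s q(s) * log p(o|s)    (accuracy: how well belief explains the data)
    complexity = sum(qs * math.log(qs / ps) for qs, ps in zip(q, prior) if qs > 0)
    accuracy = sum(qs * math.log(ls) for qs, ls in zip(q, likelihood) if qs > 0)
    return complexity - accuracy

prior = [0.5, 0.5]        # two hypotheses, equally plausible a priori
likelihood = [0.9, 0.1]   # the observation strongly favours hypothesis 0

# Keeping the prior is maximally simple (zero complexity) but inaccurate;
# moving the belief toward hypothesis 0 pays complexity to buy accuracy.
lazy = free_energy([0.5, 0.5], prior, likelihood)
fit = free_energy([0.9, 0.1], prior, likelihood)  # this q is the exact posterior
print(round(lazy, 3), round(fit, 3))              # 1.204 0.693
```

      The second belief attains the minimum, -log p(o) = ln 2, which is the sense in which "complexity is tolerated to the extent it facilitates accurate modeling".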

    6. society by you know by uh uh you know it's just that's necessarily shares a similar related intrinsic 01:29:58 purpose which is to achieve and maintain vitality maintain and maintain and by maintain i mean anticipate into the future maintain vitality which is accomplished through 01:30:11 cognition and cooperation so the self that we must keep vital is the extended self and it follows that the intrinsic purpose of societal systems like financial systems and other is to serve the intrinsic purpose of society

      Similarly, the intrinsic purpose of a society, as an individual organism (a superorganism), is to maintain vitality and sustain the flourishing of itself, including its extended self, through its cognitive architecture: sensing, evaluating, modeling, anticipating, and taking action.

    7. what is our purpose so so so over on the right there i just want to reemphasize we are anticipatory we are cognitive we are problem solvers 01:25:44 we are a we and then i have below that i am a we you know like i i am i can i am i'm intimately connected with this i'm i'm everyone in that sense you know 01:25:57 yep yeah the whitman um you know i contain multitudes and also gilbert at all i have a paper called um we were never individuals kind of on that wavelength that you were talking about with the sort of distributed systems all the way down 01:26:09 approach and also dennis noble no privilege level of biological causality similar uh basically realization that multi-scale perspective complexity science basically entails 01:26:22 either the choice of a priori level like saying it is multi-scale and humans are the best scale or gaia is the scale or quantum is the right scale that's a claim as well as it being a claim 01:26:35 actually there's no privileged level of causality so that's the sort of table as it's said right right right right right and you know what it's not that really 01:26:46 this this entire project you could say in like a sentence you could say this whole project is to help us be who we are more be more uh honestly who we are more real 01:27:01 to who we are right it's not the it's not to to have people behave in some unusual way or some altruistic way or anything like that it is it is to have 01:27:12 it is to be more more ourselves more fully ourselves more completely ourselves and then all of these pages all these things we're talking about is who that self is who who are we really and it's about the 01:27:25 adjacent possible for who we are who we are is not an essence that is uh there's uh seven seals and it's being unlocked it's actually something that's being drawn out through 01:27:36 inactive realization in the niche through niche modification through stigma through becoming and and then the adjacent possible is where the 
imagination and the planning comes into play and if people are hesitant to talk about 01:27:49 the adjacent possible for who we could be just think about chess it's the adjacent possible with the strategy on the board and we're talking about the adjacent strategy possible for who we could be in terms of our strategy 01:28:02 for you know all these recursive layers our strategy for how we think of ourselves and all these other things you're talking about absolutely absolutely and then and then ultimately serving the 01:28:13 serving the kind of fitness purpose of you know if we take action a is that going to reduce our uncertainty about those things that we that really matter you know that are that are the the 01:28:27 the key variables

      Consciousness is the psychological aspect existing at one level of a multi-level human psycho-biological-cultural INTERbeCOMing gestalt.

    8. what is our worldview what do we value and what is our purpose and then we've come to this question then okay so who the heck are we then you know we're we're and it and not only who are we but 01:22:42 who are we building these systems for you know what what what is what should societal system serve who or what should societal systems serve and the only reasonable answer that you 01:22:55 can come up with is that societal systems should serve the the extended self like not just the body not just the family not just the 01:23:07 you know the thousand people in a society or the ten thousand or a million or whatever but their environment the the society next door that they're engaged with and cooperating with and coordinating with 01:23:18 the society across the planet that they're sharing information with and learning together with and so it's the whole that we are metrics as we as leaders who 01:23:30 come to metrics those metrics have to represent both the cognitive process how good how are we cogniting how well are we cogniting are we functionally cogniting and are we 01:23:43 achieving through that cognition are we achieving the kinds of aims that is serving the whole is the environment improving is the you know quality of air improving is the quality of life 01:23:56 improving for individuals right um yes so uh we are so this in a nutshell we this is the world view in a way we are in intimate with our greater world we are individuals but of the nested overlapping variety 01:24:12 individual cells bodies groups communities ecologies nations and all of civilization we're not separate in any absolute sense and there's no privileged level or scale to any of that nor are we passive bystanders in this 01:24:25 unfolding this is not this evolution is not it's just a chance thing like by chance somebody does this one day and then evolution goes on another another avenue no there there are 01:24:38 there are opportunities in the environment uh that we can react 
to that lend themselves to to to providing 01:24:50 information or providing gain of benefit of some kind and and you know we are driven we are are we are consciously creating and you know 01:25:03 even a really great societal system that integrated societal systems would be consciously creating acting cognitive acting cognitive and consciously creating and it towards some 01:25:15 towards some goal and that goal then has to be you know the maintaining of vitality being the in the for the extended sel

      The societal system is designed to serve the extended self, which includes everything beyond the individual self: the environment, other people, other species, etc. This is related to the concept of INTERbeing or INTERbeCOMing.

    9. that's something that insect with a six-legged version is now it's the whole super organism oh well the ant colony is society that whole frame is actually the shadow of 01:18:52 what the evolutionary reality is which is that the ant colony is an organism not a super organism and the ants are tissues and so which level we prioritize or do we say no there's no a priori level ant is just i'm not even 01:19:05 gonna say there's anything out there called ants it's it's you how you're thinking about it or do we get lost or are we going to find a ladder in that multi-scale yeah well the latter is you know the 01:19:17 latter is active inference because it doesn't say active inference doesn't say make a make an internal model of the world that is accurate that actually accurately captures all the all the 01:19:30 details of the world of the universe that's not the point that's not what the mind does that's not the point the point is to act under uncertainty 01:19:41 given some useful model of the world act under uncertainty so that your fitness score improves and by fitness score here we essentially mean you know and anticipated uncertainty so i i would 01:19:55 very much like to be have some certainty that i'm going to be alive tomorrow and if it's freezing outside and i don't have a coat on uh you know that that becomes iffy so uh i'm going to be happy 01:20:06 if i'm going to be i'm going to go find a coat because it is going to reduce my uncertainty about survival over the next 24 hours but you can expand that you know outward right we we need to act the all organisms are acting under 01:20:19 uncertainty and and we can think about that as we can think about that um we can think from that perspective as a society of what are we doing and how do we measure 01:20:31 success well we're measuring success by acting under uncertainty and then re and then paying attention to what happens and then acting the same or differently or you know some other way or somehow some 
were 01:20:45 then choosing to act again in this cycle of act uh you know act uh process act process act process you know model act model act model that reminds me of course of the ooda 01:20:57 observe orient side act model and other sort of cyclic models of action and perception and then i would say that active inference provides a few nice little benefits over other phrasings of action and perception 01:21:10 qualitative and philosophical ones like inactivism as well as quantitative ones like cybernetics and other kinds of control theories so i totally agree this

      An organism acts under uncertainty in order to reduce that uncertainty and meet its objectives.

    10. there was an interesting paper that came out i cited in the in my in my in paper number one that uh was 01:15:53 looking at this question of what is an individual and they were looking at it from an information theory standpoint you know so they came up with this they came up with this uh uh theory uh and i think do they have a name for 01:16:09 it yeah uh information theory of individuality and they say base it's done at the bottom of the slide there and they say basically that uh you know an individual is a process just what's 01:16:20 what we've been talking about before that propagates information from the past into the future so that you know implies uh information flow and implies a cognitive process uh it implies anticipation of 01:16:33 the future uh and it probably implies action and this thing that is an individual it is not like it is a layered hierarchical individual it's like you can draw a circle around 01:16:45 anything you know in a certain sense and call it an individual under you know with certain uh definitions you know if you want to define what its markov blanket is 01:16:57 but uh but you know we are we are we are our cells are individuals our tissues liver say is an individual um a human is an individual a family is an 01:17:12 individual you know and it just keeps expanding outward from there the society is an individual so it really it's none of those are have you know any kind of inherent preference 01:17:24 levels there's no preference to any of those levels everything's an individual layered interacting overlapping individuals and it's just it's just a it's really just a the idea of an individual is just where 01:17:36 do you want to draw your circle and then you can you know then you can talk about an individual at whatever level you want so so that's all about information so it's all about processing information right

      The "individual" is therefore scale- and perspective-dependent: there are many ways to draw the circle around an individual, depending on the level you are looking at and the viewpoint you take.

      Information theory of individuality addresses this aspect.

    11. anticipations is key to 01:08:38 everything and attention is key to everything so every organism does that plants and everything else and it doesn't require a central nervous system 01:08:51 and and you i might add to this that not only is every organism cognitive but essentially every organism organism is cooperative to those cooperation and cognition 01:09:03 go hand in hand because any intelligent organism any organism that can act to better its you know viability is going to cooperate in 01:09:17 meaningful ways with other organisms and you know other species and things like that nice point because um there's cost to communication whether it's exactly whether it's the cost of making the pheromone 01:09:30 or just the time which is super finite or attention fundamentally and so costly interactions through time the game theory are either to exploit and stabilize which is fragile 01:09:42 or to succeed together yeah exactly and and and succeeding together cooperation is is is like everywhere once you once you understand what you're looking 01:09:54 for it's in the biologic world it's like everywhere so this idea that we're you know one one one person against all or you know we're a dog eat dog universe i mean it's you 01:10:08 know in a certain sense it's true obviously tigers eat you know whatever they eat zebras or whatever i mean that happens yes of course but in the larger picture 01:10:19 over and over multiple time scales not just uh you know in five minutes but over evolutionary time scales and uh you know developmental time scales and everything the cooperation is really the rule 01:10:33 for the most part and if you need if any listener needs proof of that just think of who you think of your body i mean there's about a trillion some trillion some cells 01:10:45 that are enormously harmonious like your blood pumps every day or you know this is a this is like a miracle i don't want to use the word miracle because i want to get into 01:10:59 whatever 
that might imply but uh it is amazing aw inspiring the the depth of cooperation just in our own bodies is like that's that's like 01:11:12 evolution must prefer cooperation or else there would never be such a complex uh pattern of cooperation as we see just in one human body 01:11:26 just to give one example from the bees so from a species i study it's almost like a sparring type of cooperation because when it was discovered that there were some workers with developed ovaries 01:11:38 there was a whole story about cheating and policing and about altruism and this equation says this and that equation says that and then when you take a step back it's like the colony having a distribution of over-reactivation 01:11:51 may be more ecologically resilient so um i as an evolutionary biologist never think well my interpretation of what would be lovey-dovey in this system must be how it works because that's so 01:12:05 clearly not true it's just to say that there are interesting dynamics within and between levels and in the long run cooperation and stable cooperation and like learning to adapt 01:12:17 to your niche is a winning strategy in a way that locking down just isn't but unfortunately under high um stress and 01:12:29 uh high uncertainty conditions simple strategies can become rife so that's sort of a failure mode of the population

      The body of any multicellular organism, whether flora or fauna, is a prime example of cooperation: billions of cells, together with their microbial partners, cooperating exquisitely to regulate a single body system. In this sense a multicellular organism is itself a superorganism, and social organisms then constitute an additional layer of superorganismic behavior.

    12. in you know the main theme of this first paper is i've tried to lay out a world view that is cognizant of that reflects some of the latest developments 01:03:14 in in science and in a variety of fields and sciences in those fields would be like complex system science cognitive science evolutionary biology 01:03:28 uh in a you know a few fields like that information theory and a few things like that i i've tried to to outline a a a world view that makes sense 01:03:42 from that leading edge of science and i would say too that that science has gone through really kind of a revolution you know there was like it's kind of like there's the pre-1950s 60s science and then there's 01:03:56 what we have today and there's enormous jumps enormous leaps in understanding that have happened just in the last say 50 years or so and and and some of those leaps the 01:04:09 ramifications are only now being you know the they're now being felt right the the the the some of the concepts are a a distinct shift 01:04:20 from where we were in the you know the pre-1960s or pre-1970s or so and um obviously we there's also you know we we see the changes in our 01:04:34 lifetime you know like we i was watching an old show on tv the other day and somebody put money in a pay phone you know like they put a quarter and a pay phone to make telephone call and it's like okay well that's that's that's that's history you know so 01:04:48 there's all these technologies that have that are that are that i grew up with that are not they don't even exist anymore they've been replaced by entirely new frameworks and that's the speed of those of the speed of that 01:05:00 evolution is is exponential so it's tremendous changes happening very quickly and the task of the program oh yeah which one put on that would be 01:05:12 um potentially because you're taking such a broad perspective with complex system science and evolutionary bio you might say that society has always been a cognitive architecture 
but if you had asked in 01:05:25 1500 is society a cognitive architecture be like well no i mean you have agriculture you have this you have this you have that whereas now if you tell people hey telecoms are they run through everything and the internet of 01:05:38 things the internet people you know like all this sort of stuff you tell people actually it's a multi-scale cognitive architecture humans are in the loop and our algorithms are never independent from us they're in feedback with us it's like 01:05:50 yeah that was what the mainstream was telling me so actually it's a total alignment point because it reflects how rapidly things are changing that it's just undeniably obvious that the 01:06:02 communication infrastructure is the system that we're engineering right right absolutely yeah yeah communications have mind-blowing changes and communications and and that brings mind-blowing changes and 01:06:14 outlook but but i want to emphasize a few points to this worldview that that you know it's not it's not just that everything is connected like you know you go like i know what a complex system is it just means everything is connected and we're 01:06:27 all kind of whole and blah blah okay fine but but even for people who are in that ilk you know who understand the the basic concept there there's ideas 01:06:40 that are that have come out in the last decade or so that that are there that are even pushing that boundary you know right and and i just want to highlight a few concepts here and i think active inference really is 01:06:52 playing a you know is is like a in a sense a culmination of some of some of these ideas or an embodiment of some of these ideas the main thing i want to say is that life is intelligent 01:07:06 and whole so it's not just that everything's connected it's that everything is intelligent everything is a lot you know life is an intelligent information processing 01:07:20 thing everything is is is adapting learning deciding whether we're 
talking about everything is cognitive 01:07:33 you know and cognition really implies information and information processing so whether we're talking about a slime mold or a human you know there's there's in in everything in plants in 01:07:46 bacteria in mold and anything that has any life at all that can be considered alive is intelligent and is learning and reacting not just reacting but learning 01:07:58 reacting and also deciding and acting and remembering and all those things and you might ask well you know that's impossible bacteria doesn't have a brain you know it can't be it can't be cognitive but it is 01:08:14 cognitive but we just have to relax what we how we define cognition you know and when on the slide i have a little thing there that every organism 01:08:25 is cognitive in the sense that it displays capacities typically associated with human cognition such as sensing learning problem solving memory storage and recall

      The first paper lays out a worldview, and focuses on extending the concept of cognition to all forms of life, not just humans and the other "higher" organisms traditionally credited with cognition.

    13. the six 00:48:41 six big systems i've mentioned can be viewed as a cognitive architecture it's the it's the means by which the society learns decides adapts and 00:48:54 and this society's efforts this is the third underlying position the society's efforts to learn decide and adapt and be viewed as being driven by an intrinsic purpose and that's really key also 00:49:08 because it's not just that we're learning deciding and adapting willy-nilly i mean i mean maybe it seems that way in the world you know in the sense we're so dysfunctional it kind of is billy nilly but 00:49:20 but what really matters is that we learn decide and adapt in relation to whatever intrinsic purpose we actually have as as a society as individuals in a 00:49:34 society it's that it's it's it's it's as i will use the the term uh maybe several times today it's solving problems that matter that really that really 00:49:45 matter that's what we're after

      Second Proposition: The six thrusts, or primary societal systems, are the cognitive architecture of the superorganism, which it uses to sense the world and to learn, decide, and adapt.

    14. first is that uh a society of any scale and and i don't mean society is in bill millions or billions of people i mean society as in a thousand people you know like a sub 00:47:23 sub city a community that is not even a whole city just a a group of like-minded people uh who are willing to give this a give this you know 00:47:35 a field trial ago a society of any scale can be viewed as a super organism so that's kind of fundamental everything really really works from there we are together we 00:47:49 are not just individuals connected we are a whole society is a whole and it's a and it's a whole with the environment and it's wider you know 00:48:03 sphere so as we'll talk about today you know this even the idea of an individual is it's okay to talk about individuals it's fine but it's kind of like an arbitrary thing an 00:48:15 individual could be an individual cell or an individual person or an individual uh species or an individual ecosystem but it's all with all deeply embedded and enmeshed 00:48:28 entwined with the whole so uh uh a society can be viewed as a super organism

      First Proposition: Society (at every scale, even the community scale) can be seen as a superorganism, with the individual and society entangled. This is analogous to the SRG adoption of the human INTERbeing concept, which treats the individual as a gestalt of both an autonomous person and an enmeshed cell of a larger social organ.

      In fact, the human organism can be seen from three different perspectives and levels of being:

      1. an aggregation of billions of cells and trillions of microbes, wherein consciousness can be regarded as an emergent property of a complex system of a population of microorganisms
      2. the 4E (Embodied, Enacted, Embedded, Extended) lived experience of consciousness
      3. as a cell in a larger social superorganism (SSO).
    1. If it’s too much or too little, people stop working on the problem if they can.

      I find this concept interesting because there seems to be a Goldilocks zone of sorts in which learning is able to thrive. The material has to be just hard enough to capture the brain's attention, but not so hard that the brain gives up.

    1. We’re all in the middle of a recession, like we’re all going to start buyingexpensive organic food and running to the green market. There’s somethingvery Khmer Rouge about Alice Waters that has become unrealistic. I’mnot crazy about [America’s] obsession with corn or ethanol . . . but I’m[uncomfortable] with legislating good eating habits.45

      I feel like this is really connected to my first annotation on this essay: your plastic bags are not what is killing our planet. As we "legislate good eating habits," as the author puts it, are we further disenfranchising people from their food? "Better" food is head and shoulders more expensive; at the grocery store the organic produce is separated out into its own sections, sections I never shop in because of the price difference. I am all for making sustainable and healthy food a more mainstream option, but the way we currently go about it just further widens the gap between those who can afford "good" food and those who can't.

    1. “It’s no use, Your Honor,” K. continued, “even your little notebook confirms what I’m saying.” Pleased that his own calm words alone were to be heard in that strange assembly, K. even dared to snatch the notebook from the magistrate’s hands and lift it in his fingertips by a single center page, as if he were repelled by it, so that the foxed and spotted leaves filled with closely spaced script hung down on both sides. “These are the records of the examining magistrate,” he said, letting the notebook drop to the table. “Just keep reading through them, Your Honor, I really have nothing to fear from this account book, although it’s closed to me, since I can barely stand to touch it with the tips of two fingers.” It could only be a sign of deep humiliation, or at least so it seemed, that the examining magistrate took the notebook from where it had fallen on the table, tried to put it to rights somewhat, and lifted it to read again.

      This entire paragraph confuses me about how the court proceedings work. How is K. allowed to interrupt and mock the magistrate? Are there not guards and other people there to keep him in check? Why is he allowed to call the magistrate a liar and practically tell him he is wrong?

    1. 6.8 Psychoactiv

      While it is good that this chapter links to another chapter, it is not very helpful in itself; minimal detail is given. Perhaps include a bit more information in this section in addition to the links, since it can be frustrating to be assigned a chapter that basically just says to read another chapter.

    1. Reviewer #2 (Public Review):

      The authors wish to investigate how various allocentric representations, such as those observed in the brain's navigational system, can emerge from the interaction between action and sensory inputs. They use a predictive architecture, in which visual inputs are predicted from actions, to explain the emergence of multiple allocentric representations (HD cells, place cells, boundary vector cells). The major strength of the paper is the demonstration of the network's ability to develop spatial representations of multiple virtual environments and the demonstration that such representations can be used as a foundation to quickly represent new environments and to support further reinforcement learning tasks. However, the analysis is not yet sufficient to support a number of claims made in the paper about critical pieces of the findings. Further, two critical aspects of the model, namely the correction step, and the RNN-3 memory store, are not adequately described, rely on decisions that are not adequately justified, and their properties/significance are not adequately investigated. Thus, while the authors did demonstrate the emergence of spatial representation and the utility of their model, their presentation did not adequately support their conclusions. With significant revisions to the text and additional experiments/analysis, this work will have a significant impact on the field, and their model will be of further use to the community.

      My major concern is that two critical aspects of the model, namely the correction step, and the RNN-3 Memory store, are not adequately described, rely on decisions that are not adequately justified, and their properties/significance are not adequately investigated, as discussed below.

      Correction step

      - In the results, the correction step is minimally described. However, the method is fairly involved. For example, lines 81-82 state that "visual information being communicated only by the activation of slots in the memory stores (Fig 1B)". Similar descriptions are given in lines 102-103 and 125-126. However, the nature of these predictions is not stated in the results or well-diagrammed in Figure 1B. It might help to specify, for example in the figure legend, that further details about this step are provided in Supplementary figure 1. As this is a crucial piece of the model, I recommend that at least a few more sentences be given to this step in the results, which outlines the high-level details of the correction step.

      - In the methods, the description of the correction step is inadequate, it's given simply as G(x,x). While this may be appropriate for a machine learning conference proceeding, it's not appropriate for a general journal. The authors should include equations that specify G (as well as F), which could be included in the section "Sigmoid-LSTM and Sigmoid-Vanilla". Further, the authors might want to justify the need for an entirely new RNN cell, rather than another input to the existing RNN. In lines 318-319: "each x~ can be thought of as the result of a weighted reactivation of the RNN memory embeddings by the current visual input." It might be useful to explain the correction code as: "the expected RNN activation given the current visual input's activation of the memory cells".

      - Lines 125-126 state that: "RNN-3 received no self-motion inputs, thus being dependent on temporal coherence, and corrections from mispredictions as its sole input". It's unclear why the corrections to this RNN are generated from "mispredictions", and not just visual "corrections", like in the other RNNs. Further, nothing in the implementation of the correction step enforces that it gives "corrections", only that it learns to incorporate information from the current visual input, via the memory store, to the action of the RNNs. They're just occasional information that the network learns to use to update the RNN state as best as possible. While this is presented as a correction, it's unclear what this RNN actually does. Does it learn to simply replace the existing x with what it should be from the memory store (i.e. a correction)? Or does it combine information from x^hat and x^tilde in some complicated way? To understand this, I recommend the authors could compare x^, x~, and x. During the correction step.

      - Finally, the authors state that (Line numbers missing), "to correct for the accumulation of integration errors, the RNNs must incorporate positional and directional information from upstream visual inputs as well. This correction step should not be performed at every time step, or the integration of velocities would be unnecessary; in our experiments, it was performed at random timesteps with probability Pcorrection = 0.1." This entails a claim that for Pcorrection=0, errors will accumulate, while for Pcorrection=1, the integration of velocities will be "unnecessary". While this makes intuitive sense, no empirical justification for these claims is shown, and their implications for the model's function and representation are not demonstrated. I would suggest that the authors compare a range of Pcorrection values, for example, p=[1, 0.3, 0.1, 0.03, 0.01], and demonstrate how the network performance and spatial representation vary as a function of Pcorrection. Finally, though less important, it's unclear why this correction is probabilistic. This decision could be justified, e.g. with an experiment comparing the results of probabilistic versus deterministic/periodic corrections.

      RNN-3 and Memory store<br /> This seems like a key feature of the model, yet its implementation gets very little attention in the results, and the description is conflicting and difficult to understand.

      - Line 142 states that "the allocentric representations of RNN-3 were stored in the external memory slots as a second set of targets - being reactivated at each time step by comparison to the current state of the RNN-3". However, it's unclear what's meant by a "second" set of targets, or why this is unique to RNN-3. From the text, it seems that this could either refer to m(x)_3 (the memory map corresponding to RNN3), or s (the slots). However, from my interpretation of the methods as written, the m(x) parameters are learned, and s are activated by the joint activity of all three RNNs, not just RNN-3 (Equation 4). Why is this written as if it's a separate group of slots unique to RNN-3?

      - Further, how is the activity of memory slots assessed? While I can imagine (though not found in the method), how the tuning curves of RNN-1-3 are calculated, because of the confusion with what this set of targets refers to I don't know how e.g. Figure 2E was calculated. I recommend this be included in the methods. Importantly, I recommend the authors expand the description of RNN-3 and its associated memory store in the results, and clarify its description in the methods section.

      - Lines 320 and 322 states that the memory store contents corresponding to the RNNs m(x) are optimized parameters, while those corresponding to upstream inputs m(y) are not. However, Line 325 states that all contents are chosen and assigned (m(y), m(x)) := (y, x).

      - Finally, no justification was given as to why RNN-3 was added. The authors justify the addition of RNN-2 by stating that "a single RNN receiving all the velocity inputs did not develop the whole range of representations" (Line 101). However, no justification is given for a third RNN that receives no input. As this is a key piece of the results, justifying and understanding its contribution is critical. Does this affect predictive performance, the ability to generalize to new environments, or utility for RL, or is it simply adding a representational similarity to hippocampal place fields and egoBVCs? I recommend that the authors show the results of a network with only RNN-1 and RNN-2, to justify the addition of RNN-3 and demonstrate its utility for prediction.

      On the head direction attractor analysis<br /> - Lines 174-176 state, "To investigate how our model incorporates visual information in its representation of heading, we simulated the input of visual corrections (512 images from the training environment), However, this experiment does not tell you "how the model incorporates visual information", but only the response to selected images. The intuitive idea is that the network learns to map distal cues to specific angles, but not ambiguous images. To test this hypothesis, I would recommend that the authors compare the heading direction of the visual correction input to the direction on the attractor activated, i.e. to show that images that give an attractor point match the heading of that image. Further, because the corrections are given through an entirely different RNN cell (G), from that which (presumably) holds the attractor (F), I would recommend that the authors show how the correction input to G interacts with an existing action-driven point on the attractor via F. For example, what if an image is shown that disagrees with the current heading direction?

      On the RL agent<br /> - Lines 200-202 state that "self-consistency is an adaptive characteristic allowing spatial behaviour learned in one environment to be quickly transferred to novel environments", and Line 223: "the spatial responses present in the SMP's RNN support rapid generalization to novel settings." While they've shown that the SMP can support RL and generalization, they haven't tested whether its spatial tuning is responsible for the performance. One way they could test this is to replace the SMP input to the RL agent with equivalent rate tuned units as inputs (whose rate is simply what would be expected from the tuning curve of each RNN unit). This experiment could be done for the pre-trained agent (to see if performance is maintained from the tuning curves alone, or if there's more information in the SMP that's being used), and possibly compared to a newly-trained agent.

    1. Reviewer #1 (Public Review):

      McLachlan and colleagues find surprisingly widespread transcriptional changes occurring in C. elegans neurons when worms are prevented from smelling food for 3 hours. Focusing most of the paper on the transcription of a single olfactory receptor, the authors demonstrate many molecular pathways across a variety of neurons that can cause many-fold changes in this receptor. There is some evidence that the levels of this single receptor can adjust behavior. I believe that the wealth of mostly very convincing data in this paper will be of interest to researchers who think about sensory habituation, but I think the authors' framing of the paper in terms of hunger is misleading.

      There is a lot to like about this paper, but I just cannot get over how off the framing is. Unless I am severely misunderstanding, the paper is about sensory habituation, but the word habituation is not used in the paper. Instead, we hear very often about hunger (6x), state (92x), and sensorimotor things (23x). This makes little sense to me. The worms are "fasted" (111x) for 3 hours, but most of the expression changes are reversed if the worms can smell, but not eat, the food. And I've heard about the fasted state, noting that worms don't eat more food after this type of "fasting". So what is with all of this hunger/state discussion?

      And the discussion of internal states is often naïve. In the second paragraph of the introduction, we are told that "Recent work has identified specific cell populations that can induce internal states", beginning with AgRP neurons, which have been known to control the hunger state in mammals for nearly 40 years (Clark J. T., Kalra P. S., Crowley W. R., Kalra S. P. (1984). Neuropeptide Y and human pancreatic polypeptide stimulate feeding behavior in rats. Endocrinology 115 427-429. Hahn T. M., Breininger J. F., Baskin D. G., Schwartz M. W. (1998). Coexpression of Agrp and NPY in fasting-activated hypothalamic neurons. Nat. Neurosci. 1 271-272). Instead, the authors cite three papers from 2015, whose major contribution was to show that AgRP activity surprisingly decreases when animals encounter food. These papers absolutely did not identify AgRP neurons as inducing internal states or driving behavioral changes typical of hunger (Aponte, Y., Atasoy, D., and Sternson, S. M. (2011). AGRP neurons are sufficient to orchestrate feeding behavior rapidly and without training. Nat. Neurosci. 14, 351-355. doi: 10.1038/nn.2739; Krashes, M. J., Koda, S., Ye, C., Rogan, S. C., Adams, A. C., Cusher, D. S., et al. (2011). Rapid, reversible activation of AgRP neurons drives feeding behavior in mice. J. Clin. Invest. 121, 1424-1428. doi: 10.1172/jci46229). Nor did Will Allen's work in Karl Deisseroth's lab discover neurons that drive thirst behaviors. Later in the same paragraph, we hear that: "However, animals can exhibit more than one state at a time, like hunger, stress, or aggression. Therefore, the sensorimotor pathways that implement specific motivated behaviors, such as approach or avoidance of a sensory cue, must integrate information about multiple states to adaptively control behavior." This is undoubtedly true, but it's not clear what it has to do with any of the data in this paper - I don't even think this is really about hunger, much less the interaction between hunger and other drives.

      To summarize: I think the authors could give the writing of the paper a serious rethink. I want to stay far away from telling people how to write their papers, so if the authors insist on framing this obviously sensory paper as being about hunger and sensorimotor circuitry I think they should at least explain to their readers why they are doing that in light of the evidence against it (and I think they should state clearly that worms don't actually eat more in this fasted state).

      I was also surprised by how unsurprised the authors seemed by the incredibly widespread changes they observed after 3 hours away from food. Over 1400 genes change at least 4-fold? That seems like a lot to me. But the authors, maybe for narrative reasons, only comment on how many of them are GPCRs (16.5%, which isn't that much of an overrepresentation compared to 8.5% in the whole genome). For me, these widespread and strong changes are much of the takeaway from this paper. But it does make you wonder how important the activity of one particular GPCR (selected more or less randomly) could be to the changes the worm undergoes when it can't smell food.

      str-44 is very convincingly upregulated when worms can't smell food, but it's clear from the data that this upregulation has very little to do with the actual lack of eating, and more with the lack of being able to sense bacteria for 3 hours. In Figure 1E, when worms are fasted, but in the presence of bacteria, receptor levels are largely unchanged (there are 5 outliers, out of ~50 samples). Since receptor expression doesn't change in this case even though the worms are in the fasted state, it cannot be "state-dependent" - unless the state is not having smelled food for the last 3 hours. And, in my opinion, that would divorce the word "state" from its ordinary meaning.

      The authors argue that str-44 expression modulates food-seeking behavior in fasted worms by causing them to preferentially seek out butyl and propyl acetate. However, the behavioral data to back this up has me a little worried. For example, take Figures 2F and 2G. They are the exact same experiment: comparing how many worms choose 1:10,000 butyl acetate compared to ethanol when the worms are either fasted or fed. In the first experiment (2F), ~70% chose butyl acetate for fasted worms and ~60% for fed worms. But in the replicate, ~60% choose butyl acetate for fasted worms and ~50% for fed worms. A 10% variability in baseline behavior is fine (but not what I would call a huge state change), but when the difference between conditions is the same size as baseline variability I start to disbelieve. Can the authors explain this variability? Or am I misunderstanding?

      And I'll say it just one last time, I think the authors are overselling their results...or at least the str-44 and AWA results (they are dramatically underselling the results that show the widespread changes in the expression level of 10% of the genome in response to not smelling food for 3 hours):

      "Our results reveal how diverse external and internal cues... converge at a single node in the C. elegans nervous system to allow for an adaptive sensorimotor response that reflects a complete integration of the animal's states."

      This implies that str-44 expression AWA is the determinant of whether a worm will act fasted or fed. I have already expressed why I don't believe this is the case (inedible bacteria experiment, Figure 1E), but just because things like osmotic stress suppress the upregulation of str-44, that doesn't mean that it is the site of convergence. It could be any of the other 1400 genes that changed 4+ fold with bacterial deprivation. And even in terms of the actual AWA neuron, it was chosen because it showed modest upregulation of chemoreceptors (1.8 fold compared to ~1.5 fold in ASE and ASG), even though chemoreceptors were highly upregulated in other neurons as well.

      Overall, and despite my critiques (and possibly tone), I really like this paper and think there really is a lot of interesting data in there.

    1. I define micro-ignorance as individual acts of ignoring, and macro-ignorance as the sedimentation of micro-ignoring into structural blind-spots that obscure uncomfortable truths from being more widely known. So, Blair shutting down the BAE Systems investigation is an example of micro-ignorance. But it’s not an isolated act – it has epistemological ripple effects. The halting of the investigation compounds the pretence that no corruption took place, and this contributes to a persistent and quite insidious form of macro-ignorance, which is the debatable belief that western governments are less corrupt than developing countries. Of course they’re just as corrupt, they’re just better at hiding it.

      It's a way of gaming the legal system as well.

      Dictators and authoritarian leaders have used it since the beginning of organized legal systems, right up to today. If it doesn't appear in the courts, then those who know they are guilty can claim moral authority because they have gamed the legal system.

    2. What is the "ostrich instruction"? It’s an informal way to refer to an important legal principle, which is that deliberate avoidance of knowledge about one’s own illegal actions should not be a valid defence in law. It’s one of the reasons why, just over 15 years ago, American executives at the energy company Enron were found guilty even though their direct role in malfeasance at Enron wasn’t clear. They should and could have known about fraud, and they were found guilty. Enron has been pointed to as proof that wilful ignorance does not pay. But in my book I stress that Enron is a fairly isolated case. As a number of other people have pointed out, from journalists Matt Taibbi and Jesse Eisenger to legal scholar Rena Steinzor, US authorities have had an abysmal record of successful prosecution of white-collar crime in recent decades. An American legal scholar based in the UK, Alexander Sarch, is doing important work on this topic. But he is an exception, and I’m a little critical of mainstream legal scholarship in my book. I suggest that legal scholars themselves have been acting ostrich-like about recent shifts in the law. For example, the old adage "ignorance of the law is no excuse" is increasingly waived in US courts when it comes to complex financial and tax crimes. This shift is one of the reasons why I argue that for many powerful and rich people, ignorance really is legal bliss.

      This is a salient point, and a leverage point.

    1. Pressley: How many extremist murders has the FBI linked to Black Lives Matter or similar black activist groups? McGarrity: We don’t work Black Lives Matter; it’s a movement. It’s an ideology. We don’t work that. Pressley: So the answer is none. Can you just say that for the record? There has been no killing that the FBI can link to Black Lives Matter or similar black activist groups, to your knowledge. McGarrity: To my knowledge—I’d have to go back—but to my knowledge, right now, no.

      This angered me though it didn't surprise me.

    1. reply to: https://ariadne.space/2022/07/01/a-silo-can-never-provide-digital-autonomy-to-its-users/

      Matt Ridley indicates in The Rational Optimist that markets for goods and services "work so well that it is hard to design them so they fail to deliver efficiency and innovation" while asset markets are nearly doomed to failure and require close and careful regulation.

      If we view the social media landscape from this perspective, an IndieWeb world in which people are purchasing services like easy import/export of their data; the ability to move their domain name and URL permalinks from one web host to another; and CMS (content management system) services/platforms/functionalities, represents the successful market mode for our personal data and online identities. Here, competition for these sorts of services will not only improve the landscape but will also tend to drive costs down for consumers. The internet landscape is developed and sophisticated enough, and broadly based on shared standards, that this mode of service market should easily be able to not only thrive, but innovate.

      At the other end of the spectrum, if our data are viewed as assets in an asset market between Facebook, Instagram, Twitter, LinkedIn, et al., it is easy to see that the market has already failed so miserably that one cannot even easily move one's assets from one silo to another. Social media services don't compete to export or import data because the goal is to trap you and your data and attention there; otherwise they lose. The market corporate social media is really operating in is one for eyeballs and attention to sell advertising, so one will notice a very healthy, thriving, and innovative market for advertisers. Social media users will easily notice that there is absolutely no regulation in the service portion of the space at all. This only allows the system to continue failing to provide improved or even innovative service to people on their "service". The only real competition in the corporate silo social media space is for eyeballs and participation, because the people and their attention are the real product.

      As a result, new players whose goal is to improve the health of the social media space, like the recent entrant Cohost, are far better off creating a standards based service that allows users to register their own domain names and provide a content management service that has easy import and export of their data. This will play into the services market mode which improves outcomes for people. Aligning in any other competition mode that silos off these functions will force them into competition with the existing corporate social services and we already know where those roads lead.

      Those looking for ethical and healthy models of this sort of social media service might look at Manton Reece's micro.blog platform, which provides a wide variety of these sorts of data services, including data export and taking your domain name with you. If you're unhappy with his service, then it's relatively easy to export your data and move it to another host using WordPress or some other CMS. On the flip side, if you're unhappy with your host and CMS, then it's also easy to move over to micro.blog and continue along just as you had before. Best of all, micro.blog is offering lots of the newest and most innovative web standards, including webmention notifications, which enable website-to-website conversations, micropub, and even portions of microsub, not to mention some great customer service.

      I like to analogize the internet and social media to competition in the telecom/cellular phone space. In America, you have a phone number (domain name) and can then have your choice of service provider (hosting), and a choice of telephone (CMS). Somehow, instead of adopting a social media common carrier model, we have trapped ourselves inside of a model that doesn't provide the users any sort of real service or options. It's easy to imagine what it would be like to need your own AT&T account to talk to family on AT&T and a separate T-Mobile account to talk to your friends on T-Mobile, because that's exactly what you're doing with social media despite the fact that you're all still using the same internet. Part of the draw was that services like Facebook appeared to be "free" and it's only years later that we're seeing the all too real costs emerge.

      This sort of competition and service provision also goes down to subsidiary layers of the ecosystem. Take, for example, the idea of writing interfaces and text editing. There are (paid) services like iA Writer, Ulysses, and Typora which people use to compose their writing. Many people use these specifically for writing blog posts. Companies can charge for these products because of their beauty, simplicity, and excellent user interfaces. Some of them either do or could support the micropub and IndieAuth web standards, which allow their users to log into their websites and post their saved content from the editor directly to their website. Sure, there are also a dozen or so other free micropub clients that allow this, but why not have and allow competition for beauty and ease of use? Let's say you like WordPress enough, but aren't a fan of the Gutenberg editor. Should you need to change to Drupal or some unfamiliar static site generator to exchange a better composing experience for a dramatically different and unfamiliar back end experience? No, you could simply change your editor client and continue on without missing a beat. Of course the opposite also applies—WordPress could split out Gutenberg as a standalone (possibly paid) micropub client and users could then easily use it to post to Drupal, micro.blog, or other CMSs that support the micropub spec, and many already do.

      Social media should be a service to and for people all the way down to its core. The more companies there are providing these sorts of services, the more competition there is, which will also tend to lure people away from silos where they're trapped for lack of options. Further, if your friends are on services that interoperate and can cross-communicate with standards like Webmention from site to site, you no longer need to be on Facebook because "that's where your friends and family all are."
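      The Webmention standard mentioned here is, at its core, just a small HTTP POST. The sketch below (with hypothetical URLs; the `webmention_payload` helper is illustrative, not part of any library) shows how one site would construct the notification body described in the W3C Webmention spec:

```python
from urllib.parse import urlencode

def webmention_payload(source: str, target: str) -> bytes:
    """Build the x-www-form-urlencoded body of a Webmention notification.

    Per the W3C Webmention spec, 'source' is the URL of the page doing the
    mentioning and 'target' is the URL being mentioned; this body is POSTed
    to the webmention endpoint advertised by the target site.
    """
    return urlencode({"source": source, "target": target}).encode("utf-8")

# Hypothetical URLs for illustration:
body = webmention_payload(
    "https://example.com/my-reply",
    "https://friend.example/original-post",
)
print(body.decode())
```

      The receiving site then verifies that the source page really does link to the target before showing the interaction, which is what lets site-to-site replies work without a central silo in the middle.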

      I have no doubt that we can all get to a healthier place online, but it's going to take companies and startups like Cohost making better choices in how they frame their business models. Co-ops and non-profits can help here too. I can easily see a co-op adding Webmention to their Mastodon site to allow users to see and moderate their own interactions instead of forcing local or global timelines on their constituencies. Perhaps Garon didn't think Webmention was a fit for Mastodon, but this doesn't mean that others couldn't support it. I personally think that Darius Kazemi's Hometown fork of Mastodon, which allows "local only" posting, is a fabulous little innovation while still allowing interaction with a wider readership, including me, who reads him in a microsub-enabled social reader. Perhaps someone forks Mastodon to use as a social feed reader, but builds in micropub so that instead of posting the reply to a Mastodon account, it's posted to one's IndieWeb-capable website, which sends a webmention notification to the original post? Opening up competition this way makes lots of new avenues for everyday social tools.

      Continuing the same old siloing of our data and online connections is not the way forward. We'll see who stands by their ethics and morals by serving people's interests and not the advertising industry.

    1. These people do believe that they can repeal the laws of physics. They can just write a law, stroke of the pen, law of the land, kind of cool, and repeal physics. This is why in a recent article I had to use the term "the tyranny of physics." Sorry guys, there's no getting around physics. I wish there were; maybe there are nuances to physics that we're yet to discover that will allow us to break certain ideas and certain rules, but we ain't got there yet. Whiskey costs money and oil costs money to get out of the ground, and the idea that they can put any kind of global price control, or even regional price control, on the price of oil is ludicrous. It's a big world out there and their enforcement arm isn't anywhere close to as developed as they think it is. These are ideas promulgated by people who literally think that if they speak magic words, the entire world, all seven billion of us, will listen. This is the takeaway from those statements: these people believe that, like Marduk as invoked by Jordan Peterson, they speak magic words and then reality is created.

      repeal physics

      marduk and the world is created

    1. Seokjin doesn’t ask how Namjoon got his number, and he doesn’t ask why he’s never texted before when they’ve been friends since they were five. He doesn’t ask if they could even be considered friends when they haven’t spoken in several months. He doesn’t ask if that was his fault, or if it was Namjoon’s.They talk everyday after that. Seokjin finds it a little weird but he doesn’t say it is to Namjoon. He responds to Namjoon and keeps up with the conversation without sounding bored. He isn’t bored. But he’s scared that he comes across like he is.It’s not that Seokjin doesn’t want to text with Namjoon – it’s just that it’s a little weird and Seokjin’s never really texted out entire conversations with his guy friends before because most of their conversations were had in group chats. This felt different. This felt like they were whispering to each other, like no one was meant to know what they’re talking about, or that they were talking at all.

      THIS I SO S FUCFKGNIDFNFN RELATBALEWKNFDBNB GOD OGD OGDO GOD GOD GOD GOD GOD GOD GOD GOD GOD GOD NO LIKE THE THING ABOUT CHILDHOOD FRIENDSHIPS. your friendship never relied on texts bc you were both so young and all of your conversations happen in person and usually in groups. growing older, texting them feels so wrong and intimate and scary especially after a really long time and FUCK

      also would like to add how Seokjin doesnt think theyre friends bc they havent spoken in months and namjoon acts like everything is normal between them. goodbyeeeeeeeeee me when object permanence

    2. Seokjin nods slowly. “Sure. Tell me what you want when you’ve decided on something,” he says, and he holds a hand out to Namjoon. Namjoon looks down at it like it’s something foreign for just long enough that Seokjin starts to feel awkward and wants to bring his hand back to himself before he finally reaches out and takes Seokjin’s hand. His grip is light. They shake hands.

      foreshadowingnfndfnm

    1. In “Big? Smart? Clean? Messy? Data in the Humanities”, Schöch suggests that metadata “describes aspects of a dataset” and gives a list of examples of what can be considered aspects. Categories like “the time of its creation”, “the way it was collected”, and “what entity external to the dataset it is supposed to represent” are translated into “creation_date”, “medium”, and “description” in the file.

      From all the readings this week, I am coming to the conclusion that metadata is just as good as the main data it is meant to describe. This is because metadata itself can be used as data on its own. For example, if the metadata for a collection of artwork is presented, and I were to sort the artworks by genre and then compare the dates of creation in order to find trends, the metadata would be the "main data" being used. This is to say that when it comes to using data, one has to think outside of the box and be creative in how they use it and what parts of it they use. It's like molding clay, many possibilities.
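      The point that metadata can serve as the "main data" can be made concrete with a small sketch. The records below are hypothetical; "creation_date" and "medium" echo Schöch's example fields, while "title" and "genre" are invented here for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical artwork metadata records (field names partly follow Schöch's examples).
artworks = [
    {"title": "A", "genre": "portrait",  "creation_date": 1503, "medium": "oil"},
    {"title": "B", "genre": "portrait",  "creation_date": 1889, "medium": "oil"},
    {"title": "C", "genre": "landscape", "creation_date": 1872, "medium": "oil"},
]

def mean_date_by_genre(records):
    """Group records by genre, then compare average creation dates --
    here the metadata itself is the data being analyzed."""
    by_genre = defaultdict(list)
    for rec in records:
        by_genre[rec["genre"]].append(rec["creation_date"])
    return {genre: mean(dates) for genre, dates in by_genre.items()}

print(mean_date_by_genre(artworks))
```

      Nothing about this analysis touches the artworks themselves; the descriptive records alone are enough to surface a trend.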

    1. "yin" hacking accepts the aspects that are beyond rational control and comprehension. Rationality gets supported by intuition. The relationship with the system is more bidirectional, emphasizing experimentation and observation. The "personality" that stems from system-specific peculiarities gets more attention than the measurable specs.

      This reminds me of my preferred approach to UI development. I can't get behind using design tools to completely architect solutions - I'd rather write the HTML myself and explore different elements, then pick the best ideas that emerge and continue with them! In this way embracing the idiosyncrasies of the web platform is a wonderful experience.

      However, I'm not sure how we tolerate system specific peculiarities in other ways - it just doesn't make sense to rewrite every program for every device everywhere, which is why we have abstractions and standards. Business logic should always work the way it's designed!

  5. Local file
    1. 'I don't think it's anything—I mean, I don't think it was ever put to any use. That's what I like about it. It's a little chunk of history that they've forgotten to alter. It's a message from a hundred years ago, if one knew how to read it.'

      Winston and Julia are examining a glass paperweight in George Orwell's 1984 without the context of what it is or how it was used.

      This is the same sort of context collapse caused by distance in time and memory that archaeologists face when examining found objects.

      How does one pull out the meaning from such distant objects in an exegetical way? How can we more reliably rebuild or recreate lost contexts?

      Link to:
      - Stonehenge is a mnemonic device
      - mnemonic devices in archaeological contexts (Neolithic carved stone balls)


      Some forms of orality-based methods and practices can be viewed as a method of "reading" physical objects.


      Ideograms are an evolution on the spectrum from orality to literacy.


      It seems odd to be pulling these sorts of insight out my prior experiences and reading while reading something so wholly "other". But isn't this just what "myths" in oral cultures actually accomplish? We link particular ideas to pieces of story, song, art, and dance so that they may be remembered. In this case Orwell's glass paperweight has now become a sort of "talking rock" for me. Certainly it isn't done in any sort of sense that Orwell would have expected, presumed, or even intended.

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



      Reply to the reviewers


      Manuscript number: RC-2022-01407

      Corresponding author(s): Ivana Nikić-Spiegel

      1. General Statements

      We would like to thank the reviewers for careful reading of our manuscript and for their insightful and useful comments. We are happy to see that the reviewers find these results to be of interest and significance. The way we understand the reviewers’ reports, their main concerns can be roughly divided into the following categories: (1) providing more quantitative data, (2) interpretation of the Annexin V/PI assay, and (3) additional evidence for calpain involvement. We intend to address these experimentally or by modifying the text, as outlined below.

      2. Description of the planned revisions

      Reviewer #1

      Fig1A/B • SYTO 16 staining suggests slight reshaping of the nucleus upon spermine NONOate, showing less blurry puncta. From the SYTO 16 profile, this should be quantifiable.

      By looking at the shown examples and the entire dataset, it appears to us as if neuronal nuclei are shrinking upon spermine NONOate treatment resulting in their less blurry appearance. We are not sure if this is what the reviewer is referring to, but this can also be quantified by measuring changes in neuronal nuclear size. We already have this data from the measurements shown in Fig4 and we intend to show it in the revised version of the manuscript. Line profile measurements are also possible, but the nuclear size quantification might be more suitable for this purpose.

      • There is a subset of neuron nuclei that are SYTO 16 positive. Please quantify the ratio

      We will use our existing dataset to quantify the ratio of NFL positive and SYTO16 positive nuclei.

      FigS1A • Show NeuN with anti-NFL merged figures

      We will show merged NeuN and anti-NFL images, which might require rearrangement of the existing figures and figure panels. We will do this in the revised manuscript.

      FigS1C • Show quantification and timeline. I want to know whether there is also a plateau reached here.

      As the data shown in the FigS1C do not include NeuN staining, we will do additional experiments and perform proposed quantifications.

      FigS2A-F • Though the statements might be true, selecting one nucleus for a line profile as a statement for the whole dataset seems problematic. Average a larger number of unbiased selected nuclei profiles across multiple cultures to make a stronger statement, or a percentage of positive nuclei as in FigS1b.

      Corresponding images and line profiles are representative of the entire dataset. However, we agree with the reviewer that this is not obvious from the current manuscript version. Thus, to strengthen our findings, we intend to quantify the percentage of positive nuclei as in FigS1b. The only difference will be that instead of NeuN, we will use SYTO16 as a nuclear marker. The reason being that the existing datasets contain images of NFL and SYTO16 and not NeuN.

      FigS3 • There are no fluorescence profiles, no quantification

      As the reviewer suggests, we will quantify the ratio of NFL positive and SYTO16 positive nuclei, and include the quantifications in the revised manuscript.

      General statement: There do seem to be punctate patterns of non-nucleus-accumulating NFL fragments. Can they be localized to any specific structure?

      We assume that the reviewer is referring to neuronal/axonal debris. They are present after injury but they do not colocalize with nuclear stains. We will address this in the revised manuscript.

      Fig1C-F • I find it too simplistic to categorize c+f and d+e together. There is a huge difference in the examples of nuclear localization between d and e. To not comment on their distinction (if that is consistent) is problematic. Also, since we don't see a merge with either NeuN or SYTO 16, reader quantification is difficult.

      We thank the reviewer for bringing this up. We will carefully check our entire dataset and we will update the figures and the text accordingly. We will also show the corresponding SYTO16 images, as the reviewer suggested.

      Would the microfluidic device construction allow for time to transport any axonally damaged fragments to the soma?

      Yes, the construction of the microfluidic devices allows the transport of axonal proteins back to the soma. Based on our experiments, it seems that damaged NFL from the axonal compartment could be contributing to the accumulation of NFL fragments in the nuclei. However, this contribution seems to be minimal as we cannot detect nuclear NFL upon the injury of axons alone. Alternatively, it could be that the processing of axonal NFL fragments proceeds differently if neuronal bodies are not injured and that this is the reason we don’t detect the NFL nuclear accumulation upon injury of axons alone. We will discuss this in the revised manuscript.

      Fig2C+D • The statement ".... no annexin V was detected on the cell membrane" needs to be shown more clearly

      We will modify figures to address this comment.

      • Please provide merged AnnexinV/PI images

      We will modify figures to address this comment.

      • The conclusion about 2D, that nuclear accumulated NFL overlaps with PI is not supported by the example image shown. There are plenty of PI positive spots that are not NFL positive and even several NFL positive ones that do not have a clear PI staining. Please quantify and then show a very clear result in order to be able to suggest necrosis as the underlying process.

      We are not sure if we understand the reviewer’s concern correctly. We will try to clarify it here and in the revised text. If necessary, we will tone down our conclusion, but the reason why not all PI positive spots are NFL positive is most likely that not all injured nuclei are NFL positive. We quantified in FigS1 that up to 60% of nuclei under injury conditions show NFL accumulations. That is why we are not surprised to see some PI positive/NFL negative nuclei. The fact that there are some NFL positive nuclei which appear to be PI negative is most likely related to the fact that the PI binding is affected. In addition, upon closer inspection of the NFL and PI panels in Fig2d it can be observed that NFL positive nuclei are also PI positive, albeit with a lower PI fluorescence intensity. We will modify the figure to show this clearly in the revised manuscript.

      FigS5 C+D • If the case is made that nitric oxide damage induces necrosis, then why is it that the AnnexinV example of Staurosporine exposure (which induces apoptosis) looks similar to that of nitric oxide damage in Fig2d and necrosis induction with Saponin looks very different?

      We thank the reviewer for bringing this up. We will try to clarify this in the revised manuscript. Regarding the specific questions, the most likely explanation why staurosporine treated neurons look similar to the ones treated with spermine NONOate is that in the late stages of apoptosis the cell membrane ruptures and allows the PI to label nuclei. This is probably the case here, as illustrated by the nucleus in the middle of the image (FigS5c) that shows the fragmentation characteristic of apoptosis. This is not happening in early apoptotic cells due to the presence of an intact plasma membrane. On the other hand, the reason why saponin treated cultures look different compared to spermine NONOate is that membranes are destroyed by saponin so that the PI can enter the cell. For that reason, there could not have been any AnnexinV binding to the membrane which would correspond to the AnnexinV signal of spermine NONOate treated neurons. As we will discuss below, we did not try to mimic spermine NONOate-induced injury with saponin treatment. Instead, this was a control condition for PI labeling and imaging. We also used a rather high concentration of saponin which probably destroyed all the membranes, which was not the case with spermine NONOate treatment. We intend to do additional control experiments to address this.

      • Additionally, does necrosis induction with Saponin also cause NFL fragment accumulation in the nucleus? Please show a co-staining of them. Also, the authors want to make a claim about reduce PI binding in NFL accumulated necrotic cells. In these examples, the intensity of the nuclear stain of PI with Saponin looks dimmer than with Staurosporine. Are the color scalings similar? It might be that the necrotic process itself causes reducing binding of PI and is not related to the presence of NFL.

      With regards to this question, it is important to note that Annexin V and PI imaging was done in living cells. To obtain the corresponding anti-NFL signal as shown in Fig 2c,d we had to fix the neurons, perform immunocytochemistry and identify the same field of view. We tried to do the same procedure after saponin treatment (Supplementary Figure 5d) but the correlative imaging was very difficult due to the detachment of neurons from the coverslip after the saponin treatment. For this reason, we could not identify the same field of view co-stained with NFL. However, other fields of view did not show NFL fragment accumulation. This could also be the consequence of the high saponin concentration that we used as we discuss above. We have also noticed the reduced intensity of PI binding in the nuclei of saponin-treated neurons. However, if the necrotic process itself reduces the binding of PI to the DNA, then all of the neurons treated with spermine NONOate would have an equally low PI signal. In our experiments, only the nuclei which contained NFL accumulations had a low PI signal, while the signal of NFL-negative nuclei was higher (as shown in Fig2d). We would also like to point out again that the saponin treatment was our control of the PI’s ability to penetrate cells and bind the DNA, as well as our imaging conditions, and not the control of the necrotic process itself. This is the reason why we didn’t go into details about neuronal morphology and NFL localization upon saponin treatment. We thank the reviewer for pointing this out since it prompted us to reevaluate what we wrote in the corresponding paragraph of the manuscript. 
We realized that the confusion might stem from our explanation of the AnnexinV/PI assay controls in the lines 196-198 (“Additional control experiments in which neurons were treated with 10 μM staurosporine (a positive control for induction of apoptosis) or with 0.1% saponin (a positive control for induction of necrosis) confirmed the efficiency of the annexin V/PI assay (Supplementary Fig. 5c,d).”). We will modify this portion of the text to clearly state that staurosporine and saponin treatments were controls of the AnnexinV and PI binding to their respective targets and not of the apoptosis/necrosis process. When it comes to the saponin treatment, our intention was only to permeabilize the membranes in order to allow PI penetration and DNA binding and not to induce necrosis or to mimic the effect of the spermine NONOate. We also intend to perform experiments with lower concentration of saponin to try to address this experimentally in addition to the text modifications.

      Fig3d • Please show similarly scaled images from controls for proper comparison

      We will show similarly scaled images of the control neurons so that they can be properly compared. They were initially not scaled the same for visualization purposes, but we will modify this in the revised manuscript.

      • How do the authors scale the degree and kinetics of induced damage between application of hydrogen peroxide/CCCP and glutamate toxicity? Does glutamate toxicity take longer to affect the cell, not allowing enough time to accumulate NFL fragments in the nucleus?

      It is challenging to scale the degree and kinetics of induced damage across different stressors, which is why we did not attempt to do so. Instead, we set the different injury conditions based on the published literature, and we can therefore only speculate on this point. In this regard, it may be that glutamate toxicity takes "longer" to affect the cells, even though it is very difficult to compare the stressors on a timescale, especially considering their different mechanisms of action. We will discuss this limitation in the revised manuscript.

      Fig4B • Some groups (like NO and NO + emricasan) have much larger numbers of close to 0 intensity, compared to the control group. Why?

      We wondered the same when we analyzed the data. The fact that our nuclear fluorescence intensity analysis picked up NFL signal in control neurons which had no nuclear NFL accumulation made us realize that the intensity measured in the nuclei of the control group comes entirely from out-of-focus fluorescence – from neurofilaments in cell bodies, dendrites and axons (an example can be seen in Fig. S6). That is why we presented the corresponding data with a cut-off value based on the control signal (as mentioned in lines 238-240). Since the oxidative injury causes NFL degradation (not only in the neuronal soma, but also in neuronal processes), the overall fluorescence intensity of the NFL immunocytochemical staining is reduced in injured neurons. We can see that in all of our images. Consequently, there is no contribution of out-of-focus fluorescent signal to the measured fluorescence intensity in the majority of nuclei. Due to that, the nuclei without NFL accumulation (at least 40% of injured nuclei) will appear to have a close-to-0 fluorescence intensity. We will discuss and clarify this additionally in the revised manuscript.
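To illustrate the cut-off approach described above (the variable names and intensity values below are hypothetical, not taken from our actual analysis), classifying injured nuclei against a threshold derived from the control group's out-of-focus signal could be sketched as:

```python
# Hypothetical nuclear fluorescence intensities (arbitrary units).
control_intensities = [110.0, 130.0, 95.0, 120.0]   # no nuclear NFL accumulation
injured_intensities = [20.0, 15.0, 310.0, 280.0]    # mixed population after injury

# Cut-off: the highest intensity seen in control nuclei, i.e. the maximum
# signal attributable to out-of-focus neurofilament fluorescence alone.
cutoff = max(control_intensities)

# Only injured nuclei exceeding the cut-off are counted as NFL-accumulating.
accumulating = [i for i in injured_intensities if i > cutoff]
print(f"{len(accumulating)} of {len(injured_intensities)} injured nuclei above cut-off")
# prints: 2 of 4 injured nuclei above cut-off
```

Nuclei whose NFL signal was degraded during injury fall below the cut-off, which is consistent with the close-to-0 intensities discussed above.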

      • Please add the ratio of above/below threshold (50/50 obviously in controls)

      We will update the figure in the revised manuscript.

      • The description of the CTCF value calculation seems a little... muddled? Several parameters are described whereas "integrated density" is not even used. Why not simply mean intensity of nuclear ROI-mean intensity of background ROI?

      We included integrated density in the description since it is measured together with the raw integrated density and can also be used for the CTCF value calculation. However, since we did not use it for the CTCF calculation, we will remove it from the corresponding section of the manuscript. We calculated the CTCF value instead of simply subtracting the mean intensity of the background ROI from the mean intensity of the nuclear ROI, because the CTCF value also takes into account the area of the ROI and not just the mean intensity.
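For reference, the CTCF calculation follows the common ImageJ convention (raw integrated density of the ROI minus the background contribution expected over the same area); a minimal sketch, with hypothetical measurement values, is:

```python
def ctcf(raw_integrated_density, roi_area, mean_background):
    """Corrected total cell fluorescence (CTCF): the raw integrated density
    of the nuclear ROI minus the background contribution expected over an
    ROI of the same area."""
    return raw_integrated_density - roi_area * mean_background

# Hypothetical values for one nucleus: ImageJ's RawIntDen of the nuclear
# ROI, the ROI area in pixels, and the mean intensity of a background ROI.
print(ctcf(raw_integrated_density=52000.0, roi_area=400.0, mean_background=20.0))
# prints: 44000.0
```

Because the ROI area enters the formula directly, two nuclei with the same mean intensity but different areas yield different CTCF values, which is why we preferred it over a simple mean-minus-mean subtraction.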

      • Also, please tell me if the areas for nuclear ROIs change, as I noted for Fig1A/B

      We will include this information in the revised manuscript.

      • To make sure that one of the 3 experimental repeats didn't skew the results, please show the median fluorescence intensity for each individual experiment to clarify that the supposed effect is repeated across experiments.

      We have already noticed that in the earliest of the three experiments overall fluorescence intensity was higher, but this was consistent across all the experimental groups and did not skew the results or affect the overall conclusion. However, we will double-check this and revise the figure.

      • From the text "...and due to the NFL degradation during injury...": this seems to contradict the process? Either the NFL fragment accumulates in the nucleus or it is degraded during injury. And isn't the degradation through calpain what supposedly allows this fragment of NFL to go to the nucleus in the first place? I reckon that the authors are possibly trying to reconcile why there are many close-to-0 intensity nuclei in the NO and NO + emricasan groups, but I don't feel the explanation given here fits.

      As we tried to explain in our response above, we think that the overall degradation of neurofilaments in neurons affects the fluorescence intensity originating from the out of focus neurofilaments. Therefore, the nuclei without NFL accumulation in injured conditions have a close to 0 fluorescence intensity. Additionally, we think that this is not an either/or situation, but that both degradation and nuclear accumulation of NFL happen simultaneously. We also think that degradation of axonal NFL and the transport of its tail domain to the soma will at least partially contribute to the accumulation in the nucleus. In any case, degradation and nuclear accumulation seem to be differentially regulated in individual neurons, as some of them show nuclear NFL accumulation and some not. Furthermore, calpain and other mechanisms could also cause NFL degradation up to the point at which these fragments can no longer be recognized by the anti-NFL antibody leading to the loss of signal. We will try to clarify this in the revised version of the manuscript.

      Fig5 • Does the distribution of this GFP in B match any of the various antibody stainings of different NFL fragments? Perhaps this is still a valid fragment of NFL, just not picked up by any AB?

      The GFP signal in B appears rather homogenous and it does not match any of the various antibody stainings of different NFL fragments. As the reviewer points out, this could also be a valid fragment of NFL fused to GFP that none of our antibodies is recognizing. We will clarify this in the revised manuscript.

      • "... and was indistinguishable from the full277 length NFL-GFP." Based on what parameters?

      We will clarify this in the revised text, but we meant in terms of overall neurofilament network and cell appearance, which is commonly used to test the effect of NFL mutations.

      • The authors claim that b is different from d, but I am not convinced. I would like to see a time dependent curve from multiple cells showing a differential change in nuclear and cytosolic GFP signal.

      As we also wrote in the manuscript, in the majority of neurons that were monitored during injury we were not able to detect an increase in the GFP fluorescence intensity in the nucleus. This is what prompted further experiments with NFL(ΔA461–D543)-FLAG. We will clarify this additionally in the revised manuscript and perform line profile intensity measurements to show the difference in nuclear and cytosolic GFP signal.

      • Secondly, the somatic GFP intensity for NFL increases for full length NFL-GFP. How is this explained, if it is only a separation of NFL and GFP? If anything, GFP should float away. And if the answer is that NFL is recruited to the nucleus, you showed that inhibition of calpain activity partially prevents that. So, if calpain activity is necessary for the transport of NFL to the nucleus, then wouldn't it also cut the GFP from NFL before it reaches the nucleus?

      We thank the reviewer for bringing this up and apologize for the confusion. This is explained by the fact that the images were scaled so that the GFP signal could still be seen easily over time (i.e., scaled differently across time points), which we unfortunately forgot to mention in the figure legend. In the revised manuscript, we will either scale the images identically or show the displayed grey values in the individual panels.

      Fig6 • It is recommended to overlap the transfected cells with a stain for endogenous NFL to show that despite the absence of the FLAG-tag, there is still NFL.

      We did not overlap the anti-NFL with anti-FLAG and SYTO16 staining, due to the space constraint and the intent to clearly show the overlap of FLAG and SYTO16 signals in the merged images above the graphs. However, the line profile intensity measurements were done in all three channels and show that despite the absence of FLAG, there is still NFL in the nucleus (Fig6b), or that both FLAG and NFL are present in the nucleus (Fig6d, NFL signal shown in gray). However, as this is not obvious and can easily be overlooked, we will show the endogenous NFL staining overlap in the revised version of the manuscript.

      Fig7 • „ ...all disrupted neurofilament assembly...": this sounds like the staining for native NFL supposedly shows a distortion due to a dominant negative effect of the expression of these constructs? Please clarify.

      Yes, we were referring to the disruption of neurofilament assembly due to a dominant negative effect of the expression of NFL domains. We will clarify this in the revised version of the manuscript.

      Discussion: • The authors show that after overepression of the head domain only, it possibly passively diffuses into the nucleus even in the absence of oxidative injury. However, it seems to be suggested as well that the head domain would not be freely floating around if it wouldn't be for increased calpain activity as a result of oxidative injury in the first place. Therefore, a head domain fragment localized in the nucleus would still more prominently happen upon oxidative injury and interact with DNA through prior identified putative DNA interaction sites from Wang et al. Please comment.

      That is correct. Upon injury and calpain cleavage, it is conceivable that a fragment containing the NFL head domain would also be present in the cell and could potentially diffuse to the nucleus and interact with the DNA. However, by staining injured neurons with an antibody that recognizes amino acids 6-25 of the NFL head domain, we were not able to detect an NFL signal in the nucleus (FigS2a,b). It could be that either the NFL head domain does not localize in the nuclei upon injury, or that the fragment localizing in the nucleus does not contain amino acids 6-25 of the NFL head domain. As the putative DNA-binding sites described by Wang et al involve 7 amino acids located in the first 25 residues of the NFL head domain, we would expect to detect it with the aforementioned antibody. However, as that was not the case we speculated that the interaction of NFL and DNA occurs differently in living cells, as opposed to the test tube conditions utilized by Wang et al. We will comment and clarify this in the revised version of the manuscript.

      Reviewer #2

      Major Comments:

      1. The initial data presented in the paper is good: it addresses the response to oxidative damage with proper controls, tests the antibodies to NF-L, etc. (Fig. 1-Fig. 4).

      We thank the reviewer for their positive feedback.

      2. The evidence for calpain involvement in NF-L cleavage during oxidative damage is missing. Provide evidence for the full-length NF-L construct and deletion mutants transfected into cells by immunoblot for cleavage of NF-L; perform nuclear and cytoplasmic extract preparations and show enrichment of the tagged cleaved NF-L fragment in the nuclear fraction.

      We thank the reviewer for their comments and suggestions. Since we saw in our microscopy experiments that calpain inhibition reduced the accumulation of NFL in the nucleus, and since it is known that NFL is a calpain substrate (Schlaepfer et al., 1985; Kunz et al., 2004 and others), we did not perform additional experiments to confirm the involvement of calpain in NFL degradation during injury. However, to strengthen our findings, we intend to perform the suggested experiments and include the results in the revised manuscript.

      3. Show calpain activation during oxidative damage by performing alpha-spectrin immunoblots to identify calpain-specific 150-kDa and caspase-specific 120-kDa spectrin fragment generation in these cells. Also, calpain activation can be measured by MAP2 level alteration and p35 to p25 conversion. Without this evidence it is very hard to tell whether calpain activity is increased or decreased during oxidative damage and whether these markers are altered by using calpain inhibitors.

      To confirm the calpain activation, we intend to perform anti-alpha spectrin and/or anti-MAP2 blots in lysates of control and injured neurons and include the results in the revised manuscript.

      4. The premise that NF proteins are absent in cell bodies and present only in axons is not correct. It has been demonstrated by multiple investigators that NFs are present in the perikaryon and dendrites of many types of neurons (Dahl, 1983, Experimental Cell Research). Dr. Ron Liem's group showed NF protein expression in cell bodies of dorsal root ganglion cells (Adebola et al., 2015, Human Mol Genetics) and also showed that N-terminal antibodies for NF-L, NF-M and NF-H stain rat cerebellar neuronal cell bodies and dendrites when NFs are less phosphorylated (Kaplan et al., 1991, Journal of Neuroscience Research). Schlaepfer et al. (1981, Brain Research) show staining of cortical and dorsal root ganglion cell bodies with the NF antibody Ab150, and Yuan et al. (2009) show the same in mouse cortical neurons with GFP-tagged NF-L.

      We are not sure what the reviewer is referring to since we cannot find a corresponding section in which we claim that NF proteins are absent in cell bodies. We wrote the following “Anti-NFL antibody staining of neurons treated with the control compound showed the expected neurofilament morphology, that is, a strong fluorescence intensity in axons and lower intensity in cell bodies and dendrites (Fig. 1a)” in our results section (lines 119-121), but the claim we were trying to make there was that NF proteins are particularly abundant in axons. We will clarify this in the revised manuscript.

      5. Quantifying NF-L signal or tagged NF-L fragment signals in the cell body by ICC, and drawing conclusions from it, has many problems. It is extremely difficult to control protein levels in transfected overexpression models and to compare two or three different constructs with each other by ICC. Not every transfected cell expresses the same level of protein, so quantifying it by ICC is again a major problem. This can be addressed with stable lines that express equal levels of protein in all cells, so that comparisons can be made. Under these circumstances, the hypothesis presented in the study has no strong direct evidence demonstrating that calpain is activated and that the NF-L fragment translocates to the nucleus.

      We agree that results from overexpression-based experiments should be interpreted with caution, as expression levels vary between cells. We intend to discuss this in the revised manuscript. However, we find it difficult to experimentally address this comment since we are not sure which specific experiments the reviewer is referring to. With regards to this, we would like to emphasize that most of the initial experiments in which we observed NFL accumulation in the nuclei of injured neurons were based on ICC labeling of endogenous NFL and did not involve its overexpression. This includes labeling of endogenous NFL in various types of neurons, comparing the effects of different types of oxidative injury, as well as testing the effects of calpain inhibition on the observed nuclear accumulation (Figures 1-4; Supplementary Figures 1-6). We later resorted to overexpression experiments in primary neurons (Figures 5-7; Supplementary Figures 7, 10) to gain more information about the identity of the NFL fragment detected in the nucleus. Due to the low transfection efficiency of primary neurons, we performed an additional set of overexpression experiments in neuroblastoma ND7/23 cells (Figure 8; Supplementary Figures 8, 9) and obtained similar results in a higher number of cells. We agree that stable cell lines which express the same levels of the NFL domains would be a more elegant approach, and we intend to generate them for our follow-up studies; however, the generation of such stable cell lines might be beyond the scope of this revision. Furthermore, looking at our data on overexpression of NFL domains in ND7/23 cells (Supplementary Figures 8, 9), the different domains appear to be rather homogeneously expressed across cells. While expression levels might vary, all domains show the same trend in their localization (which was the main point of those experiments).

      6. The interpretation that NF-L prevents DNA labeling of cells is a misinterpretation. NFs have a very long half-life compared to other proteins. Due to oxidative damage, DNA is degraded in the cells, but the long-lived NFs are seen as NF rings in the dead cells. So, NFs do not prevent DNA labeling; rather, DNA or chromatin is degraded in dead cells.

      We thank the reviewer for their useful insight. DNA degradation could certainly be the reason why we observe a lower fluorescence intensity of the propidium iodide fluorescence in the nuclei of injured neurons. We intend to discuss this in the revised manuscript. However, if the DNA degradation is the only reason for the lower PI fluorescence intensity, then the PI fluorescence intensity would be the same in all injured nuclei. In our experiments, we saw the reduced PI fluorescence intensity in nuclei that contained NFL accumulations and not in other nuclei. Additionally, we observed a reduction of SYTO16 fluorescent labeling of nuclei which contained accumulations of the NFL tail domain, even in the absence of oxidative injury. Due to these reasons we speculated that NFL accumulation in the nucleus might hinder nuclear dyes from interacting with the DNA. But this is only a speculation and we will try to clarify this further in the revised manuscript including alternative explanations.

      Minor comments: 1. In the introduction on page 4, a reference is missing for NF transport, aggregation and perikaryal accumulation (line 93).

      We will add a reference to the revised manuscript.

      2. The statement in the discussion on page 14, line 454, about the Zhu et al., 1997 study is not accurate. It should be modified to sciatic nerve crush, not spinal cord injury.

      We will correct this mistake in the revised manuscript.

      3. What is the size of the calpain-cleaved NF-L tail domain? Performing immunoblots on cell extracts treated with oxidative agents would answer this.

      We will perform immunoblots on cell lysates and incorporate the corresponding results in the revised manuscript.

      4. The authors could make their conclusions clearer. This is particularly true for the experiments in Figure 4, panels c and d, where it is very difficult to understand the conclusions. First state the expectation, and then describe whether the expectation was met or not.

      We will do as the reviewer suggested in the revised manuscript.

      5. The ICC images are at extremely low magnification. They should be shown at 100x or 120x so that details of the cell body and the nucleus can be seen.

      Our intention was to show larger fields of view and wherever appropriate insets, but we will try to improve this in the revised manuscript by either zooming in, cropping or adding additional insets with individual cell bodies and nuclei. In general, images were taken with an optimal resolution/pixel size in mind for any of the used objectives (60x/1.4 NA or 100x/1.49 NA) and we can easily modify our figure panels to show more details.

      6. Oxidative damage leads to beaded accumulation of NF-L in neurites and axons. The authors should address this issue.

      We will discuss this in the revised manuscript.

      7. The combination treatments of the inhibitors (last 3 sets of Fig. 4b) have no statistical significance and should be removed.

      Actually, these differences were statistically significant (Supplementary Table 1). For clarity and as described in the figure legend (line 516: “The most relevant significant differences are indicated with an asterisk”) we showed only a subset of them on the graph, but we will change this in the revised manuscript.

      8. Why do only two antibodies recognize cleaved NF-L? If the antibodies are directed at the tail region, they should recognize it, unless phosphorylation of the tail at Ser473 inhibits antibody binding. In that case, an NF-L Ser473-specific antibody (EMD Millipore: MABN2431) may be used to test this idea.

      This is a very good point that we also wonder about. Even if all antibodies are directed at tail region, exact epitopes are not described for all of them. That makes it also difficult for us to understand and speculate on this. However, we have already ordered the new antibody as suggested by the reviewer and we will experimentally test it.

      **Referees cross-commenting**

      I agree with the reviewer#1 about presenting the quantification data for the indicated figures to make conclusions strong and see how much of variation is there among sampled cells.

      As discussed in our response to reviewer #1, we will provide additional quantifications.

      3. Description of the revisions that have already been incorporated in the transferred manuscript

      4. Description of analyses that authors prefer not to carry out

      Reviewer #2, major comment 7. Authors could do chromatin immunoprecipitation (ChIP) analysis to identify NF-L binding sites on chromatin and perform gel shift assays to show NF-L tail domain binding to specific consensus DNA sequences.

      We thank the reviewer for their suggestion. We are very interested in performing additional experiments and identifying the NFL binding sites on the DNA (either by chromatin immunoprecipitation or DamID-seq) and we intend to perform these experiments as soon as possible. Unfortunately, at the moment we do not have the expertise to perform such experiments in our lab. Instead, this type of follow-up project requires establishing a collaboration which is beyond the scope of this revision.

  6. Jun 2022
    1. Can someone explain to me the relationship between Luhmann's numbering and the "categories" of Wikipedia (1000-6000)? I can't find the video where Scott explains that the first number used by Luhmann for the entry note is in the thousands and that it indicates a general category.

      Since I just happen to have an antinet laying around 🗃️😜🔎 I can do a quick cross referenced search for antinet, youtube, and numbering systems to come up with this: https://www.youtube.com/watch?v=MrjUg4toZqw.

      Hopefully it's the one you're looking for (or very similar to it).

      Since it was also hiding in there in a linked card, an added bonus for you:

      "Here I am on the floor showing you freaking note cards, which really means that I have made it in life." —Scott P. Scheper

    1. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      Summary:

      Here, the authors investigate mitochondrial x nuclear x diet interactions in Drosophila melanogaster. They start by constructing D. melanogaster populations comprising fully-factorial combinations of mitochondrial and nuclear genomes from Australia, Benin, and Canada. This creates discrete populations with distinct combinations of mitochondrial and nuclear genomes. These populations are then exposed to various diets, notably a control diet, a diet high in essential amino acids (which promotes fecundity), and a diet high in plant-based lipids (which represses fecundity). They first screened for evidence of repeatable mitonuclear effects on fecundity, which they found. They then screened additional fitness traits, looking for the influence of mito-nucleotype on the response to chronic vs. parental dietary changes. Once again, they were able to find evidence of phenotypic variation dependent on mito-nucleotype; however, the effect of mito-nucleotype on traits was variable. Certain mito-nucleotypes exhibited progeny counts ranging from reduced to near-lethal after amino acid feeding (which is thought to promote fecundity), while others exhibited normal counts. When quantifying the size of various effects, they found that mitonucleotype interactions often had effect sizes comparable to those of diet:mitotype interactions, diet:nucleotype interactions, and diet on its own. Finally, they were able to associate this mito-nucleotype interaction with an mt:lrRNA C/T polymorphism that had nucleotype-dependent effects on fertility.

      Major Comments:

      I find the key conclusions convincing, and I feel as though the authors sufficiently show that there are mitonucleotype x diet interactions on fecundity/fertility traits. Furthermore, their ability to associate the phenotypic variation with an mt:lrRNA polymorphism is also of interest. I do not feel as though the authors make claims that lack support in the paper. I do not feel as though additional experiments would be essential to support their claims; however, I would be remiss not to acknowledge that the bulk of the experimental results came down to a 2 x 2 population exploration (Australia-Australia, Australia-Benin, Benin-Australia, Benin-Benin). This study would have been aided by more mitonucleotypes; however, I understand that generating additional mitonucleotypes would have taken additional time and resources. As most of my comments are minor and should be easily addressed by the authors, I place them in the following section.

      Minor Comments:

      Below are minor comments and questions I had while reading the manuscript. I apologize in advance if any of my questions are answered in the main text, and I missed them while reviewing.

      • Page 3, line 35 - you state that variation in mitochondrial function can contribute to variation in dietary optima. I would add a citation here.

      • Page 4, line 74 - If possible, I would add a little more clarifying detail regarding the crossing scheme to the main text. I would just be clear that the generation of these mitonucleotypes takes advantage of the maternal transmission of the mitochondrial genome by crossing virgin females from population 1 to males from population 2, followed by continual backcrossing of virgin females each generation to males from population 2. You go into specific details in the materials & methods section, but I was surprised not to see it earlier. I know this is probably more detail, but I think it's better to be very clear about the crossing scheme used.

      • Page 5, line 90 - I would move the citations for the fact that enriching essential amino acids promotes fecundity from the supplement to the main text. You say that this is an established manipulation, and it would be best to point to examples of this manipulation, especially as some of your results find that fertility is negatively affected.

      • Page 8, line 129 - In this paragraph, you speak about the impacts of chronic EAA feeding in the AA, BA, and AB mitonucleotypes, but you do not comment on BB. Is there a reason for this? (This connects to one of my comments later.)

      General Questions - This may be outside the scope of your study, but I was surprised that there was no commentary on co-adaptation (or the potential lack thereof) between the mitochondrial and nuclear genome in the discussion. It's not immediately clear to me how isolated the Australian and Benin populations are, or for how long they have been, but I would expect there to be co-evolution/co-adaptation between the mitochondrial and nuclear genomes. Consequently, I would have expected AA and BB populations to show elevated fitness values; however, it actually seems as though the AA mitonucleotype performs the worst when given the chronic EAA diet. I wonder if you could comment on the co-evolutionary potential between these two genomes.

      Significance

      • It is becoming increasingly clear when studying mitochondrial x nuclear interactions that the environmental context of the study can significantly influence the results (see Camus et al. 2019, Rodriguez et al. 2021, and Towarnicki and Ballard 2018). This study furthers our understanding of mitochondrial x nuclear x environment interactions by exploring fully-factorial combinations of mitonucleotypes on two distinct diets (enriched for essential amino acids or plant-based lipids) and evaluating the fertility/fecundity of the different mitonucleotypes. They were able to identify a single mt:lrRNA SNP that was associated with the measured phenotypes, raising the question of whether and how mitochondria-derived regulatory factors influence phenotypic variation. I also find the comparison to the omnigenic model compelling, particularly their commentary that mitochondrial genes could contribute to the "core gene" set for several notable phenotypes.

      • I believe that this work would be of interest to those obviously studying the evolution and importance of mitochondrial x nuclear x environmental interactions, and I also believe that this work would be of interest to those interested in the effects of various nutrients/diets in both Drosophila and humans.

      • My expertise is as a theoretical population and evolutionary geneticist. I am primarily interested in genetic conflict, including between the mitochondrial and nuclear genome. I am interested in how these two genomes evolve to both cause and resolve genetic and sexual conflict.

      Population control groups on the left often will claim that they are concerned with eliminating gender and economic inequalities, racism and colonialism, but since these organizations address these issues through a problematic over-population paradigm, inevitably their efforts are directed toward reducing population growth of all peoples in theory and of people of color in reality.21

      GRADED

      C: I think it's funny how there is an actual "population control group" whose job is reducing population growth among people of color, but they try to disguise their malicious intentions by stating that they are concerned with eliminating gender and economic inequalities, racism and colonialism.

      Q: How can they say they are trying to reduce racism when they are being racist by focusing on reducing the population growth of people of color?

      That's just BS; this article stated, "For instance, a study published in 2001 claimed that the 1973 legalization of abortion permitted poor women of color to terminate their pregnancies, thereby preventing the birth of their unwanted children who were likely to have become criminals".

    1. (anyway, it's interesting that these days there isn't really a strong brand association for sunscreen, at least not in the US. It's one of those rare fully genericized products; no one cares what brand of sunscreen they get, it could be the cheapest possible product on the shelf as long as it has a high SPF. One cynical point of view would be that this is a market niche just waiting for someone with a clever marketing idea to swoop in and make a name for themselves.

      I don't think this is true! A lot of people care about sunscreen in three ways:

      1. People who wear makeup have a real hard time with it not playing nicely with SPF, especially those higher SPFs, especially especially if water resistant. The exact texture ends up being important.
      2. There are real safety concerns and then also people getting ridiculous about particular ingredients, so some people get really picky about what's in their sunscreen.
      3. Fragrance. I cannot stand the smell of a lot of sunscreen, but I am a Real Pale Person so I can't just ignore it. Hawaiian Tropic has a loud floral scent to cover it up; the Supergoop play stuff smells like it's trying to cover it up with lemon candy; there are Whole Foodsy herbal variants; etc. etc.

      Sunscreen is one of the harder things to launch because of the testing involved; innovative formulations available in Asia and Europe are often unavailable in the US for that reason. However, there definitely still is someone trying to make a brand out of it: the Poolsuite.FM people have launched Vacation sunscreen (full screen video so won't play nice with that mobile data limit).

    1. We must hide "technical" information that might overwhelm our users

      Why do applications change without informing the user?

      I suspect that many of these "silly" or "fake" bug fixes are to cover up serious engineering errors - if the app admits flaws or faults, people will be less likely to use it going forward. Strict improvements are fine to post in "what's new", but bugs? Given that end users can't do anything about them, it makes sense to obfuscate them - not to avoid overwhelming consumers, but rather to save face and prevent loss of sales.

      Sorry, but the angle here likely isn't to obfuscate changes from the user - at least in this specific message and others like it. It's to obfuscate frightening flaws stemming from terrible software development practices from end users. (Sometimes, also, teams just don't know what was wrong but found a fix for it in a roundabout fashion - it crashes daily? cron job! - and I would honestly rather not know how often this is done).

    1. Reviewer #2 (Public Review):

      Here the authors address the connections by which medial prefrontal cortex (PFC), a frontal brain area that provides input to the limbic system, targets nucleus accumbens (NAc, a ventral striatal region) and the ventral tegmental area (VTA, a midbrain dopaminergic region involved in reward). They combine chemical retrograde tracers and conditional viruses to study connectivity. The data suggest that PFC projections to NAc and VTA are mostly from separate cell types, since these outputs (a) originate in different layers of PFC, (b) express different biochemical markers (making these cell types molecularly distinct), and (c) have minimal overlap (in, for example, double retrograde label experiments). Thus, the authors show that PFC outputs to two different limbic system components come from two different parts of the PFC circuitry, thus potentially conveying different information to subcortical brain areas.

      The study is done with high technical precision. The figures convey the findings and the clarity of thought effectively. Overall, I am convinced that the data support the conclusion that NAc and VTA projecting cells within PFC have "different laminar distribution (layers 2/3-5a and 5b-6, respectively) and ... different molecular markers". The larger claim that the authors "deliver a precise, cell- and layer-specific anatomical description of the cortico-mesolimbic pathways" is mostly accomplished. I feel this would be stronger if the different regions of the PFC were treated as distinct instead of one entity, as the output to VTA and NAc in each of the (potentially different) areas (PrL, IL, Cg, MO, &c) might differ in cell type and layer, and such regional differences across cortex are addressed in other studies. This study contributes to understanding the anatomy underpinning earlier work that addresses the distinct functional and behavioral roles of NAc and VTA-projecting neurons.

      The scope of this work is somewhat smaller than the recent reconstruction and molecular subtyping of ~6300 neurons performed by others: a comprehensive paper on this very topic ("Single-neuron projectome of mouse prefrontal cortex", Nat Neurosci 25:515):<br /> https://www.nature.com/articles/s41593-022-01041-5<br /> These reconstructed individual axons originate across a range of mouse PFC areas, and the paper quantifies their targets and classifies them into 64 cell types defined by projection class. To some extent, this covers the IT types that project to striatum/nucleus accumbens (Fig 3) and provides a distribution of where they reside within the different PFC areas (Al, MO, M2, etc ... ). (See the top and bottom of figures 3a and 4a for many PT-types). Furthermore, the full transcriptome of all these cell types is examined and compared to projection pattern (Fig 7+). For what it's worth, playing with the visualization tool confirms the main points in the current manuscript:<br /> https://mouse.braindatacenter.cn/<br /> Displaying cells in a given IT-type projection group (group 21, I tried the first 20 cells) that project to NAc, confirms they don't project to VTA. Displaying cells in a given PT-type projection group (group 57, I tried the first 20 cells) that project to VTA, confirms they don't project to NAc. Just blown away by this, didn't even take 15 minutes to use it. I am not sure how to suggest that the current paper address this (e.g. how can they differentiate what they're showing from this somewhat complete projectome of IT and PT-type cells?), but this work should at least be pointed to in terms of addressing many of the same issues. If a counter-example to the current work is desired, look at cell group 59 (Fig 4a suggests this population projects to both ACB and VTA; examination of these cells with the visualization tool suggests there are ~122 examples of cells that project to both to some small extent.)

      Major points:<br /> The PFC areas studied here may include a heterogeneous group that differs in stereotaxic location, laminar organization, and projection pattern. In "Anatomical considerations" in the discussion, Line 547: "the exact definition of the PrL subregions greatly varies between publications, just like the distinction between dorsal and ventral mPFC. Such inaccuracies can contribute to the still abundant contradictions in the literature and complicate the proper interpretation of the results." I agree. But what I think is lacking here is a characterization in a region-by-region manner of the laminar organization of the cell types you either identify by retrograde label (CAV-Cre anatomy, for example) or by molecular approaches (how the lamination of Ntsr1+ neurons varies between the areas you lump together here as PFC).

      I think this subdivision might help by defining these areas in stereotaxic coordinates and giving some idea of how defined cell types (defined by Cre driver or retrograde label or other marker) might vary in their laminar distribution across these areas. Maybe I am wrong, but my perception of Fig 1 and 2 is mainly that the laminar pattern of cortical labeling from VTA and NAc varies somewhat depending on where you assess it in cortex.

      The degree to which the result is novel depends somewhat on the credence given to prior efforts to unravel this connectivity (Line 494-502). In addition to the single axon reconstructions mentioned above, retrograde tracing with CAV-Cre (and HSV-flp) suggested that the PFC populations projecting to VTA and NAc were anatomically and molecularly distinct (Kim et al., 2017 Cell), with the VTA projections originating from neurons in deeper layers (further from the midline, Fig. 1) - as shown here. They do show that mPFC output has unique laminar origin (PFC-to-NAc is L5A, PFC-to-VTA is L5B, the retrograde tracing of Fig 1) and some molecular differences (VTA outputs express CTIP2, TCERG1L, and CHST8; NAc outputs express NPTX2, NRN1, and SCCPDH). This work devotes far less effort to the anatomical characterization that is presented quite beautifully here, instead addressing behavioral roles for these populations. Further, there is some prior work to suggest overlap in a subset of layer 5 cells (for example, NAc and VTA projecting neurons shown in rats (not mice as here); Gao et al., 2020, Neurobiol Dis.).

    1. There is no single right way to build a Second Brain. Your system can look like chaos to others, but if it brings you progress and delight, then it’s the right one.

      All this description and prescription, then say this?!

      I'll agree that each person's system should be their own and work for them, but it would have been more helpful to have this upfront and then to have looked at a broad array of practices and models for imitation to let people pick and choose from a variety of practices instead of presenting just one dish on the menu: P.A.R.A. with a side of C.O.D.E.

    2. As powerful and necessary as divergence is, if all we ever do is diverge, then we never arrive anywhere.

      Tiago Forte frames the creative process in terms of divergence (brainstorming) and convergence (connecting ideas, editing, refining), a framing which emerged out of the Stanford Design School and was popularized by IDEO in the 1980s and 1990s.

      But this is just what the more refined practices of maintaining a zettelkasten entail. It's the creation of profligate divergence forced by promiscuously following one's interests and collecting ideas along the way, interspersed with active and pointed connection of ideas slowly creating convergence of these ideas over time. The ultimate act of creation finally becomes as simple as pulling one's favorite idea of many out of the box (along with all the things connected to it), editing out any unnecessary pieces, and then smoothing the whole into something cohesive.

      This is far less taxing than sculpting marble, where one needs to start with an idea of where one is going and then needs the actual skill to get there. Doing this well requires thousands of hours of practice at the skill, working with smaller models, and eventually (hopefully) arriving at art. It's much easier if one has the broad shapes of the entirety of Rodin, Michelangelo, and Donatello's works in their repository and can simply pull out one that feels interesting and polish it up a bit. Some of the time necessary for work and practice is still there, but the ultimate results are closer to guaranteed art in one domain than the other.


      Commonplacing or slipboxing allows us to take the ability to imitate, which humans are exceptionally good at (Annie Murphy Paul, link tk), and combine those imitations in a way to actively explore and then create new innovative ideas.

      Commonplacing can be thought of as lifelong and consistent practice of brainstorming where one captures all the ideas for later use.


      Link to - practice makes perfect

    3. How to Resurface and Reuse Your Past Work

      Coming back to the beginning of this section. He talks about tags solely after the fact, instead of applying them while taking notes on the fly. While it might seem that he would have been using tags as subject headings in a traditional commonplace book, he really isn't. This is a significant departure from the historical method!! It's also ill-advised not to tag/categorize as one goes along, since doing so makes searching and linking things together dramatically easier.

      How has he missed the power of this from the start?! This is really a massive flaw in his method from my perspective, particularly as he quotes John Locke's work on the topic.

      Did I maybe miss some of this in earlier sections when he quoted John Locke? Double check, just in case, but this is certainly the section of the book to discuss using these ideas!

    4. Our notes are things to use, not just things to collect.

      Many people take notes, they just don't know where they're taking them to. It's having a concrete use for them after you've written them down that makes all the difference. At this point, most would say that they do read back over them, but this generally creates a false sense of learning through familiarity. Better would be turning them into spaced repetition flashcards, or linking them to ideas and thoughts already in our field of knowledge to see if and where they fit into our worldview.

      link to - research on false feeling of knowledge by re-reading notes in Ahrens

    1. Looking for advice on how to adapt antinet ideas for my own system

      Holiday's system is roughly similar to the idea of a commonplace book, just kept and maintained on index cards instead of a notebook. He also seems to advocate for keeping separate boxes for each project which I find to be odd advice, though it's also roughly the same advice suggested by Twyla Tharp's The Creative Habit and Tiago Forte's recent book Building a Second Brain which provides a framing that seems geared more toward broader productivity rather than either the commonplacing or zettelkasten traditions.

      I suspect that if you're not linking discrete ideas, you'll get far more value out of your system by practicing profligate indexing terms on your discrete ideas. Two topical/subject headings on an individual idea seem horrifically limiting, much less on an entire article and even worse on a whole book. Fewer index topics are easier to get away with in a digital system which allows search across your corpus, but can be painfully limiting in a pen/paper system.

      Most paperbound commonplaces index topics against page numbers, but it's not clear to me how you're numbering (or not) your system to be able to more easily cross reference your notes with an index. Looking at Luhmann's index as an example (https://niklas-luhmann-archiv.de/bestand/zettelkasten/zettel/ZK_2_SW1_001_V) might be illustrative so you can follow along, but if you're not using his numbering system or linking your cards/ideas, then you could simply use consecutive numbers 1, 2, 3, 4, ..., 92000, 92001, ... on your cards to index against to easily find the cards you're after. It almost sounds to me that with your current filing system, you'd have to duplicate your cards to be able to file them under more than one topic. This obviously isn't ideal.

    1. Healthy People 2030 Resources

      Love the links. Just a click to get more information. I believe students will be more likely to use the links now that it's so easy to get to them.

    1. Reviewer #3 (Public Review):

      This manuscript wades into a research area that has risen to prominence during the COVID-19 pandemic, namely the estimation of time-varying quantities to describe transmission dynamics, based on case data collected in a given location. The authors focus on the interesting and challenging setting of low-incidence periods that arise after epidemic waves, when local spread of the virus has been contained, but new cases continue to be seeded by travelers and local spread potential can change as control measures are relaxed. There are important questions that arise in this context, such as when it is safe to declare the pathogen locally eliminated, and how to detect a flare-up quickly enough to stamp it out.

      The authors propose a new framework, made up of a smoothed estimate of the local reproductive number, R, and another quantity they call Z, which is a measure of confidence that the local epidemic has been eliminated. They apply this framework to three public data sets of COVID-19 case reports (in New Zealand, Hong Kong and Victoria, Australia), each spanning multiple waves of infections interspersed with quieter periods when most cases arise from importation. They show how the smoothed R estimates align with the reported case data, and accurately capture periods of supercritical (R>1, so epidemics can take off) and subcritical (R<1, so epidemics wane) local transmission. They also show how the Z metric fluctuates through time, rising to near 100% at a few points which correspond closely to official declarations of elimination in the respective settings. The authors draw some parallels between their inferred R and Z metrics and the changes in control policies on the ground. They also highlight a number of points where the R and Z metrics seem to anticipate changes in the epidemiology on the ground, which are interpreted as advance 'signals' or 'early-warning' of ensuing waves of cases. This interpretation seems to underlie the manuscript's overall framing in terms of 'early-warning signals' that can be used 'in real time'.

      Taken at face value, these are exciting claims that could form the basis of a useful public health tool. However I was not convinced that the framework was actually making these predictions in real time, i.e. strictly prospectively using available data. The approach would still have value if applied retrospectively, particularly with regard to understanding the impact of interventions applied in each setting. To this end, a more formal analysis of the relation between control measures and the R and Z metrics would benefit the paper.

      Strengths

      The paper is exemplary in clearly delineating the roles of importation versus local transmission in shaping case incidence during these low-incidence periods. This is a crucial distinction in this context, which is too often blurred.

      The authors also innovate by bringing a suite of Bayesian filtering and smoothing techniques to bear on inferring R from these data, with the goal of extracting the cleanest signal possible from the noisy data. These approaches are well contextualized relative to more standard techniques in epidemiology, and appear to bear fruit in terms of smooth and stable estimates. However, it is important to note that this manuscript is not the primary report on these methods; the authors have written up this work elsewhere (ref. 16) and it is not described with sufficient detail for this manuscript to stand alone.

      It is an interesting and valuable idea to derive a metric (Z) that explicitly estimates the degree of confidence that the pathogen has been eliminated locally. Again, the present manuscript builds closely on prior work by the authors (ref. 15), with the innovation of blending the earlier theory with the new Bayesian smoothed estimates of R.

      The selected data sets are perfectly suited to the problem at hand, and analyzing three parallel case studies allows for the behavior of the R-Z framework to be observed across contexts, which is valuable.

      Weaknesses

      As presented, the manuscript does not seem to show real-time early-warning signals, as I understand those terms. The forward-backward smoothing algorithms that form the backbone of the study estimate R_s (i.e. the value of R at time s) using case data from both before and after time s. That is, the algorithm relies on knowledge of future events and so it cannot be said to provide early warning in any practical sense. Similarly, the estimates of Z draw upon the same 'smoothing posterior' q_s, so they also rely on future knowledge. (I doubted my understanding of this point, given the strong framing of the manuscript and limited methodological details, but the full explication of the method in ref. 16 is quite clear that the 'filtering posterior' p_s is suitable for real-time estimation, but the smoothing approach is retrospective and requires knowledge of the full dataset.)
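      The reviewer's distinction between the filtering posterior (real-time) and the smoothing posterior (retrospective) can be made concrete with a toy two-state hidden Markov model. All parameter values below are invented for illustration; this is not the authors' renewal model, just a minimal forward-backward sketch showing that the smoothed estimate at an early time point already reflects observations that have not yet occurred.

```python
# Toy 2-state HMM (state 0 = subcritical, state 1 = supercritical) with
# made-up parameters, illustrating why smoothed estimates are retrospective.
def forward_backward(obs, trans, emit, prior):
    n, S = len(obs), len(prior)
    # Forward pass: alpha[t][j] = P(state_t = j | obs[0..t])  (filtering)
    a = [prior[j] * emit[j][obs[0]] for j in range(S)]
    norm = sum(a)
    alpha = [[x / norm for x in a]]
    for t in range(1, n):
        a = [sum(alpha[t - 1][i] * trans[i][j] for i in range(S)) * emit[j][obs[t]]
             for j in range(S)]
        norm = sum(a)
        alpha.append([x / norm for x in a])
    # Backward pass: beta[t][j] proportional to P(obs[t+1..] | state_t = j)
    beta = [[1.0] * S for _ in range(n)]
    for t in range(n - 2, -1, -1):
        b = [sum(trans[j][k] * emit[k][obs[t + 1]] * beta[t + 1][k] for k in range(S))
             for j in range(S)]
        norm = sum(b)
        beta[t] = [x / norm for x in b]
    # Smoothing posterior: combines forward (past) and backward (future) info.
    smooth = []
    for t in range(n):
        s = [alpha[t][j] * beta[t][j] for j in range(S)]
        norm = sum(s)
        smooth.append([x / norm for x in s])
    return alpha, smooth  # alpha = filtered (real-time), smooth = retrospective

trans = [[0.9, 0.1], [0.1, 0.9]]   # sticky state transitions (hypothetical)
emit = [[0.8, 0.2], [0.3, 0.7]]    # state 0 favors low case counts (obs 0)
prior = [0.5, 0.5]
obs = [0, 0, 1, 1, 1, 1]           # quiet start, then sustained high counts
filt, smooth = forward_backward(obs, trans, emit, prior)
# At t=1 the filtered estimate sees only the quiet data, but the smoothed
# estimate already "knows" about the coming wave and assigns much higher
# probability to the supercritical state.
print(filt[1][1], smooth[1][1])
```

      With these numbers the filtered probability of being supercritical at t=1 stays small, while the smoothed probability is several times larger, which is exactly the reviewer's point: a smoothed trajectory can only "anticipate" a wave because it has already seen it.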

      Viewed in this light, the 'early-warning signals' in the Results are actually just smoothing of the yet-to-occur case data, and thus sadly are much less exciting. It did seem too good to be true. If I have understood correctly, then the current framing of the work seems inappropriate - unless the authors can show that R and Z metrics estimated strictly from past data can provide reliable signals of coming events.

      An alternative approach would be to use the framework as a retrospective tool, and use it to build quantitative understanding of the impact of control measures and to revisit the timing of declarations of elimination. Table 1 goes some distance toward describing the relationships between R and Z values and these policy shifts and announcements, but I struggled to pull much value from it. The table and associated text mostly come across as a series of anecdotes where R fell after NPIs were imposed, or rose again when local transmission occurred, but there is no analysis that takes advantage of the more refined estimates of R the authors have obtained with their smoothing approach. One issue is that the time windows included in the table are not contiguous, so all the vignettes feel disjointed.

      As presented, while the concept of the Z metric is attractive, it was hard to discern any conclusions about how to make use of its value. In two of the datasets it rose to near 100%, which is a clear signal of elimination, but as noted these were periods when the WHO rule of thumb (28 days without new cases) was sufficient. At some other points, the authors emphasize the implications of Z dropping close to 0% (e.g. at the top of page 7: on July 5 in Hong Kong, Z ≈ 0% despite 21 days without local cases, and the authors highlight the contrast with the WHO rule). However these findings clearly arise from the smoothing of future data mentioned above (i.e. on July 5 in Hong Kong, R is rising to supercritical levels based on advance knowledge of the rapid rise in cases in the next few weeks). Thus these findings are not relevant to real-time decision support. Finally, there are several periods where Z fluctuates around 20-50% for reasons that are hard to discern (e.g. July in New Zealand, or April-May in Hong Kong). The authors write that the Z score may exhibit a peak due to extinction of a particular viral lineage in Hong Kong, while other lineages continued to circulate. It is hard to grasp how this interpretation could apply, given the aggregated nature of the data; more evidence, or more refined arguments, are needed for this to be convincing.

      In the big picture, the proposed framework is based on two quantities, R and Z, but there is no systematic analysis of how to interpret these two quantities jointly. It would be valuable, for instance, to see how these metrics perform on a two-dimensional R-Z phase space.

      The authors acknowledge a number of assumptions and data requirements needed for this approach, as presented. These include perfect case observation, no asymptomatic transmission, perfect identification of imported versus locally infected cases, and no delays in reporting. The authors state that the excellent surveillance systems in their case-study locales minimize the impact of these assumptions, but the same cannot be said of most other places around the world. Digging deeper into the epidemiology, the distribution of serial intervals (a crucial input to the algorithm) is assumed to be invariant, even though it's been demonstrated to change when interventions are imposed (ref. 26), i.e. exactly the conditions of interest. Finally, superspreading is a prominent feature of the COVID-19 epidemiology (as nicely documented for Hong Kong, by one of the authors), but is not addressed by this model beyond allowing subtle fluctuations in R from day to day. Taken together, these strong assumptions and omissions raise questions about the real-world reliability of this framework. Given that the point of the manuscript is to develop more refined quantitative metrics, and that most of these assumptions will be violated in most settings, it would be valuable to demonstrate that the framework is robust to these violations.

    1. Reviewer #1 (Public Review):

      This study conducted by Akito Otubo et. al has two goals: 1. evaluate the usefulness of using formalin fixed frozen tissue that has been stored for several years, and 2. characterization of arginine vasopressin (AVP)-producing neurons in macaque hypothalamo-pituitary axis using immunoelectron microscopy approaches. The authors seek to follow mouse studies that suggest co-release of glutamate and corticotrophin-releasing factor (CRF) by magnocellular neurosecretory neurons in the supraoptic nucleus (SON) and paraventricular nucleus (PVN) of the hypothalamus. The specific goal being to ask if a similar co-release mechanism occurs in the primate AVP/CRF system.

      The major strength of the results is that they do show antigenicity in formalin fixed tissue, but the major weaknesses listed below leave me unconvinced by their conclusions that, "We found that both ultrastructure and immunoreactivity are sufficiently preserved in macaque brain tissues stored long-term for light microscopy", and thus I do not believe they have achieved their aims. There are three main issues I have: 1. The quality of the tissue is extremely poor as there are numerous membrane breaks making it near impossible to make out cellular structures. For instance, without the antibody staining to guide the eye, I question whether any cellular structures could be made out, 2. it's not stated whether the antibodies used in this study were the ones that just happened to work or if antibodies work universally; such burden of proof is essential if the authors wish to claim that old formalin fixed tissue is of value, and 3. there's a significant lack of controls on two fronts: a. controls with a properly fixed (fresh, glutaraldehyde, etc.) brain showing antigenicity is similar to the old formalin tissue, and b. negative controls for the co-release model that prove the immunostaining is specific. For example, staining for a protein that shouldn't localize in the PCV (and hence not co-localize with NPII or copeptin). Similar negative controls are lacking for the immunofluorescence data. The burden of extensive controls is on these authors if they wish to establish that older tissue is of scientific value. Overall, the conclusions cannot stand without controls showing 1. old formalin fixed tissue and fresh tissue show equivalent results, 2. antigenicity is in fact real, and 3. antigenicity is broadly true for several biological markers.

    1. Reviewer #2 (Public Review):

      A summary of what the authors were trying to achieve:

      The authors have developed an approach to prediction of T cell receptor:peptide-MHC (TCR:pMHC) interactions that relies on 3D model building (with published tools) followed by feature extraction and machine learning. The goal is to use structural and energetic features extracted from 3D models to discriminate binding from non-binding TCR:pMHC pairs. They are not the first to make such an attempt (e.g., Lanzarotti, Marcatili, Nielsen, Mol. Imm. 2018), but they provide a detailed critical evaluation of the approach that sets the stage for future attempts. The hope is that structure-based approaches may have better power to generalize from limited training data and/or to model unseen pMHCs.

      An account of the major strengths and weaknesses of the methods and results:

      The authors first report (section 4.1) that their structural and energetic features contain information on binding mode, highlighting complexes with reversed binding polarity, for example, and partly discriminating MHC class I from MHC class II structures. This is encouraging but not terribly surprising. Also, with regard to MHC I vs II discrimination, it is not clear how the class II peptides are registered with respect to one another. This needs to be done by alignment on MHC and mapping of structurally-corresponding peptide positions, since the extent of N- and C-terminal peptide overhangs varies between structures and is largely irrelevant to the docking mode. Interactions between the TCR and MHC are ignored in the feature extraction process; it's possible that including these interactions could improve performance. The authors state: "To be noted that not all structures could be successfully modelled by TCRpMHC models, and so we could not submit them to the feature extraction pipeline." It's unclear what effect this could have on the results: if the modeling failures are cases of structures for which no good CDR templates could be identified, then perhaps this could bias the results.

      Section 4.2 reports a negative result: unsupervised learning applied to the extracted features is unable to discriminate binding from non-binding complexes. This suggests that there is not likely to be a simple energetic feature, such as overall binding energy, that reliably discriminates the true binders. In Section 4.3, the authors turn to supervised learning, in which training examples inform prediction by a classifier. One finding is that the pure-sequence approach using Atchley-factor encoding of the TCR:pMHC outperforms the structure-based approaches, though not by much. A combined model incorporating Atchley factors and structural features does slightly better. These results are a little hard to interpret because we don't know how challenging the 10-fold internal cross-validation is. It doesn't sound like there is any attempt to avoid testing on TCR:pMHCs that are nearly identical to TCR:pMHCs in the training sets, and the structural database is highly redundant, containing many slight variants of well-studied systems. It's also not clear how overlap between the template database used for 3D modeling and the testing set was handled; my guess is that since the model building is an external tool this was not controlled. Together, these factors may explain why the results on independent test sets are, for the most part, significantly worse than the cross-validation results. Another take-home message from the independent validation is that the sequence-only method seems to outperform the sequence+structure or structure-only methods. Although these are described as "out-of-sample validation", it's not clear how different these independent TCR:pMHC examples are from the structure dataset on which the model was trained.
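      The redundancy concern raised here, testing on TCR:pMHC pairs nearly identical to training examples, is usually guarded against by splitting at the level of similarity groups rather than individual examples. A minimal sketch of such a group-aware split follows; the CDR3 strings and the use of exact-sequence identity as the grouping rule are illustrative stand-ins (a real pipeline would cluster by a similarity threshold):

```python
# Group-aware cross-validation sketch: examples sharing a group label
# (e.g. a cluster of near-identical CDR3 sequences) never appear in both
# the training and the test fold, preventing trivial memorization.
def group_kfold(groups, k):
    """Yield (train_idx, test_idx) pairs; each group stays in one fold."""
    unique = sorted(set(groups))
    folds = {g: i % k for i, g in enumerate(unique)}  # round-robin group assignment
    for fold in range(k):
        test = [i for i, g in enumerate(groups) if folds[g] == fold]
        train = [i for i, g in enumerate(groups) if folds[g] != fold]
        yield train, test

# Six examples, but only three distinct CDR3 clusters (hypothetical labels):
cdr3_clusters = ["CASSLG", "CASSLG", "CAIRDT", "CAIRDT", "CSVGTG", "CSVGTG"]
for train, test in group_kfold(cdr3_clusters, k=3):
    train_groups = {cdr3_clusters[i] for i in train}
    test_groups = {cdr3_clusters[i] for i in test}
    assert not (train_groups & test_groups)  # no cluster leaks across the split
```

      A plain random K-fold on the same six examples would routinely place one copy of a duplicated sequence in training and the other in test, which is exactly the kind of leakage that can inflate internal cross-validation numbers relative to independent test sets.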

      Sections 4.4 and 4.5 report that prediction accuracy varies significantly across epitopes, and this is in part determined by sequence similarity to the structural database (which provides templates for modeling and also constitutes the training set for the model). In section 4.6, the authors determine that the model does not appear to be able to predict binding affinity (as opposed to the binary decision, binding versus non-binding). Finally, in section 4.7 the authors benchmark the predictor against two publicly available, sequence-based predictors. When predicting for epitopes present in their training sets, all methods do reasonably well, with the edge going to the sequence-based ERGO method. When predicting for epitopes not present in their training sets, none of the methods perform very well. The authors state that "these results suggest that the structure-based models developed in this study perform as well as the state-of-the-art sequence-based models in predicting binding to novel pMHC, despite learning from a much smaller training set." This may be true, but the predictions themselves are not much better than random guessing (AUROCs around 0.5-0.6).

      An appraisal of whether the authors achieved their aims, and whether the results support their conclusions:

      I'm doubtful that the proposed methods will form the basis of a practical prediction algorithm. In the absence of ability to generalize to unseen epitopes, simpler sequence-based approaches that leverage the ever-growing dataset of TCR:pMHC interactions seem preferable. I still think the study has value as a template and roadmap for future efforts, and a baseline for comparison. For me, a key unanswered question is whether the model-derived structural features are just a different, slightly noisier way of memorizing sequence, or actually contain orthogonal information that can enhance predictions. It might be possible to gain insight into this question by looking more carefully at the impact of model-building accuracy on performance (the authors use sequence similarity as a proxy, but this is confounded by overlap between the training set and the template set used for modeling). If model-building really adds something, it seems plausible that it does so by accurately capturing physical features of the true binding mode.

      A discussion of the likely impact of the work on the field, and the utility of the methods and data to the community:

      As stated above, I think the present work will have a positive impact on the field of TCR:pMHC prediction by critically evaluating the structure-based approach (and also by testing two previously published methods on independent data). I am less convinced of the utility of the specific methods than of the overall conceptual framework, evaluation procedures, and training/testing sets.

      Any additional context you think would help readers interpret or understand the significance of the work:

    1. I think you're coming from an anthropocentric perspective and behaviorist paradigm, where natural processes are understood through the lens of humans and human actions ("The focus is on the efficient production of useful goods in ways that require minimal maintenance by letting other creatures do all the work for you"), and the reason someone does something is because it benefits themselves or other humans ("without that core benefit nobody could do it even if they wanted to and have a viable farm").

      The ethical principle of "Fair Share" isn't just about the yields the landowner has, but also the yields other inhabitants of an ecology have. For people who are motivated by stewardship, for example, humans obtaining benefits is not elevated into its own thing. As an example, some of the Native tribes would say something along these lines: when you plant, one is for the plants, one is for the animals, one is for the birds, one is for us. It is certainly not about maximizing production efficiencies for the benefit of humans alone. That motivation and attitude shapes the way someone views and experiences their life and their place, and in turn shapes how we go about caring for land, caring for people, and fair share.

      I know I'm cheating here a bit. I'm using the work of Carol Sanford to identify worldview and paradigm, and that way of thinking through these things is not spelled out in the original works of Mollison and Holmgren. Sanford's work on regenerative paradigms and a living-systems worldview goes a long way towards sorting out the different ways people approach things in the permaculture community, and is generalizable more broadly than food systems.

      Regeneration is a characteristic exclusive to living systems. It's not something that can be approached from a worldview in which everything is a machine, or the paradigm that one can control behavior through incentives and disincentives. Only living systems can regenerate. It's the broader paradigm from which "your food (and other resources) produce themselves" comes. Living systems are capable of growing and adapting on their own, and they are nested: you and I, within larger living systems of family, community, organization, ecology. It is because of regeneration that "food and other resources produce themselves".

      My point in all of this is that there is a diversity of motivations and views, and the view that "without that core benefit nobody could do it even if they wanted to and have a viable farm" is not as universal as it sounds. "The core reason to do permaculture is that your food (and other resources) produce themselves" might be your core reason, but it is not true that it is the reason everyone in the permaculture community applies permaculture.
    1. I'd love something similar to automatically crawl and index every site I visit. I'm forever losing stuff. I know I saw it but I can't remember where.

      Replies:

      - chillpenguin: I use BrowserParrot for this. Works really well. https://www.browserparrot.com/
      - thinkmassive: ArchiveBox documents how to automatically archive links from your browser history: https://github.com/ArchiveBox/ArchiveBox/wiki/Usage#Import-l...
      - mttjj: This is Mac only and I have no affiliation other than I like this developer, but your request reminded me that he just launched this app: https://andadinosaur.com/launch-history-book
      - akrymski: I use Google for this. It's really annoyingly good at finding previously visited pages.
      - asselinpaul: https://heyday.xyz comes to mind
      - fudged71: Vortimo
    1. Starting in the 1990s, Stanford political science professor Shanto Iyengar exposed people to two kinds of TV news stories: wider-lens stories (which he called “thematic” and which focused on broader trends or systemic issues — like, say, the causes of poverty) and narrow-lens stories (which he labeled “episodic” and which focused on one individual or event — say, for example, one welfare mother or homeless man).Again and again, people who watched the narrow-lens stories on the welfare mother were more likely to blame individuals for poverty afterwards — even if the story of the welfare mother was compassionately rendered. By contrast, people who saw the wider-lens stories were more likely to blame government and society for the problems of poverty. The wider the lens, the wider the blame, in other words.In reality, most stories include both wide and narrow-lens moments; a feature on a welfare mother will still invariably include a few lines about the status of job-training programs or government spending. But as Iyengar showed in his book Is Anyone Responsible?, TV news segments are dominated by a narrow focus. As a result, TV news unintentionally lets politicians off the hook, Iyengar wrote, because of the framing of most stories. The narrow-lens nudges the public to hold individuals accountable for the ills of society — rather than corporate leaders or government officials. We don’t connect the dots.Great storytelling always zooms in on individual people or incidents; I don’t know many other ways to bring a complicated problem to life in ways that people will remember. But if journalists don’t then zoom out again — connecting the welfare mother or, say, the controversial sculpture to a larger problem — then the news media just feeds into a human bias. If we’re all focused on whatever small threat is right in front of us, it’s easy to miss the big catastrophe unfolding around us.

      I wonder if there were other differences between the stories? Lots of people are cowed by quantitative arguments, which are in short supply in individuals' stories.

    1. Maybe it’s time we talk about it?

      Yes, long overdue!

      Coming to terms with the potential near-term extinction of our species, and many others along with it, is a macro-level reflection of the personal, inescapable existential crisis that all human and other living beings have to contend with: our own individual mortality. Our personal death can also be interpreted as an extinction event - all appearances are extinguished.

      The self-created eco-crisis, with its accelerating degradation of nature, cannot help but touch a nerve, because it is now becoming a daily reminder of our collective vulnerability. Mortality salience on this scale can create enormous amounts of anxiety. We can no longer hide from our mortality when the news is blaring large-scale changes every few weeks. It leaves us feeling helpless... just like we are at the time of our own personal death.

      In a world that is in denial of death, as pointed out by Ernest Becker in his 1973 Pulitzer-prize-winning book of the same title, the signs of a climate system and biosphere in collapse are a frightening reminder of our own death.

      Straying from the natural wonderment each human being is born with, we already condition ourselves to live with an existential dread as Becker pointed out:

      "Man is literally split in two: he has an awareness of his own splendid uniqueness in that he sticks out of nature with a towering majesty, and yet he goes back into the ground a few feet in order to blindly and dumbly rot and disappear forever."

      Beckerian writer Glenn Hughes explores a way to authentically confront this dread. Three paragraphs from Hughes' article point this out, citing Socrates as exemplary:

      "Now Becker doesn’t always emphasize this second possibility of authentic faith. One can get the impression from much of his work that any affirmation of enduring meaning is simply a denial of death and the embrace of a lie. But I believe the view expressed in the fifth chapter of The Denial of Death is his more nuanced and genuine position. And I think it will be worthwhile to develop his idea of a courageous breaking away from culturally-supported immortality systems by looking back in history to a character who many people have thought of as an epitome of a self-realized person, someone who neither accepts his culture’s standardized hero-systems, nor fears death: the philosopher Socrates."

      "Death is a mystery. Maybe it is annihilation. One simply can’t know otherwise. Socrates is psychologically open to his physical death and possible utter annihilation. But still this does not unnerve him. And if we pursue the question: why not?–we do not have to look far in Plato’s portrait of Socrates for some answers. Plato understood, and captured in his Dialogues, a crucial element in the shaping of Socrates’ character: his willingness to let the fact of death fully penetrate his consciousness. This experience of being fully open to death is so important to Socrates that he makes a point of using it to define his way of life, the life of a philosophos–a “lover of wisdom.”"

      "So we have come to the crucial point. The Socratic catharsis is a matter of letting death penetrate the self. It is the acceptance of the perishing of everything that will perish. In this acceptance a person imaginatively experiences the death of the body and the possibility of complete annihilation. This is “to ‘taste” death with the lips of your living body [so] that you … know emotionally that you are a creature who will die; “it is the passage into nothing” in which “a corner is turned within one.” And it is this very experience, and no other, that enables a person to act with genuine moral freedom and autonomy, guided by morals and not just attraction and impulses."

      https://ernestbecker.org/lecture-6-denial/

    2. It is now impossible for the world’s leaders to say that they “didn’t know” that this was going on, and that we didn’t have the power to prevent it all along. We scientists have been working hard, collecting evidence, writing reports, and presenting it all to the world’s leaders and the broader public. No one can honestly say that we haven’t been warning the world for decades.

      And therein lies the great mystery. How is it that, with this specific way of knowing, we can still ignore the overwhelming science? It's not just a small minority either, but the majority of the elites. As research on climate communications from Yale and other leading institutions has discovered, it is not so much a knowledge-deficit problem as it is a sociological/psychological ingroup/outgroup conformity-bias problem.

      This would suggest that the scientific community must rapidly pivot and place more resources on studying this important area to find the leverage points for penetrating conformity bias.

    1. Groups in arts education rail against the loss of music, dance, and art in schools and indicate that it's important to a balanced education.

      Why has no one embedded these learning tools, for yes they can be just that, into other spaces within classrooms? Indigenous educators over the millennia have done just this in passing on their societal and cultural knowledge. Why have we lost these teaching methods? Why don't we reintroduce them? How can classrooms and the tools within them become mnemonic media to assist both teachers and learners?

      Perhaps we need to bring back examples of how to do these things at the higher levels? I've seen exercises in my daughter's grade school classrooms that bring art and manipulatives into the classroom at a base level, but are they being done specifically for these mnemonic reasons?

      Michael Nielsen and Andy Matuschak have been working at creating a mnemonic medium for areas like quantum mechanics, relying in part on spaced repetition. Why don't they go further and add in dance, movement, art, and music to aid in the process? This could be particularly useful for creating better neurodiverse outcomes as well. Education should be more multi-modal, more oral, and cease its unending reliance on literacy as its sole tool.

      How and where can we create a set of example exercises at various grade levels (similar to rites of knowledge initiation in Indigenous cultures, for lack of specific Western language) that embed all of these methods?

      Link to:

      - Ideas in The Extended Brain about movement, space, etc.
      - Nielsen/Matuschak mnemonic media work

    1. And to enable those operations, which range from counting to sorting and from modeling to visualizing, the data must be placed into some kind of category—if not always into a conceptual category like gender, then at the least into a computational category like Boolean

      To bring up an earlier point: while quantitatively recording data to be used computationally may be empowering, I want to reiterate how it can be harmful as well. The way computers work, where everything must fit into a category, is a real issue, because those categories are made by humans too; they carry all the biases and problems that we have, and we simply shape them to our needs in the moment.

    1. While he ultimately appointed a number of HRS veterans, he also hired many new people, some of whom had worked for him in Illinois,

      I struggle with this type of hiring in general. Yes, I agree it's important to get good people into the right positions, and if you know that to be true from past experience, great. However, it is a bit like nepotism, which is honestly just annoying.

    1. IN SUMMARY:- we will have NO HOPE of release from the iniquities, paradoxes, inconsistencies and falsities of objective rationalisation until and unless we recognise the receptive influence of omnipresent space and the responsive flow of energetic information around every body. ONLY THEN will a truly compassionate and regenerative way of life become possible.

      To be AND not to be, that is the answer! Contradictions exist within our symbolic framing, but that's alright; it's just language, wonderFULL language!

    1. First, I think it's important to remember that net zero is a new phrase; this language, this framing of net zero, is something that just appeared in the last few years. If you look at the SR1.5 report (2018), it's mentioned 16 times in the Summary for Policymakers. If you look at AR5, the previous report from the IPCC, and its synthesis report, it's not mentioned once in the Summary for Policymakers. Look at the UK Committee on Climate Change's sixth budget report (a long report, 427 pages): it's on every page, referred to somewhere between three thousand and five thousand times. Look at the previous, fifth budget report from the Committee on Climate Change, in 2015: it's not mentioned once. Now, it is true to say that the language of net cumulative emissions has been referred to in various ways within the science, but the appealing translation and the ubiquitous use of net zero by everyone is a very new phenomenon, and one I think we've taken on board unproblematically because it allows us to avoid near-term action on climate change, and we can hide all sorts behind it.

      So it's important to recognize that net zero (net zero 2050, net zero 2045 for Sweden) is, firstly, not based on the concept of a total carbon budget. It's interesting to note that the UK previously had legislation that was based on a total carbon budget for the UK; I think that budget was too large, but it was deemed an appropriate contribution to staying below 2 degrees centigrade. Now that's gone, and we simply have this net zero 2050 framing. This whole language moves the debate from what we need to do today, which is what carbon budgets force us to face, off to some far-off point, 2045 or 2050. The policymakers in Sweden and the UK will not still be policymakers in 2045 or 2050; they'll either be dead or retired, as indeed will the scientists behind a lot of this net zero language. In that sense, net zero is a generational passing of the challenge, of the buck, to our children and our children's children.

      It's also worth bearing in mind that net zero typically assumes some sort of multi-layered substitution: between different greenhouse gases and different sources (carbon dioxide from a car can be compared with nitrous oxide emissions from agricultural fertilizer, but these are very different things) and across decades (carbon dioxide from a flight we take today can be considered in relation to carbon capture in a tree that's planted in 2050 and growing in 2070). This assumption within net zero that a ton is a ton is a ton, regardless of different chemistries, different atmospheric lifetimes of the gases, different levels of certainty, and indeed different levels of risk, is incredibly dangerous. And again, it's another thing that makes net zero attractive and appealing in a Machiavellian way, because it allows us to hide all sorts of things behind this language.

      The other thing about net zero is that, perhaps with no exceptions, but typically anyway, it relies on huge, planetary-scale carbon dioxide removal (CDR; that's the latest acronym, and I'm sure there'll be another one out in the next year or two). Carbon dioxide removal captures two important elements: first, negative emission technologies (NETs), as they're often referred to, and second, nature-based solutions (NBS). One approach uses technology to remove carbon dioxide from the atmosphere; the other uses various nature-based approaches, like planting trees or peat bog restoration, that are claimed to absorb carbon dioxide. And just to get a sense of the scale of negative emissions assumed in almost every single 1.5 and 2 degree scenario, at the global level but at national levels as well: we're typically assuming hundreds of billions of tons of carbon dioxide being absorbed from the atmosphere, most of it post-2050 and quite a lot of it beyond 2100. Again, look at those dates. Who in the scientific community promoting these, who in the policy realm promoting these, is going to still be at work in 2050 and 2100? Some of the early-career researchers, possibly some of the younger policymakers, but most of us will be dead or retired.

      To give another flavor, if those numbers don't mean a lot to you: we're assuming that technologies that are today, at best, small pilot schemes will be ramped up, in virtually every single scenario, to something akin in size to the current global oil and gas industry. That would be fine if it were one in ten scenarios, or five in a hundred, but when virtually every scenario does it, it demonstrates the deep level of systemic bias we've got now that we've all bought into this language of net zero.

      Just to outline my position on carbon dioxide removal, because it's often said that I'm opposed to it, and that's simply wrong: I would like to see well-funded research and development programs into negative emission technologies, nature-based solutions and so forth, and potentially deploy them if they meet stringent sustainability criteria (and I'll reiterate that: stringent sustainability criteria). But we should mitigate, we should cut our emissions, today, assuming that these carbon dioxide removal techniques of one sort or another do not work at scale.

      Another important factor to bear in mind (and there's a lot of double counting that goes on here, as far as I can tell) is that we're going to require some level of carbon dioxide removal, because there are going to be a lot of residual greenhouse gas emissions: not CO2, but principally methane and N2O (nitrous oxide from fertilizer use), which will come from agriculture anyway if you're going to feed 9 billion people. There's a lot of uncertainty about those numbers, but they're somewhere around 6 to 10 billion tons of carbon dioxide equivalent every single year. So we'll have to find some way of compensating for the warming from feeding the world's population. Certainly there are plenty of things we can do with our eating habits and our agricultural practices, but it still looks like there will be a lot of emissions from the agricultural sector, and therefore we need real zero emissions from energy. We cannot be using all of these other techniques, NETs, NBS and so forth, to allow us to carry on with our high energy use.

      Net zero has become, if you like, a policy framework for all. Some argue (and there's been some discussion in some of the journal papers around climate change recently) that this is one of its real strengths: it brings everyone together. But in my view it's so vague that it seriously undermines the need for immediate and deep cuts in emissions. I can see some merit in an approach that brings people together, but if it sells everything out in the process, then I think it's actually more dangerous than it is beneficial, and I think net zero very much falls into that category. I'd just like to use the UK now as an example of why I come to that conclusion.

      Suddenly the new term "Net Zero" appears thousands of times in reports like the UK Committee on Climate Change's sixth budget report, having barely existed a few years earlier. Kevin unpacks how misleading this concept can be, allowing businesses and governments to kick the can down the road and make no real effort towards GHG reductions today: procrastination that is deadly for our civilization.

      At the 15-minute mark, Kevin goes into Carbon Dioxide Removal (CDR) and Negative Emission Technologies (NETs), which are an important part of the net zero concept. These are speculative technologies at best, which today show no sign of scalability.

    1. it's really worth reading some of the things that they're saying on climate change now. And so what about 2 degrees C? That's the thousand-gigaton pathway, the two degrees. You look at the gap between those two, and it's just enormous. We're all part of this; that's where we know we have to go from the science, and that's what we keep telling other parts of the world to try to achieve. The problem with that, and as an engineer this is quite depressing in some respects, is that this part at the beginning, where we are now, is too early for low-carbon supply: you cannot build your way out of this with bits of engineering kit. And that is quite depressing, because it leaves us with the social implications of what you have to do otherwise.

      But I just want to test that assumption. There's been a lot of discussion, I don't know about within Iceland, but in the UK quite a lot: many environmentalists have swapped over, saying they think nuclear power is the answer, or one of the major answers, to this. I remain agnostic about nuclear power. It's very low carbon, five to 15 grams of carbon dioxide per kilowatt hour, so it's similar to renewables and five to ten times lower than carbon capture and storage. It has lots of other issues, but it's very low carbon. But let's put a bit of perspective on this. We consume in total about a hundred thousand terawatt hours of energy around the globe, a very large amount of energy. For those of you not familiar with these units, global electricity consumption is about 20,000 terawatt hours, so electricity is about 20% of our final energy consumption. Nuclear provides about eleven and a half percent of the electricity around the globe, which means nuclear provides about two and a half percent of global energy demand. About two and a half percent, from 435 nuclear power stations. If you wanted to provide 25% of the world's energy demand, you'd probably need something in the region of three or four thousand new nuclear power stations to be built in the next 30 years. Three or four thousand new nuclear power stations, just to make a decent dent in our energy consumption; and that assumes our energy consumption remains static, and it's not, it's going up (we're building 70). So, just to put some sense on this: you hear this with every technology, whether it's wind, wave, tidal, CCS, all these big bits of technology that are going to solve the problem. You cannot build them fast enough to get away from the fact that we're going to blow our carbon budget. And that's a really uncomfortable message, because no one wants to hear it, because the repercussion is that we have to reduce our energy demand. We have to reduce demand now. The supply side is really important, I'm not saying it's not, it is essential; but if we do not do something about demand, we will not be able to hold to probably even three degrees C. And that's a global analysis.

      And the irony would be: we have signed up repeatedly on the basis of equity, and when we say that, we normally mean the poorer parts of the world will be allowed to peak their emissions later than we will in the West. That seems quite a fair thing; I think no one would really argue against the idea of poorer parts of the world having a bit more time and space before they move off fossil fuels, because their use of energy links to their welfare, to their improvements. Now let's imagine that the poorer parts of the world, the non-OECD countries (I usually use the language of non-Annex 1 countries, for those familiar with that sort of IPCC language), including India and China, could peak their emissions by 2025. That is hugely challenging. I think it's just about doable if we show some examples in the West, but only just about possible, as their emissions are going up significantly. They could peak by 2025 before coming down, and if we then got a reduction by, say, 2028, 2029, 2030 of 6 to 8 percent per annum, which again is a massive reduction rate, that is a big challenge for poorer parts of the world; I'm not letting them get away with anything here. If they did all of that, you can work out what carbon budget they would use up over the century, and since you know what the total carbon budget is for two degrees centigrade, you can say what's left for us, the wealthy parts of the world. That seems quite a fair way of looking at this. And if you do it like that, what does that mean for us? It means we'd have to have, and I'm redoing this now and I think it's really well above 10% (this was based on a paper in 2011 using data from 2009-10, so I think this number is probably nearer the 13 to 15 percent mark now), but about a 10 percent per annum reduction rate in emissions, year on year, starting preferably yesterday. That's a 40 percent reduction in our total emissions by 2018. Just think of our own lives: could we reduce our emissions by 40 percent by 2018? I'm sure we could; I'm sure we'll choose not to, but we could do that. And a 70 percent reduction by 2024-25, and basically we would have to be at pretty much zero carbon emissions, not just from electricity but from everything, by 2030 or 2035, that sort of timeframe. That's just the simple, blunt maths that comes out of the carbon budgets, alongside very demanding reduction rates from poorer parts of the world.

      Now, these are radical emission reduction rates. You cannot build your way out; you have to do it with how we consume our energy in the short term. Now, that looks too difficult. So what about four degrees C? That's what you hear all the time: that's too difficult, so what about four degrees C? Because actually the two degrees C we're heading towards is probably nearer three now anyway, betting on the probabilities. So let's think about four degrees C. What it gives you is a larger carbon budget, and we all like that, because it means I can attend more fancy international conferences and we can carry on (rock climbing holidays, in my case) living the lives that we like. So we quite like a larger carbon budget and low rates of mitigation. But what are the impacts? This is not my area, so I'm taking some work here from the Hadley Centre in the UK, who did some analysis with the Foreign and Commonwealth Office, and there's a range of these impacts out there. A four degree C global average means much larger rises on land, because most of the planet is covered in oceans, which take longer to warm up. But think about what that might mean during heat waves, during times when our societies are already under stress. Think of the European heat wave of 2003 (I don't know whether it got to Iceland or not): it was too warm in Western Europe (probably much nicer in Iceland), and twenty to thirty thousand people died across Europe during that period. Now add eight degrees on top of that heat wave, and make it a longer heat wave, and you start to see our infrastructure break down. The cables that bring power to our homes, to our fridges, to our water pumps, are underground and cooled by soil moisture; as the soil moisture evaporates during a prolonged heatwave, those cables cannot carry as much power, so our fridges and water pumps can no longer work, and some will start to break down. So the food in our fridges will be perishing at the same time as our neighbours' food. And you live in London: eight million people, three days of food in the whole city, a heat wave, and the food perishing in the fridges. So you think: bring the food from the ports. But similar problems might be happening across Europe, and anyway the tarmac on UK roads can't deal with those temperatures, so it's melting, so you can't bring the food up from the ports; and the train lines we put in place aren't designed for those temperatures, and they're buckling, so you can't bring the trains up. So you've got 8 million people in London, in an advanced nation, starting to struggle with those sorts of temperature changes. Even in industrialized countries you can imagine this playing out quite negatively, as a whole sequence of events. It's not looking particularly attractive in China either: look at the buildings they're putting up in Shanghai and Beijing and so forth. They've got no thermal mass; these buildings are not going to cope well with high temperatures, and the increases there are absolutely big. And in some parts of the States it could be as high as 10 or 12 degrees of temperature rise. These are all a product of a 4 degree C average temperature.

      We have to peak emissions in the next few years if we want to stay under 1.5 Deg C. This talk was given back in 2015 when IPCC was still setting its sights on 2 Deg C.

      This is a key finding for why supply-side development cannot scale to solve the problem in the short term: it's impossible to scale rapidly enough. Only drastic demand-side reduction can peak emissions and then drop them drastically over the next few years.

      And if we hit a 4 Deg C world, which is not out of the question since current Business As Usual estimates put us on track for between 3 and 5 Deg C, Kevin Anderson cites research about the way infrastructure systems in a city like London would break down.

    1. I should note that the stories that this data allows us to tell are still centered on the enslaved peoples’ enslavement. From the data alone, the enslaved still come across as slaves; the most important thing about them is that they are property. This information alone cannot dismantle white supremacy, or even really reframe our understanding of the slave trade.

      I think this is a really important point and I’m glad you made it. What Enslaved does really well is demonstrate, as you said, the legal processes that made slavery permissible. We’re all familiar with bills of sale and estate sales and shipping records and all of these things that we think of as normal - but to see them in the context of human beings helps us come to terms with just how this happened.

      But you’re right, one of the limitations of this is that the records it shows only depict these people as property. I don’t think that’s the fault of the site; it makes sense given the data they have access to and the scope of the project, but it does really shape how we absorb the information on the site. Given that the website’s pages all look the same (name of person/place/thing, type of place or event, etc.), it’s easy, I think, to lose the humanity. The site helps show the enormity of the slave trade, but when you’re confronted with what seems like an endless number of records (588,000 for people alone), and all those records look pretty similar, your focus is shifted away from the fact that each of these was an individual person who was treated as worse than livestock.

    1. Reviewer #1 (Public Review):

      This paper shows that a principled, interpretable model of auditory stimulus classification can not only capture the behavioural data on which the model was trained but also somewhat accurately predict behaviour for manipulated stimuli. This is a real achievement and gives an opportunity to use the model to probe potential underlying mechanisms. There are two main weaknesses. Firstly, the task is very simple: distinguishing between just two classes of stimuli. Both model and animals may be using shortcuts to solve the task (this is suggested somewhat by Figure 8, which shows the guinea pig and model can both handle time-reversed stimuli). Secondly, the predictions of the model do not appear to be quite as strong as the abstract and text suggest.

      The model uses "maximally informative features" found by randomly initialising 1500 possible features and selecting the 20 most informative (in an information-theoretic sense). This is a really interesting approach to take compared to directly optimising some function to maximise performance at a task, or training a deep neural network. It is suggestive of a plausible biological approach and may serve to avoid overfitting the data. In a machine learning sense, it may be acting as a sort of regulariser to avoid overfitting and improve generalisation. The 'features' used are basically spectro-temporal patterns that are matched by sliding a cross-correlator over the signal and thresholding, which is straightforward and interpretable.

      It is surprising and impressive that the model is able to classify the manipulated stimuli at all. However, I would slightly take issue with the statement that they match behaviour "to a remarkable degree". R^2 values between model and behaviour are 0.444, 0.674, 0.028, 0.011, 0.723, 0.468. For example, in figure 5 the lower R^2 value comes out because the model is not able to use segments as short as the guinea pigs can (which the authors comment on in the results and discussion). In figure 6A (speeding up and slowing down the stimuli), the model does worse than the guinea pigs for faster stimuli and better for slower stimuli, which doesn't qualitatively match (not commented on by the authors). The authors state that the poor match is "likely because of random fluctuations in behavior (e.g. motivation) across conditions that are unrelated to stimulus parameters", but it's not clear why that would be the case for this experiment and not for others, and no evidence is shown for it.

      In figure 11, the authors compare the results of training their model with all classes, versus training only with the classes used in the task, and show that with the latter performance is worse and matches the experiment less well. This is a very interesting point, but it could just be the case that there is insufficient training data.

    1. the limitations of Google’s definition show how we drain the world of texture when we strip out the subtleties.

      I see a parallel in recorded music. When the original Phonograph cylinders were invented in the late 1800s, they were the earliest commercial way to record and reproduce sound. People could play music wherever they wanted for the first time. The LPs that followed them didn’t have the depth of 45s or 78s, but they stuck around because they were easier to maintain. Music has since moved away from cassettes, CDs and now MP3 recordings which reduce file sizes by stripping away 75-95 percent of the original audio — the parts that are theoretically beyond people’s hearing capabilities. Reflecting on this trend towards convenience, the musician David Byrne wrote: “It’s music in pill form, it delivers vitamins, it does the job, but something is missing. We are often offered, and gladly accept, convenient mediums that are ‘good enough’ rather than ones that are actually better.”

      Don’t get me wrong: I’m all for the convenience offered by efficiency gains, and I still remember how magical it felt to hold my blue 4GB iPod mini in the 5th grade, knowing that it could play 1,000 different songs. It’s the second order effects that concern me. One study found that in the past 50 years, musicians have restricted their pitch sequences and reduced the variety in pitch progressions. During that same time period, the majority of pop music has embraced the same 4/4 beat structure — meaning there are four beats per bar and every new bar begins at a count of the first beat. It’s stuck because it’s the easiest signature to compose a song around. Fast and simple, just like the microwave. One writer called it “the largest scale homogenization of music in history.”

      Standardisation for convenience, and ease of maintenance, is what determines adoption, not really quality.

    2. This urge to microwavify the world isn’t limited to the food industry. In Technics and Civilization, the historian Lewis Mumford writes that our industrial mode of thinking has caused us to devalue the kind of intuitive knowledge that leads to beauty. He writes: “The qualitative was reduced to the subjective: the subjective was dismissed as unreal, and the unseen and unmeasurable non-existent… art, poetry, organic rhythm, fantasy — was deliberately eliminated.”

      As Mumford observed almost a century ago, the world loses its soul when we place too much weight on the ideal of total quantification. By doing so, we stop valuing what we know to be true, but can’t articulate. Rituals lose their significance, possessions lose their meaning, and things are valued only for their apparent utility.

      To resist the totalizing, but ultimately short-sighted fingers of quantification, many cultures invented words to describe things that exist but can’t be defined. Chinese architecture follows the philosophy of Feng Shui, which describes the invisible — but very real — forces that bind the earth, the universe, and humanity together. Taoist philosophy understands “the thing that cannot be grasped” as a concept that can be internalized only through the actual experience of living.1 Moving westward, the French novelist Antoine de Saint-Exupéry said: “It is only with the heart that one can see rightly; what is essential is invisible to the eye.” And in Zen and the Art of Motorcycle Maintenance, Robert Pirsig describes how quality can’t be defined empirically because it transcends the limits of language. He insists that quality can only be explained with analogies, summarizing his ideas as such: “When analytic thought, the knife, is applied to experience, something is always killed in the process.” All these examples use different words to capture the same idea.

      We try too hard to quantify and 'make scientific', but there are some things that just can't be measured. It is not that everything you can't see is unbelievable.

      Also, do we not have those words in English? Haha, instead we have 'quantity is a quality all its own'?

    1. The empath and the activist regard art fundamentally as a delivery system for messages and awarenesses. They believe that the output of an artwork, its effect on audiences, can be controlled and predetermined.

      This is why people are tired of other movies and loved Top Gun: because it's just fun.

    1. So I think in a lot of ways, it's kind of as close as they can get to, like, Indian fast food while still being obviously part of American culture.

      This never occurred to me. When I read the headline for the article, I just assumed there wasn't any real connection between Taco Bell and South Asian culture. The idea that it's a connection to fast food in India while still remaining connected to American culture opens up my thinking about this.

    1. “A lot of the stories out there are just wrong,” he told me. “The political echo chamber has been massively overstated. Maybe it’s three to five per cent of people who are properly in an echo chamber.” Echo chambers, as hotboxes of confirmation bias, are counterproductive for democracy. But research indicates that most of us are actually exposed to a wider range of views on social media than we are in real life, where our social networks—in the original use of the term—are rarely heterogeneous.
    1. Space is infinite, unbounded.  This doesn’t imply that the infinity is all represented, just that the concept allows for indefinite extension.  Finite space can be derived by adding a bound to infinite space; this is similar to Spinoza’s approach to finitude in the Ethics. Space isn’t a property of things in themselves, it’s a property of phenomena, things as they relate to our intuition.  When formalizing mathematical space, points are assigned coordinates relative to the (0, 0) origin.  We always intuit objects relative to some origin, which may be near the eyes or head.  At the same time, space is necessary for objectivity; without space, there is no idea of external objects.

      Space is unbounded but restrained by itself.

      External objects aren't defined by the origin. They are categorised when we carve out a precise portion of space and name it our system of study. Here we draw the lines and frontiers that define outside and inside.

    1. d. She puts the ideas together and tries to broker a deal for the conglomerate to acquire a radio network. At the end, she’s challenged to describe how she came up with the plan for the acquisition. It’s a telling scene. She has just been fired. On her way out of the building, with all her files and personal items packed in a box (a box just like mine!), she gets a chance to explain her thought process to the mogul:

      See? This is Forbes. It’s just your basic article about how you were looking to expand into broadcasting. Right? Okay now. The same day—I’ll never forget this—I’m reading Page Six of the New York Post and there’s this item on Bobby Stein, the radio talk show guy who does all those gross jokes about Ethiopia and the Betty Ford Center. Well, anyway, he’s hosting this charity auction that night. Real bluebloods and won’t that be funny? Now I turn the page to Suzy who does the society stuff and there’s this picture of your daughter—see, nice picture—and she’s helping to organize the charity ball. So I started to think: Trask, Radio, Trask, Radio.... So now here we are.

      He’s impressed and hires her on the spot. Forget the fairy-tale plot; as a demonstration of how to link A to B and come up with C, Working Girl is a primer in the art of scratching.

      The plot twist at the end of Working Girl (Twentieth Century Fox, 1988) turns on Tess McGill (Melanie Griffith) explaining her stroke of combinatorial creativity in coming up with a business pitch. Because she had creatively juxtaposed several disparate ideas from the New York Post, found several pages apart, she got the job, and Katharine Parker (Sigourney Weaver) is left embarrassed because she can't explain how she came up with such a complicated combination of ideas.

      Tess McGill (portrayed by a big-haired, 80's-era Melanie Griffith) packs a brown banker's box with her office items and papers as she leaves her office and her job. Is this Tess McGill's zettelkasten in the movie Working Girl?

      Tess McGill has slips of newspaper with ideas on them and a physical box to put them in.

      slips with ideas+box=zettelkasten

      Bonus points because she links her ideas, right?!

    1. It’s easy to assume that once you’ve identified the user’s problem, your job is done. You’re ready to build, get to work, create that magical solution that will get rid of your user’s problem

      As a designer, this is so true. We tend to forget that sometimes problems are not a big deal.

      "It’s a big enough problem to complain about but not important enough for you to take the time to solve."

    1. Far from emancipatory, this map was one of the earliest instances of the practice of redlining, a term used to describe how banks rated the risk of granting loans to potential homeowners on the basis of neighborhood demographics (specifically race and ethnicity), rather than individual creditworthiness.

      I'd never heard of the term redlining before, but it just makes so much sense as a concept. Doing this is actually very popular back home in Bulgaria, in the context of the Roma population asking for loans/credits. People are biased to believe that Roma people are scammers and too poor to be able to maintain any creditworthiness, so they deny them loans purely based on their ethnicity. I'd never really thought about it, but I think it's such a discriminatory way of determining who's "worthy" and who's not.

    1. However I think the user experience of having to generate a new key every two years and tell all your friends about it is quite bad, and will make people wonder why they have to do this.

      I wonder if there needs to be a protocol affordance? I love that Mastodon has a "move" feature -- even though it's incomplete, its existence makes federated social media less like just-another-silo and more within the user's control. Maybe if there's a standard client link to replace a follow relationship with another? Groups of boards more generally? Hmm

    1. These monuments purposefully celebrate a fictional, sanitized Confederacy; ignoring the death, ignoring the enslavement, and the terror that it actually stood for

      No, it's just a reminder of something that's already remembered. Everyone knows what happened; why have a monument that reminds African American people of one of the lowest parts of our history?

    1. A solid attempt at your first project! Here are a few pointers:

      1. I know that you've struggled quite a bit in getting this up and running. I think you just need more practice in JS. Try going back to any exercise that you might not have been super comfortable with (like say, functions in this project) and re-doing them on your own. This should help your understanding of javascript more!
      2. Maybe it's easier if you do your pseudocode before you attempt to code this out. It will help you map out what you want and what needs to be done. You might need to be fairly specific about it: something like --check whether player wins-- is not recommended, while something like --check if player has scissors and computer has paper; player wins if the condition is met-- is recommended.
      3. I like how you want to make it the most comfortable version, I can give you some pointers on how you can make that happen, and if you still want to experiment with this code, feel free to do so!
      4. You're very close to implementing the win/draw/lose percentage. What you can do is add a counter for how many times you've played the game, and divide the win/draw/lose amounts by that counter.
      5. For reverse game mode / Korean SPS: you can add more game states, or new win/lose checking for all the reversed / Korean SPS game states. Granted, Korean SPS is very hard to implement; only a very few people across all of the Basics batches have successfully done it.
      6. you can also add a new game mode for computers! if you want to do so! It's basically just 2 random rolls and check it against one another.
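
      Points 2 and 4 above can be sketched in plain JavaScript. This is only an illustrative sketch, not code from the student's project; all names here (determineWinner, recordResult, percentages) are hypothetical:

```javascript
// Point 2: be explicit about each win condition, rather than a vague
// "check whether player wins" step.
function determineWinner(player, computer) {
  if (player === computer) return "draw";
  if (player === "scissors" && computer === "paper") return "win";
  if (player === "rock" && computer === "scissors") return "win";
  if (player === "paper" && computer === "rock") return "win";
  return "lose";
}

// Point 4: count every game played, then divide each tally by the total
// to get the win/draw/lose percentages.
let wins = 0, draws = 0, losses = 0, gamesPlayed = 0;

function recordResult(result) {
  gamesPlayed += 1;
  if (result === "win") wins += 1;
  else if (result === "draw") draws += 1;
  else losses += 1;
}

function percentages() {
  if (gamesPlayed === 0) return { win: 0, draw: 0, lose: 0 };
  return {
    win: (wins / gamesPlayed) * 100,
    draw: (draws / gamesPlayed) * 100,
    lose: (losses / gamesPlayed) * 100,
  };
}
```

      For the reverse game mode in point 5, a sketch like this could simply swap the "win"/"lose" return values of determineWinner whenever the reversed mode is active.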

      I know that you can do what needs to be done on the next project! It might be hard, but keep working on it; if you don't understand a concept, do look at and re-do your pre-class/in-class exercises! I'm looking forward to seeing your project 2 submission!