10,000 Matching Annotations
  1. Last 7 days
    1. On 2019-10-14 07:14:20, user Jean-Lou Justine wrote:

      About "The first observations of virus-like particles in trematodes were reported by Jean Lou Justine and its team, studying parasites of mollusks and fishes (20, 21)."

      No trematode was involved here.

      One reference (21) is about a virus in a Monogenea - Microcotyle sp. (not Trematoda).

      One reference (20) is about a virus in Paravortex tapetis - Rhabdocoela (not Trematoda).

    1. On 2019-10-13 21:56:47, user Alexis Rohou wrote:

      I was asked to review this manuscript for a journal. Here are my review comments:

      Review by Alexis Rohou, Genentech
      13-Oct-2019

      High-quality, high-resolution cryoEM structures are becoming more achievable and may soon become routine for proteins and complexes amenable to purification and vitrification. Because of this, there is significant interest in cryoEM supporting structure-based drug discovery and design (SBDD), much as X-ray crystallography has done over the last few decades. One crucial step in the support of SBDD is the reliable and accurate modeling of small molecule or other ligands given an experimental map. This is a non-trivial pursuit in most cases, because the experimental 3D map resolution in ligand binding sites is generally not sufficient to unambiguously determine the pose of ligands in the absence of any other information or prior knowledge.

      In this manuscript, Robertson and colleagues describe their current workflow for dealing with this challenge, and test it against a number of publicly-available maps and atomic models that include small molecule and peptide ligands. Their workflow, which they name "GemSpot", includes a number of improvements or adaptations over previously-available tools. In addition, the authors describe and illustrate a number of potential pitfalls in modeling ligands into cryoEM maps, including the potential to identify the wrong ligand poses. While the authors' implementation is within the framework of Schrödinger software, the concepts and improvements introduced, and the lessons learnt, will be of interest to a wider range of readers, regardless of the software packages they use.

      For these reasons, I would recommend prompt publication.

      I do have several reservations and comments, listed below. If they were addressed, I believe the manuscript would be better, but other than the first one, I would not consider addressing them to be strictly necessary.

      (1) [must be addressed] I believe the authors are mistaken in stating that Bartesaghi et al., in their 2015 structure of beta-gal at 2.2 Å, ever used RELION for that work. I have looked again through that manuscript and as far as I can tell the main analysis was done using CTFTOMO and FREALIGN. I would encourage the authors to double-check. Perhaps they confused Bartesaghi et al 2015 with Kimanius et al 2016?

      (2) The word "pipeline" is used throughout, including in the title, to describe GemSpot. When I first read the manuscript, I was disappointed to slowly realize that in fact, there isn't much automation to GemSpot. To my mind, "pipeline" implies a degree of automation and a turnkey quality which is not evident here, and in fact this isn't really the point of the paper as I now understand it. Unless I am missing something, every step is triggered manually, with manually-adjusted parameters and expert decision making in between steps (e.g. selecting poses for further processing, deciding whether to attempt water placements, etc). If my reading is indeed correct, I wonder whether the authors may use "workflow" or "protocol" instead of "pipeline"? If I am incorrect, perhaps the authors could more strongly and specifically describe what parts of the process are automated and how turnkey they are. I do realize there is no universal and/or scientific definition of "pipeline", but if I felt that way reading it, I suggest that at least some other readers will do so.

      (3) "In addition, cryoEM reconstructions are vulnerable to spurious map features, currently evident with different software yielding noticeably different maps from the same dataset". Do the authors have specific examples in mind? If so, please cite - this would be valuable.

      (4) "This characteristic may arise from inaccuracies in image defocus estimation and correction of the contrast transfer function at high resolution, as well as variability in masking and weighting schemes employed in different software platforms for processing cryoEM data." Actually, I would have assumed that the main source of variability is the orientation (Euler angles) assignment.

      (5) "whereas a 1.9 Å map associated with the PDB:6CVM structure was obtained with CisTEM". This statement is not incorrect, but it leads to the wrong impression that cisTEM itself yielded the improvement to 1.9 Å, when in fact much of the improvement was derived from improved treatment of per-micrograph dose damage, per-particle motions, etc. outside of cisTEM itself. Perhaps say "was obtained with cisTEM and new tools for improved dose and motion correction"?

      (6) Page 10 "where docking without the EM map yields predominately poses". I think the authors meant "predominantly"

      (7) Page 11 "Accordingly, although our JAWS calculations predicted tightly bound water molecules in these structures, no such water molecules were located in the deposited maps". I'm curious whether including these water molecules would then have yielded better, or even just different, poses for the ligands.

      (8) Fig 2, panels d/e: The spatial relationships between the views in d and e are really not obvious at first. This is not helped by the mesh not being shown in panel d for the ligand. The authors should describe better how they modified the map values or the surface renderings in preparing the figures. Also, perhaps add visual hints to help readers understand the relative orientations of the two views?

      (9) Fig 4: Please add labels to the figure panels to help readers locate these features in the maps / structures, e.g. identify side chains.

      (10) Fig S3: maybe you meant it's related to Figure 4?

    1. On 2019-10-13 14:09:51, user William James wrote:

      Reference (1), Broderick, might better be attributed to Metchnikoff, who proposed precisely this relationship! Broderick herself references Tauber's review (8), which itself cites the Master. Happy to provide the citation to the original if you can't find it...

    1. On 2019-10-12 10:42:48, user Matt wrote:

      The paper refers to a Table S2, Supplementary File 1 and Supplementary File 2, which do not seem to be uploaded online? These could be useful to assess which samples are being examined.

    1. On 2019-10-11 20:27:54, user Stefan Barakat wrote:

      Thanks for the interest in our work. In case you are aware of patients with the same variant in UGP2, please feel free to contact us. We are aiming to include more cases upon revision of the manuscript that is currently under peer review.

    1. On 2019-10-11 16:05:36, user Bramwel Wanjala wrote:

      With more than 30 sweetpotato viruses reported, it is mostly the effect of SPCSV and SPFMV on yield that has been studied. This study gives an insight into how the commonly occurring Begomovirus impacts yield. More work needs to be done on combinations with other viruses to help manage the effect of viruses on yield.

    1. On 2019-10-11 12:48:23, user Justin Perry wrote:

      Nice aggregation of data, but I'm a bit perplexed why more rigorous statistics were not performed, especially for Fig. 4. Wouldn't analyses such as classic regression modeling or ROC curve A.U.C.s be more informative? One could imagine two outcome variables allowing for different types of multiple regression, 1) received offer yes/no, and 2) number of offers. I also think it's worth correcting for number of submitted applications as well as number of interviews proportional to the number of offers, as just looking at number of offers is misleading given the fairly extreme distribution of applications submitted.
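      The ROC-curve suggestion can be made concrete. Below is a minimal pure-Python sketch computing the AUC of a hypothetical predictor (number of interviews attended) for a binary outcome (received at least one offer). All numbers here are invented for illustration and are not taken from the paper's data:

```python
from itertools import product

# Hypothetical applicant records: (interviews attended, received >= 1 offer).
# These values are illustrative only, not drawn from the study.
applicants = [
    (0, False), (1, False), (2, False), (1, False), (3, True),
    (4, True), (2, True), (6, True), (1, False), (5, True),
]

def roc_auc(records):
    """AUC of a scalar predictor for a binary outcome, computed as the
    Mann-Whitney probability that a random positive case outranks a
    random negative case (ties count as half a win)."""
    pos = [x for x, y in records if y]
    neg = [x for x, y in records if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

auc = roc_auc(applicants)  # 0.98 for the toy records above
```

      The same records would also support the two multiple-regression framings the comment proposes (logistic on offer yes/no, count regression on number of offers), with applications submitted as a covariate.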

    1. On 2019-10-11 08:42:29, user Ximena Velasquez Pedrosa wrote:

      Dear Wu et al.

      Your paper is very interesting. Thank you for the pre-print version. However, I could not find the methods section. I would be very grateful if it would be possible to share the methodology of the paper.

      Thank you
      Ximena Velasquez
      PhD student
      vximena@campus.technion.ac.il

    1. On 2019-10-10 18:24:42, user Eric Fournier wrote:

      Hello!

      First off, good work! You've got a very interesting paper.

      I was wondering, what proportion of your libraries (both Ribo-seq and RNA-seq) map to the viral genome vs the human genome?

    2. On 2019-08-27 11:18:10, user Anna Fogdell-Hahn wrote:

      Really nice work!

      I have two questions:

      1) Are the similarities between the HHV-6A and B genomes still an average of 90%, and around 60% at their least similar gene, like IE1? Or do we need to update those figures now?

      2) How host-cell dependent is your result? If you were to culture A and B in, for example, SupT1, would you expect the result to be quite different?

    1. On 2019-10-10 15:22:57, user Peter-Bram 't Hoen wrote:

      Thank you very much for an impressive effort and a great resource. The GTEx project is a primary example of how rewarding collaborative research efforts are. The leading data analysts have demonstrated rigor in their analyses, with many different and complementary approaches. The senior authors have demonstrated great leadership.

      A few critical comments from my side to help improve the paper before publication in a peer-reviewed journal.

      1. I miss a pan-tissue analysis of eQTLs, where the tissue-specific expression levels, and possibly even tissue:eQTL interaction effects, are taken into the model. This should have more power than analysis at the level of individual tissues, in particular for sQTLs, which are shown to be less tissue-specific. The interaction effects may reveal more tissue-specific eQTLs than currently identified.

      2. I find the statement that “77% of the trans-eVariants that are also cis-eVariants appear to act through the cis-eQTL” a bit misleading, as around 50% of the trans-eVariants are not a cis-eQTL in the first place. Furthermore, it may be that mediation analysis on the trans-eVariants that do not meet the cis-eQTL threshold still shows a significant mediation effect.

      3. I do not understand why the correlation between cis-eQTL effect size and gene expression is almost as likely to be negative as positive. This would be rather logical if the authors had calculated this based on the effect size itself (as the allelic effect can be both negative and positive), but from the text and figure 6 I gather that they have worked with the absolute effect size (although the paper does not formally state this). Can the authors provide plausible reasons for a negative correlation between the expression level and the cis-eQTL effect?

    1. On 2019-10-10 13:50:21, user Sebastian Pfeilmeier wrote:

      Endophytes are coming into the focus of microbiota research, as they are in close contact with the plant and the two organisms are likely to influence each other. I was wondering whether it would be even more informative to compare not only the "diversity" of different plant species and tissues, but also to look for highly abundant taxa that are commonly found as endophytes in each plant species/tissue. After making the effort of collecting all the raw data from various studies, would it be possible to do this analysis?

    1. On 2019-10-09 20:54:16, user Yibing Shan wrote:

      The stated 120 Å receptor-receptor separation in the crystal FERM dimer model was a mistake. The concern about that model may have to do with the position of K279 of EpoR, which is only 5 residues away from the transmembrane helix but some 20 Å away from the membrane.

    2. On 2019-09-20 10:33:38, user Julie Tucker wrote:

      A great read - modelling as it should be done; informed by and informing biological insight and mutational studies. Would be good to see some statistical significance testing on Figure 5. And are the suppressor mutants still responsive to cytokine stimulation? Perhaps this information is in the supplemental, which I have yet to find on bioRxiv.

    3. On 2019-09-16 05:05:44, user Andrew Brooks wrote:

      Interesting article. I found it a little surprising that reference 54 (Ferraro et al.) is only mentioned once. It would be good if there was some further discussion of that article in relation to this publication. In reference to that article, the manuscript states "but this model suggests a separation of 120 Å or more between the receptor transmembrane helices". It would be great if this could be clarified further, as I could not see the basis for the 120 Å. In Ferraro et al., Fig 6b shows the EPOR Trp283 residues to be 44 Å and the Trp872 residues of LEPR to be 45 Å apart. Therefore I could not see the basis for a separation of 120 Å between the transmembrane helices based on the data presented by Ferraro et al.

    1. On 2019-10-09 12:02:39, user ChrisB wrote:

      Nice-looking paper. It's a pity that the only resources provided so far use hg19, though. It would be great if this could be used with the current version of the genome. Looking forward to updates.

    1. On 2019-10-08 19:06:57, user QuiPrimusAbOris wrote:

      Such clone tracking tools will become more and more important with the spread of single-cell transcriptomics and the accessibility of more sophisticated cell cultures, including organ-on-chip. They will bring snapshot analyses, such as scRNAseq, into the realm of cell-fate history tracking. There are many similar tools in the pipelines of academic and industrial labs. Surprising that there is so little discussion (and comparison) of prior similar art. Such a discussion would have been very useful.

    2. On 2019-10-08 18:58:57, user Sui Huang wrote:

      Interesting tool. This is not at all just about rare clones! If appropriately applied, it will offer a new vista onto cancer cell population dynamics and challenge the existing paradigm of clonal expansion driven by driver mutations. Not sure if the authors had that in mind, but Amy Brock and her coworkers had previously designed a similar tool with such a vision in mind. For instance see: ACS Synth. Biol. 2018, 7(10), 2468-2474.

    3. On 2019-09-09 20:07:12, user Amy Brock wrote:

      The authors should cite the following publication from 2018, describing the development of a very, very similar method of rare clone retrieval from heterogeneous cell populations, using a library of CRISPR sgRNA barcodes (ACS Synth. Biol. 2018, 7(10), 2468-2474, https://pubs.acs.org/doi/10...

    1. On 2019-10-07 12:09:51, user Antonio Tugores wrote:

      Very interesting. Thank you. We now have more patients with this mutation, and it indeed segregates with the disease, so we agree that it is a mutation. I see that alternative splicing at this exon seems to be a "natural thing": figure 1 shows that both the 5-6-7 and 5-7 alt splice forms coexist in a wt (1934T) setting. Is that right?

    1. On 2019-10-05 15:54:37, user Dr. David Ludwig wrote:

      BMJ STUDY CO-AUTHOR RESPONDS

      In the multiple versions of their preprint here, and in a final version in International Journal of Obesity, Hall & Guo criticize our BMJ study that showed higher energy expenditure on low- vs high-carbohydrate diets.

      We now respond in full in the same journal (full text linked here):
      https://www.nature.com/articles/s41366-019-0466-1

      We show that all these criticisms are fundamentally misleading or simply wrong. Specifically, we consider why:

      1. The post-weight-loss baseline is the most appropriate for studying metabolism during weight-loss-maintenance.

      2. The change in our registry was proper.

      3. Non-adherence would not plausibly account for our findings, based on several sensitivity analyses.

      In summary, there is substantial support for the carbohydrate-insulin model of obesity, and assertions of having disproven ("falsified") the model are without merit. At the same time, admittedly, neither is the model proven. Pending additional high-quality studies with complementary designs, scientists on both sides of the debate would do well to avoid premature conclusions.

    1. On 2019-10-05 06:19:44, user luan ngo wrote:

      I'm sorry, I doubt that I fully understand the article due to my lack of knowledge of the field and because my native tongue is not English. May I ask (in plain language) whether this new research gives more hope for growth plate regeneration and adult height increase in the foreseeable future?

    1. On 2019-10-03 19:11:12, user Simon Moore wrote:

      Can verify this is a nice E. coli cell-free protocol modification, as the results are repeatable - two new MRes students in my lab got it working first time.

    1. On 2019-10-03 16:03:33, user Glycan Boy wrote:

      My immediate reaction to something so novel is that it could be due to contamination, and in this case, the contaminant could be N-glycosylated peptides. Although the paper claims that the "glycan-RNA linkage was not sensitive to stringent protocols to separate RNA from lipids and proteins including organic phase separation, proteinase K treatment and silica-based RNA purification", they did not directly address this obvious issue by providing explicit evidence for lack of contamination from multiple, orthogonal, fit-for-purpose methods. Maybe they will when it's actually published, but if I were the reviewer, this would be the major issue that has to be addressed.

    1. On 2019-10-03 15:00:03, user Marcos Escosa wrote:

      Good morning,

      High-grade glioma cells consume mainly glucose and cannot compensate for glucose restriction. Apoptosis may potentially occur under carbohydrate restriction by a ketogenic diet (KD).

      The use of KD as an adjuvant to standard treatment, with chemoradiation after first surgery, is feasible and safe in patients with glioblastoma multiforme (GBM). The value of these studies should be regarded in light of upcoming metabolic therapy trials which will be performed in several types of cancer. Specifically, in GBM, there are three studies ongoing.

      The median survival duration of patients with GBM is 15 months after multimodal therapy combining surgery, radiotherapy and chemotherapy. As high-grade glioma cells consume mainly glucose, dietary carbohydrate restriction has been suggested as a possible therapeutic strategy to improve the survival duration. In recent in vitro and in vivo studies, cancer growth was inhibited by the ketosis and increased lipolysis induced by low-carbohydrate diets. It seems, therefore, that GBM cells do not compensate for glucose restriction, whereas normal brain cells do so by metabolizing ketone bodies. Apoptosis may potentially occur under carbohydrate restriction. An extremely carbohydrate-restricted diet, the ketogenic diet (KD; a high-fat, low-carbohydrate diet), could be of interest because it mimics the metabolic response to starvation, when ketones become the main fuel for the brain. Although low-carbohydrate intake alone has been found effective on survival in the treatment of GBM in several animal models and in vitro studies, combining current therapies with KD was even more effective.

      Dr. Marcos Escosa

    1. On 2019-10-03 13:28:33, user Anton Larsson wrote:

      Hello! I can't seem to find the supplementary information anywhere (the supplementary figures and tables). Can you please provide them? Thanks!

      Anton

    1. On 2019-10-03 08:38:43, user HaitinLab wrote:

      Happy to share our new manuscript, now available on bioRxiv! Project led by Ariel Ben-Bassat: Structure of KCNH2 cyclic nucleotide-binding homology domain reveals a functionally vital salt-bridge.

    1. On 2019-10-03 00:58:21, user ppgardne wrote:

      I reviewed this article for another journal; it has since been published without addressing any of the significant concerns I raised regarding their method. I am also one of the authors who first proposed the RAFS metric. There are serious issues with how this measure has been applied, in addition to those shown by Professor Eddy, which I detail below.

      Tavares et al present an analysis of covariation measures in taxonomically restricted long non-coding RNAs. This is an important research area, as most covariation analysis of ncRNAs is carried out in very deep alignments which contain ample amounts of sequence divergence, the analysis of which allows for confident estimations of conserved RNA structures.

      In principle, genome variation in structured regions from within a population, or between closely related species, should also show signatures of negative selection acting upon regions requiring RNA structure for function.

      Unfortunately, this paper contains numerous fundamental scientific problems and fails to address the underlying question. Scientific questions should be falsifiable; that is, experiments should be designed to exclude a hypothesis, yet the authors have instead designed an analysis to prove their favoured hypothesis.

      There are several indications throughout the manuscript that the authors consider more basepairs to be "better", leading the reader to conclude that the authors are not overly concerned with the potential for false findings with their analysis. As more sequence information becomes available this should be of considerable concern to all researchers in this field.

      1. The problems begin with the dataset selection. Transfer RNA, 5S, RNase P, U2, U5, 7SK and SSU are sensible choices for studying covariation in mammalian species. However, Aphthovirus and SAM-I, a viral and a bacterial family respectively, were also included without explanation.

      2. Many of the test datasets (e.g. tRNA & U2) are known to be frequently pseudogenised. Did the authors take any steps to exclude pseudogenes from the Rfam alignments?

      3. Sliding windows (of 50-500 nts) and step sizes of 25 or 10 were used. However, no corrections for (or discussion of) multiple testing were made. E-values can in principle be converted into P-values; these can in turn be corrected for the number of times base-pairs have been tested for signal with the sliding window methodology.

      4. A selection of the many R-scape parameters were trialled, with one parameter setting apparently returning a high number of "significant" basepairs. This approach appears to be a cherry-picking exercise.

      As independently pointed out in the comments to the Tavares et al pre-print on bioRxiv, the APC-RAFS parameter setting is highly problematic. In cases where 3 neighbouring columns with 100% conservation are matched by another 3 100% conserved columns that happen to be Watson-Crick or G:U basepairs (e.g. GUU and CAA columns), then a significant result is guaranteed with the APC-RAFS measure.

      The RAFS metric itself is defined as "0" in this situation (i.e. no variation). However, the "average product correction" gives it a non-zero value. This is then compared to a null distribution generated by randomised alignments (using a row and column shuffling procedure) that preserves the phylogenetic and block-like structure of the original alignment. These null values are all much lower than 0, hence a "significant" E-value results, especially if multiple testing is employed.

      The row/column shuffling procedure is the wrong null distribution for the RAFS measure. This is analogous to the problem pointed out by Workman and Krogh (1999) with the Seffens and Digby (1999) study. The RAFS measure is dependent upon neighbouring nucleotides, therefore the alignment randomisation strategy should try to preserve the tri-nucleotide frequencies in the alignment. Some attempts have been made to preserve di-nucleotide frequencies in alignments (e.g. Multiperm by Anandam et al and SISSIz by Gesell and Washietl). But preserving tri-nucleotide (and possibly di-nucleotide) frequencies in alignments may not provide sufficient combinations to ensure a properly randomised null.

      5. The inclusion of tRNAs from the Rfam seed is problematic. This alignment includes different tRNA isotypes that diverged prior to the appearance of the last universal common ancestor. Therefore, a large evolutionary distance separates these. Yet the analysis is conducted as if these diverged recently.

      6. This sentence is unjustified: "Given that R-scape uses the entire length of an RNA sequence for analysis, it is possible that the presence of regions with poorly defined structure negatively impacts the ability of R-scape to identify structural conservation in the absence of a consensus annotation". The covariation metrics themselves are not dependent on whether or not flanking sequence is included. The significance of the result may be lower with a longer sequence, due to a larger null distribution. This is analogous to homology search (e.g. BLAST) with a small vs a large sequence database. Sequence similarity is unchanged by the size of the database, but the E-values are larger with a larger database by definition.

      7. More problematic phrasing: "A windowing approach improves R-scape’s performance on long RNA alignments" and "indicating that the R-scape default parameters work better on short alignments" -- the windowing increases the number of times columns i and j are compared, therefore increasing the chance of significant results (and of false positives).

      8. Given the parameter settings it is surprising that only one "covarying" basepair was found in RepA; this seems likely to be a false positive. I note that the alignment has not been provided, so we cannot verify this statement either.

      9. References 18-20 largely do not use RAFS; they largely use RAF. None of them use APC-RAFS.

      10. ROC plots (or the ROC-like alternative, sensitivity vs PPV) should be used rather than the plots in Supp. Figure 3.

      11. The comparison of APC-GT and APC-RAFS largely confirms the results supplied in the Rivas et al (2017) supplementary materials. APC-RAFS has high sensitivity, but lower PPV than APC-GT. Given the difficulty of generating an appropriate null distribution for APC-RAFS (as outlined above), I believe Rivas et al. are 100% correct in favouring APC-GT for default R-scape use.

      12. The alignments (in plain text format) have not been made available with this analysis. Without these, none of the results are verifiable. ALL the alignments should have been made available for further verification of the analysis.
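      To illustrate the multiple-testing point (3) above: converting an E-value to a P-value and applying even a crude Bonferroni correction for the number of overlapping windows can erase apparent significance. A minimal sketch with invented numbers (not values from the Tavares et al analysis):

```python
import math

def evalue_to_pvalue(e):
    # For an E-value (expected number of false hits), the chance of at
    # least one false hit is P = 1 - exp(-E); for small E, P ~= E.
    return 1.0 - math.exp(-e)

def bonferroni(p, n_tests):
    # Crudest multiple-testing correction: scale the P-value by the
    # number of times each basepair was tested across sliding windows.
    return min(1.0, p * n_tests)

# Invented numbers: a basepair with E-value 0.01 looks significant on
# its own, but not after correcting for 40 overlapping windows.
p = evalue_to_pvalue(0.01)
p_corrected = bonferroni(p, 40)
```

      More refined corrections (e.g. Benjamini-Hochberg) would be less conservative, but the qualitative point stands: per-window E-values are not directly comparable across different window and step-size settings.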

    1. On 2019-10-02 21:53:50, user Marcos wrote:

      How do you know the effect is from the high-fat diet itself, and not the calorie density of the food? Equalizing the calorie density with fiber may result in highly different results, despite the differences in the percentage of fat in the diet.

      https://mekineer.com/health...

    1. On 2019-10-02 14:19:18, user Tom & Tom wrote:

      This paper was submitted to a journal and was rejected with the following main concerns (paraphrased here by the manuscript authors):

      1) The paper lacks details on the custom microphone used, and the testing protocol, making reproducibility of our experiments impossible.

      2) The paper should have included a more extensive assessment of the drones in real-world data collection scenarios to demonstrate their effectiveness.

      3) More detail on the drones' construction is needed so that others can reproduce them.

      4) More detail is required on the exact modifications made to reduce noise, and quantification of the impact this had on signal-to-noise ratio.

      5) More detail is needed for the real-world tests, including flight duration, adherence to regulation, calls per unit time, and comparison to static or 'on-foot' surveys of the same location.

    1. On 2019-10-02 11:47:37, user Danielle Kurtin wrote:

      Hello,

      Thank you for preprinting this paper; it's been useful in the literature review I'm conducting for the start of my PhD.

      I noticed a few typos in the manuscript. For example, the first sentence of the introduction begins as "Despite most neuroimaging studies still tend to treat human brain features as stable and homogeneous characteristics within a group, it is important to highlight that, in contrast, individual variability may play a relevant role in this context [1] [2]." Perhaps the following may be more correct: "Most neuroimaging studies tend to treat human brain features as stable and homogeneous group characteristics; however, it is important to highlight that individual variability may play a relevant role in this context [1] [2]."

      Let me know what you think, and thank you again!

      Cheers,
      Danielle

    1. On 2019-10-02 11:43:59, user Ken Smith wrote:

      It is great to see this out, as we had discussed it in February after you had seen our description in BioRxiv last year (https://doi.org/10.1101/499... of a similar PTPN2 haploinsufficiency associated with PID. It is interesting that you have also linked this to GWAS – validating the approach we used to discover our association. I hope there is room to cite this in your upcoming paper.

    1. On 2019-10-02 00:00:24, user Bradley wrote:

      Interesting article. I have a few questions: Does CCN not form a C-mannosylation as with other TSP1 domains, and if so could this contribute to the differences from the canonical TSP1 fold observed? How certain can you be that the refold has not shuffled the disulfides? If I were reviewing this paper, that's what I would want to know as far as validation.

      Any comment on why MAD was performed over MR? Maybe mention in methods?

    1. On 2019-10-01 19:07:39, user Robert Flight wrote:

      OK, just to be clear, when applying the glm-pca, one should use raw non-normalized data? I'm thinking of trying this with some metabolomics data that suffer from similar noise issues (Poisson or proportional type error, lots of zeros), and we usually normalize by some type of global abundance measure and log-transform before PCA.
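      For intuition on the raw-counts question, here is a from-scratch toy Poisson GLM-PCA fit by gradient ascent in NumPy. This is an illustrative sketch on simulated data, not the interface of any published glm-pca implementation: the point is that the model is fit directly on raw, non-normalized counts, with no log-transform anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated raw counts: genes x cells, with low-rank structure on the
# log-rate scale plus Poisson noise (no normalization applied anywhere).
n_genes, n_cells, rank = 200, 100, 2
eta_true = (rng.normal(0, 0.5, (n_genes, rank))
            @ rng.normal(0, 0.5, (n_cells, rank)).T + 1.0)
Y = rng.poisson(np.exp(eta_true)).astype(float)

def poisson_glmpca(Y, rank, lr=1e-3, n_iter=500):
    """Toy GLM-PCA: factorize log E[Y] ~= U @ V.T under a Poisson
    likelihood, by alternating gradient ascent on U and V."""
    n, m = Y.shape
    U = rng.normal(0, 0.1, (n, rank))
    V = rng.normal(0, 0.1, (m, rank))
    for _ in range(n_iter):
        mu = np.exp(np.clip(U @ V.T, -20, 20))  # current mean estimate
        U = U + lr * ((Y - mu) @ V)             # d(log-lik)/d(eta) = Y - mu
        mu = np.exp(np.clip(U @ V.T, -20, 20))
        V = V + lr * ((Y - mu).T @ U)
    return U, V

def poisson_deviance(Y, mu):
    # 2 * sum( y*log(y/mu) - (y - mu) ), with 0*log(0) treated as 0.
    t = np.where(Y > 0, Y * np.log(np.where(Y > 0, Y, 1.0) / mu), 0.0)
    return 2.0 * np.sum(t - (Y - mu))

U, V = poisson_glmpca(Y, rank=2)
dev_fit = poisson_deviance(Y, np.exp(np.clip(U @ V.T, -20, 20)))
dev_init = poisson_deviance(Y, np.ones_like(Y))  # eta = 0 starting model
# The fitted low-rank Poisson model should explain the raw counts far
# better than the flat starting model.
```

      Whether to log-transform first is exactly what the GLM likelihood sidesteps: the exponential link absorbs the count data's mean-variance relationship, so raw counts are the intended input; a pre-normalized, log-transformed matrix would violate the Poisson assumption.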

    1. On 2019-10-01 16:21:31, user James Albert wrote:

      This is a fascinating study in several regards. One thing that caught my eye is the big red dispersal vector in Fig. 5 from the Amazon to the Atlantic Forest at about 10 Ma. Although paleogeography is not discussed in this paper, this dispersal event is coeval with the origin of the modern transcontinental Amazon river, an epochal event that allowed the diverse biota of the Western Amazon to colonize the Eastern Amazon and its major tributaries on the Brazilian Shield and Dry Diagonal.

      See e.g. Albert, J.S., Val, P. and Hoorn, C., 2018. The changing course of the Amazon River in the Neogene: center stage for Neotropical diversification. Neotropical Ichthyology, 16(3): e180033[1]

    1. On 2019-09-30 16:07:28, user Soham Mukhopadhyay wrote:

      Very interesting work! Have you tried chemically inducing the NLRs in Col-0 before infection or silencing those in Bur-0 to establish direct involvement?

    1. On 2019-09-30 14:43:15, user Alpina Begossi wrote:

      This is a study under revision; some explanations are being included, especially regarding the effort to catch groupers, which has been the same since 2008. This study will also have its title changed (taking out the CS and LK) and will focus directly on the grouper year comparisons. The new title should be:

      "A sustainable fishing of dusky grouper (Epinephelus marginatus) in the small-scale fishery of Copacabana, Rio de Janeiro, Brazil".

      Alpina Begossi, September 30, 2019.

    1. On 2019-09-30 05:17:29, user Shapin wrote:

      According to the title you are evaluating the influence of Road Traffic Delays (RTDs) on Musculoskeletal Health Complaints (MHCs), but reading the article, it seems to me that you are evaluating the influence of commuting time on MHCs. You evaluated this subjectively, by asking participants whether they experience traffic congestion (Yes/No), but does that really capture RTDs? It is a subjective issue: for many people in Dhaka, time spent commuting may have become a normal, everyday matter until they experience really bad traffic congestion. You can see from your study that almost half of the sample commute to offices within 6 km, yet for almost 50% of them it takes more than 30 minutes, and still many report not experiencing traffic congestion (45.1%). Have you checked what their actual travel time in Dhaka should be without traffic congestion? As far as I understand, it would be better to frame this as the influence of commuting time/distance rather than of RTDs. Alternatively, you could show the actual time used versus the ideal (hypothetical) time, calculated from the average allowed traffic speed in Dhaka city and the distance, and then make claims about traffic delays.
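The suggested "actual vs. ideal travel time" check is easy to sketch; the speed and times below are hypothetical placeholders, not figures from the study:

```python
# hypothetical worked example of "actual vs. ideal travel time"
distance_km = 6.0        # commute distance reported for about half the sample
observed_min = 30.0      # reported commute time
free_flow_kmh = 25.0     # assumed congestion-free average speed (hypothetical)

ideal_min = distance_km / free_flow_kmh * 60.0  # ideal (hypothetical) travel time
delay_min = observed_min - ideal_min            # excess attributable to congestion
print(f"ideal = {ideal_min:.1f} min, delay = {delay_min:.1f} min")
```

Only the excess over the ideal time can reasonably be called a road traffic delay.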

    1. On 2019-09-29 01:31:54, user Giulio Formenti wrote:

      The content of the article is very interesting. However, it is unclear why the Authors used an unpublished draft genome rather than the published high-quality barn swallow genome (https://academic.oup.com/gi.... In my experience, using high-quality genome assemblies provides much better (and sometimes different) results in resequencing experiments such as those described in this paper.

    1. On 2019-09-27 23:09:19, user Em wrote:

      Hello, is this a preprint of the article "Mapping Local and Global Liquid Phase Behavior in Living Cells Using Photo-Oligomerizable Seeds", which was published in Cell (https://doi.org/10.1016/j.c... I was curious because the titles are similar, albeit slightly different, and the author lists are almost identical, except that there is an additional author listed on the published article in Cell, Jose L. Avalos, who was not listed on this bioRxiv article. So I just wanted to know whether this is the preprint of that article, and whether Jose L. Avalos contributed to the preprint listed here on bioRxiv. Many thanks for your contributions and for making this content freely and publicly available!

    1. On 2019-09-27 16:25:30, user disqus_K7Cdvz97Tw wrote:

      Overview and Feedback: The purpose of this experiment was to compare nutritional quality and forage yield of organic winter wheat and rye when applied to grazing conditions. According to Philips et al., winter rye forage yield was greater than that of winter wheat but rye yield began to decrease as time progressed. Winter wheat had a greater crude protein content and alpha-linolenic acid concentration decreased in both forages, although concentrations decreased more so in winter wheat. Based on these results, it is suggested that producers should begin grazing early in the spring so as to provide cattle with forages that are more digestible and nutritious, particularly in the realm of fatty acid incorporation. While these ideas might not necessarily be novel in a conventional system, the application of these principles to organic or grass-fed systems could, ideally, fill the ever-widening knowledge gap regarding organic cattle production as transition to these systems is outpacing current research. The ideas that are the foundation of this manuscript are solid and future research projects could be further developed from these ideas (i.e., fatty acid content of forages utilized for fall grazing systems, alternative forage nutrient screening, etc.).

      Major Comments

      The focus of the manuscript is unclear (i.e., does it target forage fatty acid composition, temporal data, or animal nutrition?), and it was therefore difficult to follow. It would be in the best interest of the authors to focus on one aspect and devote more detail to that specific topic. For example, consider removing the animal component of this study and focusing on the effect of temporal relationships with winter forages on nutritional quality. This component was one of the most confusing aspects of the manuscript, as there was hardly any mention of an animal study (and particularly its results) in the abstract. If the decision is made to keep the animal component as part of the manuscript, major clarifications and revisions need to be made.

      Consider removing all data and information related to the fatty acid composition portion of the manuscript. It currently detracts from the manuscript as a whole because the statistical analysis, figures, tables, and general information are disjointed.

      Title
      • Does not flow well and is not truly representative of what is presented within the paper; the manuscript does discuss forage yield, quality, and fatty acid content, but date is also treated as a major variable.
      • Remove scientific names from the title and consider making it shorter.
      • Keywords are missing; consider adding three to six relevant words.

      Abstract
      • There needs to be some commentary or statements linking sample collection to statistics.
      • Include more information about the materials and methods.
      • It is stated that the objective of this study was to assess yield, nutritional quality, and fatty acid composition of winter forages. Fatty acid composition is a subdivision of nutritional quality and can be removed.
      • The abstract reads more like a brief results section and is too long. Suggest removing the p-values and shortening the results.
      • Based on the results presented, the conclusions could be considered an overreach.

      Introduction
      • Consider re-ordering the introduction to create a better "funnel" of information. The elements for making this a strong, fluid introduction are there, but the order of topics does not flow well and there is a strong disconnect that makes it difficult to follow.
      • Please include the hypothesis.

      Materials and Methods: Please include more detailed information (questions included in the minor comments) and more background.
      • L 150 – 151: Fatty acid content should be calculated by forage DM.
      • L 153 – 159: Please include important information pertaining to the statistical analysis of the fatty acids. Please include the p-value designation and explain how the models were built.

      General Results and Discussion: Much of this section is written as a description of Tables 2 and 4. Suggest keeping either the tables or the descriptions, but not both.
      • L 162 – 173: Please provide examples from previous studies. Please note that while meta-analyses are great resources, they cannot be used as a study per se to refer back to when comparing results.
      • L 237 – 247; 262 – 263; 275 – 276; 287 – 288: Suggest not making hasty recommendations or conclusions based on the current numbers, especially when there is no supporting literature cited. The paragraphs regarding minerals are rather lengthy. Consider removing them or condensing them into a table or figure to simplify the information.
      • L 358 – 382: This section does not contribute to the overall results and discussion. Please consider breaking this paragraph up and placing relevant sources and discussion material in other areas of the results and discussion.

      Minor comments
      • L 2 – 4: This is a direct repeat of the title. Please reword or rephrase.
      • L 5: Please be consistent with numbering.
      • L 8 – 11: The statistical statement needs to be separated or re-written.
      • L 20: 2.49 times could be rounded up to 2.5 and should be stated in another way.
      • L 25: Remove citation 1 or place it in the appropriate location.

      Headings and Subheadings:
      • L 73 – 77: Combine with the animal information provided in L 92 – 95.
      • L 73 – 74: Please indicate whether the conventional and organic cattle were kept separate, and whether the land was managed organically.
      • L 77 – 82: Please remove these lines.
      • L 83 – 90: Please add additional pertinent information (e.g., what forages were planted? how, when, and where were they planted? soil type and consistency?).
      • L 88 – 90: Please rephrase (the first part of this sentence is difficult to follow).
      • L 92 – 93: Please include average animal weight (if the animal portion is kept).
      • L 94 – 95: Please reword and include how the animals were cared for.
      • L 97: Suggest adding a table or figure with the organic TMR diet composition (unless the authors decide to remove the animal portion entirely).
      • L 107: Please justify rotating paddocks every 3 – 4 days (rather than a more, or less, intensive rotation).
      • L 115: Please move data regarding weather results to the results section.
      • L 125 – 128: Please include additional information (i.e., where samples were collected from, and growing conditions/locations).
      • L 132 – 134: Please move to the statistical analysis section.
      • L 136: This sentence needs clarification. One sample of what? A sample of forage from each paddock? How was it determined that essentially an n = 1 from each group within paddock was enough for statistical relevance?
      • L 147: Please include the in vitro procedures that were used to collect TTNDFD.
      • L 157: Change standard deviation to SEM.
      • L 153: Remove citation number 20.
      • L 222 – 224: Consider explaining why this may have occurred.
      • L 292 – 295: See the comments regarding Table 3. If the authors decide to keep the results as is, please consider adding literature in support of, rebutting, or explaining the data presented.
      • L 310 – 320; 324 – 331; 337 – 334; 349 – 351: These paragraphs appear unnecessary. Suggest either removing them or the associated tables, and providing literature that supports or refutes the observations reported.

      Tables and Figures
      • Please revise the significant digits for all tables and figures for consistency.
      • Please replace blank spaces in tables for non-significance with a dash or NS.
      • Table 1: Reformat with calculated SEM or SD.
      • Table 2: Reformat with a new title. Some data points are significantly different, but are they biologically significant?
      • Table 3 does not stand on its own very well. Consider merging it with Table 4 if kept.
      • Table 4: Are negative amounts of FA something that can be calculated? Is this correct? Is the lack of significant differences for stearic and behenic acid correct?
        o Consider removing Tables 3 and 4 or correcting them.
      • There are too many figures. Some could be combined and others could be removed.

    1. On 2019-09-27 00:18:55, user Fraser Lab wrote:

      The major goal of this paper is to introduce a modern energy function (AMBER) into the very popular and powerful PHENIX software for macromolecular structural biology. The history of using an energy function in crystallography in particular is long - and more recent results suggest that it mostly helps with geometry but doesn’t give “breakthrough” results in terms of R-free improvements (see earlier work by Fenn and Schnieders, anecdotal examples from CNS, etc). This is paralleled here, where they demonstrate improvements in Ramachandran, rotamer, and clash statistics, but fare no better, or even a little bit worse, in R-free.

      This work contains a ton of under-the-hood linking of two foundational codebases in structural biology (AMBER and PHENIX/cctbx). We are most excited about the future applications to ensemble refinement, simulated annealing, and real space refinement, but publication about the process of tying them together and the geometry improvements demonstrated in phenix.refine is timely.

      There are several matters that could be clarified:

      * What is the licensing? We infer that no Amber license is needed, but this is unclear.
      * More detail would be useful around small molecule parameterization. It wasn't clear how small molecules, if at all, are handled in this implementation. Are GAFF parameters allowed to be generated on the fly for the small molecules? As described briefly around line 136, how many work and how many fail? What trends can be drawn out here? E.g., are there ligands and/or proteins for which you would recommend not using Amber refinement? An AmberPrep paper is promised for the future, but a bit more detail would be helpful here. This could be a major use case.
      * Following on from the above: please add a flowchart and/or table identifying how and why PDBs dropped out of your analysis (ligands, other issues, etc.).
      * Is the data available for the 22,000 proteins that were refined? It should be deposited in a repository (Dryad, Zenodo, or NIH Figshare).
      * In the results, you state that Phenix-Amber structures are more likely to exhibit electrostatic interactions. Other than the increase in hydrogen bonds, can you quantify this? It seems like it might create many salt bridges or H-bonds along the surface for residues (or to waters) with very weak density support.
      * In the conclusion, you state that Amber refinement may take more cycles to converge completely. Can you comment on how many more cycles, on average, Amber refinement tends to take?
      * We don't understand how water, bulk solvent, and the boundaries between them are treated with Amber refinement. This seems difficult for pure minimization and extremely difficult/impossible when simulated annealing or dynamics are used.
      * How much minimization vs. simulated annealing was used (line 116)?
      * Is weight optimization just for CDL-EH, or for Amber too (line 146)? Do we understand line 182 correctly to mean that a lower weight for Amber (outside the range tested by default) might have produced better results?
      * Related: it would be good to have the inputs specifying which non-default parameters were used with both the CDL and Amber refinements, either in the supplement or in the methods.
      * We are really confused as to how this pipeline deals with alternative conformers. It seems like it was possible, but then not actually implemented, in favor of just keeping the A conformer in the tests here? Perhaps a demonstration on a structure with many alternative conformations already built, to showcase the LES method, would be illuminating.
      * Line 100: the wording is awkward ("to use of the Amber ..."); "to use the Amber ..." would be simpler, right?

      We review non-anonymously: James Fraser (UCSF), Stephanie Wankowicz (UCSF), and Levi Pierce (RelayTx).

    1. On 2019-09-26 06:39:50, user Jubin Rodriguez wrote:

      Just a very minor thing: I count 96 ankyrin-repeat proteins when I run an InterProScan on the NCBI reference genome (NZ_CP025544.1) of Chromulinavorax destructans. Perhaps it's worthwhile mentioning in your 'Materials & methods' section how you identified the 98 ankyrin-repeat proteins.

    1. On 2019-09-26 04:40:52, user Ranjan Kumar Sahu wrote:

      It is indeed an interesting piece of work. The links for the supplementary data are disabled in the downloaded PDF. It will be very helpful for the readers to understand the findings in a better way if you can provide/enable the mentioned links.

      Thanks

    1. On 2019-09-26 02:58:11, user Milosh Aritonski wrote:

      This is amazing. I have a 62 year old father with a horrible degenerative eye disease; he completely lost vision in one eye and around 70% of the vision in the other, accompanied by a macular hole.
      He's had 7 eye surgeries, and now there is a way that might restore his vision in a few months. Simply amazing.

      Are you taking this to clinical trials?

    1. On 2019-09-25 08:37:45, user Wouter De Coster wrote:

      Dear authors,

      Thank you for the very interesting work.
      While not the key message of your paper, I would just like to let you know that the error estimate for ONT sequencing (∼40%) is terribly outdated; your reference is a 10-year-old paper from before ONT sequencing was even available. Current accuracy is about 90-95%. A reference for that number could be https://genomebiology.biome..., which is also already outdated since it is older than a year, but good enough for this purpose.

      Regards,
      Wouter

    1. On 2019-09-24 23:05:51, user Michael Alexanian wrote:

      ​Great work that advances our understanding of pluripotency exit and Mesendoderm/Neuroectoderm early commitment in ​stem cells. The synergistic effect of Eomes and Brachyury in repressing pluripotent and neuroectoderm genes is fascinating, and provides insights on how developmental genes controlling early cell-fate decision exert their dual repressor/activator function. Very surprisingly, the authors don't discuss Alexanian et al, 2017 (https://www.nature.com/arti... that describes a pluripotency-specific enhancer distal to Eomes (Meteor) that controls developmental competence of ESCs. When Meteor is deleted, ESCs completely fail to undergo ME and are redirected to Neuroectoderm. Exactly like what happens with the double deletion of Eomes and Brachyury described in this study.

    1. On 2019-09-24 13:40:58, user ani1977 wrote:

      I get a shiver in my spine when I read genomic coordinates, so thanks for shedding light on this serious matter! Just one concern about the TP53 example: it would be better to map the coordinates to chromosome 17 (human), band 17p13.1, start 7,661,779 bp, end 7,687,550 bp, since Wikipedia mentions that https://en.wikipedia.org/wi...

    1. On 2019-09-24 02:02:00, user Fraser Lab wrote:

      The major goal of this paper is to put electron density maps on an absolute scale. Ideally, this would rid the world of “sigma” scaling and allow for electron density contours to take on a meaning that could map between different datasets or even over the course of refinement. This is also something that has been attempted previously, most notably (and with obvious conflict of interest on our end) by Lang...Alber, PNAS, 2014. Other important papers that have similar elements include the computational analysis by Shapovalov and Dunbrack, Proteins, 2009 (which examines the relationship between density, atom-type, and B-factor; see Fig 4) and experimental work by Brian Matthews (Quillin PNAS 2004 and Liu PNAS 2006, reviewed in https://www.ncbi.nlm.nih.go.... What is exciting about this work is that it is a fresh start to the problem, and we are optimistic that structural biologists and other users are eager for an “absolute scale”. However, our major reservation about this paper is that it fails to build on or incorporate some of the lessons of these papers:

      For example - we think they are downloading 2mFo-DFc maps, but fail to account for FOM weighting to get an absolute scale - see Matthews' work for a guide on how to do this. The F000 corrections they outline are missing the bulk solvent contribution - this is tricky and dealt with in the Lang/Alber paper. Their B-factor normalization scheme is difficult to follow and seems ad hoc, whereas the Dunbrack paper at least outlines a relationship to the physical meaning of B-factor to accomplish a similar normalization. Finally, when recalculating Fo-Fc maps (or mFo-DFc maps after accounting for FOM weighting), there is no need to normalize as it is already on an absolute scale when “volume” scaling is applied in phenix or (I recall) by default in REFMAC.

      Moreover, despite developing a method to convert electron density values into units of electrons, the examples are all based on comparisons within a map, where the rank order of the strength of voxels does not change. While we applaud their idealism in moving the community, an absolute scale is just part of the move beyond sigma scaling; we also need to think about a “confidence” metric (the RAPID part of the Lang paper, or the EDIA metric in Meyder et al 2017, which they did not really respond to in the previous review, or Beckers et al IUCrJ 2019 for an interesting alternative approach). We haven't reviewed the code, but it is really great that they have put their code up on GitHub, and it appears well documented.

      Minor point: The authors switch between using “chain deviation fraction”, “chain fraction”, “chain density ratio”, “median chain deviation fraction”, “median chain density ratio”, “chain median”, “median of chain density ratio”, etc.

      We review non-anonymously: James Fraser and Roberto Efrain Diaz (UCSF).

    1. On 2019-09-23 20:57:43, user Oliver Pescott wrote:

      Given that this paper uses R. ponticum as a case study, it's surprising that there is no mention of the partial introgression from North American Rhododendron species found by Milne & Abbott (2000) in non-native UK populations of R.p. baeticum, as this complicates some of the proposals featured in the discussion here further. Also, assuming that if it has done it once then it could do it again, additional hybridisation after a new introduction for conservation could affect the range in NW Europe, as Milne & Abbott speculate that it already has.

    1. On 2019-09-23 18:15:50, user quagmire wrote:

      It is a well written piece of work. The way the experiments are planned and executed was dope. I never thought HDAC6 could make this many changes when it gets secreted by neurons. I can see that a lot of investment has gone into this work; totally worth it. The way you connected the neurite extension results, GSK3 beta, and ubiquitination was zonked. Some supplementary data on gene knockouts or western blots would be like a cherry on top.

    1. On 2019-09-23 15:53:00, user Victor Toledo wrote:

      I did not understand one thing about this paper: the sample size is N = 120 according to the authors, yet no single region contains this number of individuals in the HDBR consortium data. Given this, how did the authors unite data from different regions in the same analysis? It is not mentioned anywhere in the paper, and I think this is very important.

    1. On 2019-09-23 14:59:29, user Gabe Al-Ghalith wrote:

      Phenomenal work. I've grabbed the 4,644 bundle, but where are the rest of the 280,000 genomes? All locations pointed to by the article only have the species reps you selected. The other 280,000 genomes are needed for anything other than broad ecological surveys (think variant studies, subspecies clustering, intraspecies targeted strain databases, gene co-abundance tagging, among other potential uses).

      Plus, having the rest of the 280,000 genomes is necessary for other scientists to repeat your analysis.

      Thanks again for doing this tremendous task -- I understand this is a preprint, and I'm looking forward to being able to access all of the genomes in the database when they are made available.

    1. On 2019-09-23 07:51:26, user msp wrote:

      Looks like a similar approach conceptually to our Capture Hi-C differential caller chicdiff (Cairns et al., Bioinformatics 2019) - exciting!

    1. On 2019-09-20 22:50:25, user Steve Rozen wrote:

      This is a very important paper. It shows for the first time that AA I causes liver cancer in mice. This is important because many East Asian liver cancers have been exposed to AA (DOI: 10.1126/scitranslmed.aan6446). In light of Lu and colleagues' mouse study, then, there is very strong evidence that AA contributed to the development of many of these human liver cancers.

    1. On 2019-09-20 22:42:50, user Mikhail V Matz wrote:

      Nice one! One concern about Fig. 6: the comparison of distance matrices must be based on a Mantel test, not a regular correlation, since the data points are not independent. You should not draw the trendline, and especially not the shaded credible interval.
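For readers unfamiliar with it, the Mantel test correlates the off-diagonal entries of two distance matrices and builds its null distribution by permuting rows and columns jointly. A self-contained numpy sketch (toy matrices, not the paper's data):

```python
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """Permutation-based Mantel test for two square distance matrices."""
    rng = np.random.default_rng(seed)
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)           # off-diagonal upper triangle
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)          # permute rows AND columns together
        r_perm = np.corrcoef(d1[perm][:, perm][iu], d2[iu])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# toy example: two correlated distance matrices
rng = np.random.default_rng(1)
x = rng.normal(size=(15, 2))
d1 = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
d2 = d1 + rng.normal(scale=0.1, size=d1.shape)
d2 = (d2 + d2.T) / 2                       # keep it a valid symmetric matrix
np.fill_diagonal(d2, 0)
r, p = mantel(d1, d2)
print(f"r = {r:.2f}, p = {p:.3f}")
```

Packaged implementations (e.g. `mantel` in R's vegan, or skbio's `mantel` in Python) do the same thing with more options.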

    1. On 2019-09-20 20:49:59, user Charles Warden wrote:

      I would typically use 1 x 50 bp reads for gene expression analysis.

      For fragment counts, if you only align the forward read (1 x 40 bp), are the results pretty much the same as the 2 x 40 bp results?

      If so, I respectfully think you need to change your abstract.

    1. On 2019-09-20 10:54:23, user Abhay Sharma wrote:

      Thank you for pointing out what appear to be missing citations. The citation of MouseMine and HumanMine in the preprint is preceded by the sentence "The gene list sources including publications and databases, along with remarks, if any, are mentioned in Supplementary Tables 1, 2, and 4". Now, in Table S1, the web addresses of MouseMine and HumanMine are given. Since my manuscript uses data from dozens of sources, making it difficult to add all of them to the main list of references, I mentioned them in the supplementary tables. The added advantage was that I could then also accommodate remarks there on the type of data used in the analysis. In any case, I will address the issue of citing them in the main manuscript itself in a revised version. Thanks again.

    2. On 2019-09-19 12:51:27, user Yo Yehudi wrote:

      Hey all - really nice to see the use of HumanMine and MouseMine in your paper! I'm one of the development team for InterMine, who runs HumanMine. I was wondering if you'd be willing to help us out by citing our papers where you're quoting HumanMine? We have citation guidelines here: https://intermineorg.wordpr..., and MouseMine, which is run by MGI, asks for this paper to be cited if you use MouseMine https://www.ncbi.nlm.nih.go...

      Thanks so much!! :)

    1. On 2019-09-20 07:35:38, user Wiep Klaas Smits wrote:

      Congratulations to the authors on this study: very interesting. I am wondering about the strains used: the authors state that investigations were done in 630Derm, but in the strain table they refer to the 630E paper (though the strain is listed as 630Derm). The CRG (pyrE)-derived strains are 630Derm derivatives. Considering the differences between the strains (see https://www.ncbi.nlm.nih.go... these should not be confused.

    1. On 2019-09-19 22:14:35, user Salil Bhate wrote:

      Dear authors,

      Your software package looks great, and we look forward to checking it out here in the lab when it’s published. Thank you also for citing our imaging work, ‘Coordinated cellular neighborhoods orchestrate antitumoral immunity at the colorectal cancer invasive front’.

      It would be nice if in this paper you could provide a comparison of how your conceptual approach to high-parameter spatial analysis relates to that in previous works, such as our recent work on cellular neighborhoods in CRC (https://www.biorxiv.org/con..., Shapiro et al. on neighbor analysis and cellular interactions (https://www.nature.com/arti... and Keren et al. on multicellular tumor and immune spatial structures (https://www.cell.com/cell/p.... This would be really helpful for users that are new to the field of high-parameter imaging analysis.

      Best,
      Salil Bhate and Graham Barlow

      Nolan lab, Stanford

    1. On 2019-09-19 16:39:13, user Satoshi Kondō wrote:

      What about the Jōmon/Ainu people, and which group are they most closely related to? How much admixture does a person today from China or Europe have from the three ancestral human groups? Regarding haplogroups, is it possible that the haplogroups of an individual “back-mutated” or coincidentally appear as another haplogroup? Thank you in advance!

    1. On 2019-09-19 16:14:12, user H. Etchevers wrote:

      I had previously pointed out that Kaspersky antivirus had flagged the website as engaging in phishing. After I asked them to look into it, they have now removed the flag as a false positive. ("It has been confirmed as a false positive. The link will be excluded from our anti-phishing databases.")

    2. On 2019-09-19 08:55:20, user H. Etchevers wrote:

      This is exciting work and I congratulate the authors on their technical and also presentational prowess. In addition, I played around with their online tool, which was wonderful. A few days later, there was an update of my Kaspersky Antivirus definitions and reputation files, and suddenly (since week 38, 2019), and ironically (given the name similarity), the website is flagged as a "risk". This is reproduced on their online check, by entering http://kasperlab.org/mouseskin into the tool here: https://virusdesk.kaspersky... . I've submitted it for further examination as a likely false positive, since EVERY other site declares it clean on https://www.virustotal.com and others, but it is blacklisted with Kaspersky, FYI.

    1. On 2019-09-19 11:55:27, user yochannah wrote:

      Hey y'all - nice preprint!

      This popped up in my alerts as I'm one of the developers of InterMine, the software that PhytoMine is based on. I had a tiny improvement to suggest for the links you've included, e.g. https://phytozome.jgi.doe.g... - this format of link isn't necessarily permanent, so it's possible that in the future that link might not point where you'd expect it to.

      The good news is that you _can_ get permanent links - just go to each page and look for the (admittedly rather tiny) "share" button on the top right. That will give a link that looks like this: https://phytozome.jgi.doe.g... rather than the original link, and it should remain up in the longer term. I hope that helps! :)

      If you need any more help or advice, please contact support@intermine.org or the phytomine support line :)

    1. On 2019-09-19 01:12:07, user Anita Bandrowski wrote:

      Interesting study, we do not check for things like preregistration using SciScore, but we do test for other markers of reproducibility like cell lines (we published a study where that aspect of reproducibility was tested using a part of the tool PMID:30693867). I also just ran your paper through our tool and am attaching the report here (apparently the file is too large so I just copied and pasted the two tables as text). Thought you might appreciate it.

      SciScore: 8 (this is out of 10)

      Below you will find two tables showing the results of SciScore. Your score is calculated based on adherence to guidelines for scientific rigor (Table 1) and identification of key biological resources (Table 2). Points are given when SciScore detects appropriate information in the text. Details on each criterion and recommendations on how to improve the score are appended to the bottom of this report.

      Table 1: Rigor Adherence Table

      • Institutional Review Board Statement. IRB: Given that this study did not use human subjects, it was not subject to institutional review board approval.
      • Randomization. DT searched PubMed using the list of ISSN to encompass articles from January 01, 2014 through December 31, 2018. 300 publications were then randomly selected to be included in the analysis.
      • Blinding. Starting on July 11, TA, IF and NV conducted extraction of the remaining 289 publications using a duplicate and blinded method.
      • Power Analysis: not detected.
      • Sex as a biological variable: not detected.

      Table 2: Key Resources Table (your sentences; reagent or resource, source, identifier)

      Software and Algorithms
      • PubMed. Suggestion: (PubMed, RRID:SCR_004846) (link)
      • Google. Suggestion: (Google, RRID:SCR_017097) (link)
      • Microsoft Excel. Suggestion: (Microsoft Excel, RRID:SCR_016137) (link)

    1. On 2019-09-18 06:54:36, user Jeremiah Stanley wrote:

      Hello authors. It was quite a brave attempt to explore the role of 5HT in macrophages. I have a logical question. There is an interplay of 5HT2B and 5HT7 in modulating the macrophage, so when an antagonist is used against a particular receptor, the 5HT in the medium will act more on the other receptor. For example, here the antagonist to 5HT2B was used. Without the antagonist, 5HT would act on both 2B and 7; after antagonist addition, 5HT will act only on 7. Can this be a reason for the antagonist failing to nullify the action of the agonist? An interplay of receptors?

    1. On 2019-09-18 05:48:21, user Johannes Soeding wrote:

      A newer version that includes DNA double-strand break repair as an example for the localization-induction model is available from soeding@mpibpc.mpg.de.

    1. On 2019-09-18 01:41:42, user Valar Dohaeris wrote:

      What is the excitation power measured for the N&B analysis? How does the result compare to PCH for quantifying the dimerization state?

    1. On 2019-09-17 15:55:00, user Sebastien LEON wrote:

      Very interesting. But just as in the study of Adachi et al., I am wondering how cells discriminate between low and no glucose (since Snf1 is activated in both cases, and anyway the 0.025% glucose should be consumed within minutes...). Very surprising to see marked differences between these conditions!

    1. On 2019-09-17 15:32:04, user Arturo Tozzi cns wrote:

      A MATHEMATICAL COUNTERPART TO BACTERIAL CLONES SHOWING SURPRISING INDIVIDUALITY

      Genetically identical bacteria should all be the same, but in fact, the cells are stubbornly varied individuals. That heterogeneity may be an important adaptation. When groups of identical cells diversify, they can divide up some of their tasks and start to specialize in certain processes. Possibly the bacteria we thought were completely identical were in fact not behaving identically.<br /> https://www.quantamagazine....

      There is a pure mathematical counterpart to this observation. The concepts of “sameness”, “equality”, and “belonging together” stand for intertwined levels with mutual interactions. By showing that “matching” description is a very general and malleable concept, a novel testable approach to “identity” that yields helpful insights into physical and biological matters has been provided (https://www.sciencedirect.c...). Indeed, a novel mathematical approach derived from the Borsuk-Ulam theorem, termed bio-BUT, might explain the astonishing biological “multiplicity from identity” of evolving living beings as well as their biochemical arrangements.

      Arturo Tozzi<br /> Center for Nonlinear Science, Department of Physics, University of North Texas, Denton, Texas, USA<br /> 1155 Union Circle, #311427Denton, TX 76203-5017 USA<br /> tozziarturo@libero.it<br /> Arturo.Tozzi@unt.edu

    1. On 2019-09-17 01:35:34, user Nikolay Samusik wrote:

      As evident from Fig. 2f, the authors are getting much better results on the rare cell populations than the original Phenograph. Do you know how much of that is because of using Leiden vs Louvain and how much is due to the 'graph pruning' procedure?

    1. On 2019-09-16 17:11:24, user Lindsey Young wrote:

      The authors of this preprint are ecstatic about the preprint process as a way to make scientific findings accessible to as many people as possible, as early as possible, and to foster constructive and transparent dialog.

      While I appreciate these comments, they relate to a supplementary experiment, which as the commenter says, "is not critical to interpreting the manuscript." The HDX in the supplement to this cryo-EM paper was included to show that the new constructs used in this paper behaved similarly to ones that we previously characterized by HDX in Young et al. PNAS 2016.

      A second important point I would like to raise is one of transparency, as the commenter does not disclose that they are a previous direct competitor (Ohashi et al., Autophagy, 2016) with our own previous HDX work on this same topic (Young et al., PNAS, 2016). Masson's prior role is relevant information that we believe should have been disclosed as part of their posted comment.

    2. On 2019-09-06 13:38:12, user Glenn Masson wrote:

      Very interesting paper, with beautiful EM data. I know this is a preprint, and not the finished article - however, the reporting of HDX-MS data leaves a lot to be desired.

      As it currently stands, the amount of information provided on how the experiments were conducted is too sparse to allow for the repetition of experiments, and the data is reported in such a manner as to prevent a complete interpretation of that data. For example, the statement "The deuteron content was adjusted for deuteron gain/loss during pepsin digestion and HPLC." is insufficiently detailed: was this achieved using a fully deuterated control? How was that produced? How are deuterons potentially gained during the HPLC/digestion process, which is conducted (presumably) in H2O? Not a single peptide's exchange data is presented, and there are no reports of the overall coverage of the protein subunits, nor of the redundancy of the data collected. From my understanding of Figure S1E/F(?), there could be as few as 16 peptides covering the entirety of Beclin 1, and 10 covering ATG14.

      Additionally, I have serious concerns about how the data was collected. There is no explanation offered of how a single 10-second timepoint can sample the exchange kinetics of an entire complex. The experiment was carried out in duplicate (only), with no mention of the error or variability associated with these measurements.

      The HDX-MS data is not critical to interpreting the manuscript, and, as I stated, the manuscript as a whole is intriguing and worthwhile - but serious attention should be given to the HDX-MS data. Please see the HDX-MS community agreed guidelines paper for the minimum standards for reporting and conducting HDX-MS experiments: https://www.nature.com/arti...

    1. On 2019-09-16 12:20:17, user THERY Manuel wrote:

      These are two great comments. Thank you!

      We have no idea if MT number changes. It is difficult to count them in the nuclear invaginations. <br /> It is true that some interaction could remain between MTs and the nucleus even after dynein inhibition. But the complete removal of MTs with nocodazole gave similar results in terms of changes of nucleus shape. Considering that MT disassembly may have more side effects, due to the increase in concentration of free tubulin, we decided to stick to dynein inhibition.

    2. On 2019-09-11 09:31:11, user Susana Godinho wrote:

      This is really interesting! I was wondering if the number of microtubules is also increased in differentiated cells, and if that is also associated with lobule formation? <br /> Also, what happens if the link between microtubules and the NE is lost, for example by over-expressing the KASH domain? I do not think dynein inhibition fully disrupts that interaction, since kinesin-1 is also involved in this process.

      fascinating work!<br /> thanks<br /> Susana Godinho

    1. On 2019-09-15 23:33:56, user Kent Willis wrote:

      Great preprint! Excellent topic and interesting science.<br /> On a related note, I am impressed with the formatting - what did you use?

    1. On 2019-09-15 23:18:11, user Alexis Rohou wrote:

      In this manuscript, Pintilie et al introduce a new metric for judging the quality of a 3D map and atomic model obtained from cryoEM (or, presumably, X-ray crystallography). The Q score (between -1.0 and 1.0) is a normalized cross-correlation measure, at each modeled atom, of the similarity of the map features in its immediate neighborhood to what would be expected in a very-high-quality map. The more the map being tested looks like a very-high-quality map, the closer the Q factor tends to 1.0. Noisier or lower-resolution parts of the map will yield Q factors trending towards 0.0.
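As a rough illustration of the idea (a hand-rolled sketch, not the authors' exact implementation; the 0.6 Å reference width and the profiles below are assumptions for the example), a normalized cross-correlation against a reference Gaussian behaves as described:

```python
import numpy as np

def q_like_score(profile, r, sigma=0.6):
    # Reference Gaussian of the calibrated width, rescaled to the
    # profile's own range so the comparison is about shape, not scale.
    ref = profile.min() + (profile.max() - profile.min()) * np.exp(-0.5 * (r / sigma) ** 2)
    a = profile - profile.mean()
    b = ref - ref.mean()
    # Normalized cross-correlation: ~1.0 for a clean Gaussian-like peak,
    # trending towards 0.0 as noise dominates.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

r = np.linspace(0.0, 2.0, 41)            # radial distance from the atom (Angstrom)
clean = np.exp(-0.5 * (r / 0.6) ** 2)    # well-resolved atom: Gaussian falloff
rng = np.random.default_rng(0)
noisy = clean + rng.normal(0.0, 0.5, r.size)  # same atom in a noisy map
```

Scoring `clean` returns essentially 1.0, while `noisy` scores lower, mirroring the tendency the review describes.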

      The authors highlight a number of desirable features of this proposed metric: it is local down to the level of single atoms; it does not require masking or other processing of the map; it can be used to track map quality as a function of resolution, side chain chemistry, or any other local characteristic; etc.

      Overall, I think this seems like a valuable metric which may well become a standard reportable measure of map/model quality in the future. The manuscript is clear, well-written and covers the bases adequately.

      Before accepting for publication, I recommend the authors address my one concern about all this: the behavior of the Q factor for maps with resolutions better that ~1.5 Å is ill-defined. Of course, this is a “first-world problem”, and presumably very few maps will be at higher resolutions than that. But nevertheless, there will be maps at truly atomic resolution, and in those cases one would hope that a robust metric would yield even better scores than if the map were “only” at, say, 1.6 Å. And here, I’m not sure what to expect. The atomic profile at resolutions of 1 Å or better might be expected to be at least somewhat different from the reference Gaussian the authors calibrated against a 1.54-Å-resolution map, and therefore I worry that the current implementation would give (slightly?) worse Q scores for a 0.9-Å-resolution map than for a 1.6-Å map. I’d suggest the authors test this by running their program on, say, one of the microED maps & models in the PDB, for example EMD-8857 or similar (for testing on amino acid side chains), so that hopefully they can prove me wrong and reassure us that the method is robust also at truly atomic resolutions.

      Beyond that I have the following minor suggestions:<br /> - Mention somewhere that the quality of result will be highly dependent on the (local) filtering/sharpening of the map. <br /> - Perhaps include more details of how the map is sampled to compute the atomic profile. Is it sampled at exact pixel grid positions? If so, with pixel size of 0.615 Å as in EMD-20026, it seems to me the first few points of the atomic profile (say < 1 Å or < 0.5 Å) will be very poorly sampled. So perhaps the authors interpolate between grid points? If so, how? <br /> - I loved the section “Radial plots for solvent atoms”, but it seems to me to be a bit of an aside, and not directly related to the main topic of the paper, of Q factors. It deals with distances between pairs of atoms in the map. A very interesting topic, and the Q factors probably help in yielding more reliable measurements, but still it seems to not fit in very well with the rest of the paper. I would suggest moving this section to an appendix. <br /> - Line 238: fix “total of N maps”
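On the sampling question above: one common choice, shown here as a generic sketch (not necessarily what the authors do), is trilinear interpolation between grid points:

```python
import numpy as np

def trilinear(grid, point):
    # Sample a 3D grid at a fractional (x, y, z) position in voxel units
    # by blending the 8 surrounding grid values.
    x, y, z = point
    i, j, k = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - i, y - j, z - k
    c = grid[i:i + 2, j:j + 2, k:k + 2].astype(float)
    c = c[0] * (1 - fx) + c[1] * fx     # interpolate along x
    c = c[0] * (1 - fy) + c[1] * fy     # then along y
    return c[0] * (1 - fz) + c[1] * fz  # then along z

# On a linear ramp, trilinear interpolation is exact.
ii, jj, kk = np.meshgrid(np.arange(4), np.arange(4), np.arange(4), indexing="ij")
ramp = ii + 2 * jj + 3 * kk
```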

      Alexis Rohou<br /> 3-Sep-2019

    2. On 2019-09-15 03:21:19, user Fraser Lab wrote:

      The major goal of this paper is to develop and benchmark a new metric, Q-score, which aims to quantify the resolvability of atoms, residues, and entire macromolecules in density maps emerging from electron microscopy. The major strength of the paper is the model-directed approach that allowed them to get local metrics for map quality and resolvability with residue- or even atom-scale resolution. Of course, as noted in the manuscript, the method is also applicable to X-ray maps, where the need is less urgent - because of the standard practice of refining B-factors, which have a straightforward physical interpretation in crystallography. B-factor refinement has less of an established grounding in EM, as the single particles themselves cannot have a true “B factor”. Several refinement packages refine a “B-factor”, which ideally would model the imprecision in alignment and genuine conformational differences between particles that cannot be classified away. For example, phenix.real_space_refine refines residue-level ADPs by default, and Rosetta and REFMAC can both refine atomic B factors. Regardless of the source of the target map (or X-ray structure factors), a refined B-factor is a function of the conformational variability of the atom, noise, modelling errors, and the refinement restraints. The lack of a thorough accounting of how Q-scores compare and contrast with refined B-factors is the major weakness of this manuscript. Our interpretation of the Q-score metric is that it largely accomplishes the same goal as the B-factor, replacing the process of fitting a Gaussian to the density around an atom with measuring the quality of fit of the Gaussian-like curve of the density to a reference Gaussian. It does so entirely in real space like Rosetta, whereas the Phenix and REFMAC calculations are performed in reciprocal space.
A particular strength of this kind of Gaussian-comparison approach is that the results should be largely unaffected by atom or residue ID for non-heavy atoms.

      It is not clear from reading the paper whether significant additional information is provided by the Q-score that is not captured by refined B factors. The authors could address this concern in a few ways. First, we would like to see correlation of calculated B factors with Q-score, both globally and in the case of refined atomic B factors or ADPs. Second, comparisons against other resolvability estimates would be informative, including local resolution estimation and alternate model-directed metrics such as the MDFF RMSF (Singharoy et al., 2016) and the multi-model convergence RMSD (Herzik et al., 2019). That said, the metric is valuable and probably provides information that is not straightforward to abstract from the refined B-factors of deposited structures. It is also probably faster to recalculate Q-scores than to re-refine B-factors, and the Q-score is likely less subject to the biases and restraints of individual refinement programs.
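The correlation we ask for could be as simple as a per-atom comparison; a toy sketch with synthetic values (every number here is made up for illustration, not taken from any deposited structure):

```python
import numpy as np

rng = np.random.default_rng(7)
n_atoms = 500

# Hypothetical per-atom refined B-factors (A^2) and Q-scores for the
# same atoms; higher B (more disorder) should track lower Q.
b_factors = rng.uniform(20.0, 120.0, n_atoms)
q_scores = 1.0 - 0.008 * b_factors + rng.normal(0.0, 0.05, n_atoms)

# Pearson correlation, computed globally; the same could be done
# per chain or per residue class.
corr = np.corrcoef(b_factors, q_scores)[0, 1]
```

A strongly negative correlation would suggest the two metrics carry largely the same information; the interesting cases are the atoms where they disagree.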

      We also identified a potential implementational weakness of the approach in examining the paper. The method by which the reference Gaussian is calculated in Equations 2-5 relies on the average and standard deviation of all voxel values in the map. Voxel values represent Coulombic potential, but the scalar affecting this relationship is highly variable in cryoEM, with very little consistency. Our concern is that both the mean and standard deviation of the map, which are used to generate the upper and lower bounds of the Gaussian, also may be strongly influenced by factors including masking and, in unmasked maps, the solvent content in the box. Since box sizes are chosen semi-arbitrarily in electron microscopy reconstruction, we are concerned that this selection of a reference Gaussian might result in less robust results when comparing maps that are not all generated with the same solvent content. We would be interested to see the results of comparisons of masked and unmasked maps, as well as maps of varying box sizes relative to the protein mass in the same resolution range, to confirm that the chosen reference Gaussian does not interfere with the ability to use Q-scores effectively in such cases. Similarly, we would expect that B-factor sharpening would have a dramatic effect on the Q-scores calculated for a given map, and would want to see to what degree the resolution vs Q-score correlation is altered by B-factor sharpening. This may be complicated by the incomplete reporting of whether sharpening was performed prior to map deposition in EMDB, so use of highly controlled data on their part may be preferable.
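The box-size concern can be made concrete with a toy map (random synthetic density, not real data): padding the same density with flat solvent shifts both the mean and the standard deviation that would define the reference Gaussian's bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
protein = np.abs(rng.normal(1.0, 0.3, (32, 32, 32)))  # toy protein-like density

def box_stats(density, pad):
    # Embed the same density in a larger box of flat solvent (zeros),
    # mimicking a more generous box size at reconstruction time.
    n = density.shape[0]
    box = np.zeros((n + 2 * pad,) * 3)
    box[pad:pad + n, pad:pad + n, pad:pad + n] = density
    return box.mean(), box.std()

tight = box_stats(protein, 0)
loose = box_stats(protein, 32)  # same density, 27x the box volume
```

Both statistics drop substantially in the padded box, which is exactly why a reference Gaussian anchored to map-wide mean/std may not transfer between maps with different solvent content.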

      This method could prove to be a valuable supplement to the local metrics available for assessing map and model quality. In our opinion, the ability of Q-scores to report on poor model fit is perhaps underemphasized in this manuscript. Deviations in Figure 5 might indicate poor fit, for example. In particular, we would be curious to see how well it would work in cases of model error, in contrast to its use for studying map quality. It would be valuable to determine whether incorrect or underdetermined models can be identified with this approach to measuring the local density around an atom. This tool seems able to give statistically useful results for single-atom and single-residue samples, and this could prove useful for identifying poorly modeled regions for further refinement. The preliminary data for this use case presented in the 2019 EM Model challenge ( http://model-compare.emdata... ) is promising, and this may be a worthwhile application of the Q-score in this manuscript or a future one. Conversely, in the absence of discussion of assessing model correctness, we would like to know how sensitive the Q-score metric will be to model errors - both minor displacements and larger registration errors are still relatively common in medium-resolution cryoEM, and ideally a tool used for assessment of local resolvability will be robust to such model errors. Due to the method by which voxels closer to other atoms are excluded from Q-score calculations, we expect that registration errors in particular may lead to Q-score errors that are particularly challenging to parse.

      Minor points:<br /> The sentence on lines 62-64 suggests that masking does the evaluation, rather than allowing the FSC to do so. <br /> The selection of 0.6 Å as the standard reference standard deviation, based on the reference 1.54 Å structure, may be problematic as structures get to even higher resolution and lower global B-factors - if the peak becomes sharper than that of the reference Gaussian, scores will decrease, despite the map becoming even more resolvable. Admittedly this is a problem on the distant horizon, as significant resolution improvements beyond 1.54 Å for novel samples may prove very difficult to achieve. However, the <br /> In lines 191 to 198, the authors speculate about the differential rate of dropoff of Q-score with resolution, and both suggestions they raise seem reasonably testable. First, the question of radiation damage might be more readily examined using reconstructions with differential dose, rather than number of particles. Second, the effects of local environment on disorder could be examined by quantifying Q-score as a function of solvent accessibility.<br /> In Table 1, right panel, entry 4, the Q-score reported is much larger (2.8) than any other reported score. Should this entry be 0.8?<br /> For the Q-score comparisons of solvents in X-ray and cryoEM maps, the authors argue that the X-ray map has better resolved waters. However, there are a number of alternative explanations. One is that X-ray maps typically have lower global B factors than EM maps at the same resolution. This effect would result in a global dampening of scores for cryoEM maps that could be counteracted by B-factor sharpening. <br /> The authors describe doing real space refinement on the solvent atoms in the X-ray map. It would be useful to report the effect this refinement had on the R/Rfree.
If this real space refinement resulted in overfitting the atomic positions, the subsequent position and radial distribution comparisons may have been overfit as well.<br /> In general, comparisons of solvent positions between X-ray and cryoEM closely resemble the broader discussion of solvent radial distribution in bulk vs. ordered solvent, and some comparisons to that literature might help to place this comparison in context - does the solvent in cryoEM behave more like bulk solvent compared to the ordered solvent shells seen in crystallography? Does it behave more like solvent at room temperature or under cryogenic conditions?<br /> The authors suggest the use of Q-scores to validate solvent molecule placement by refinement strategies. However, as existing tools such as phenix largely place solvent atoms at Gaussian peaks in density, the Q-score may not provide orthogonal validation information. It may prove to be a useful estimate of confidence in this context, but this may require additional testing. Again, is there any information added over the individual B-factors refined for the solvent “O” atoms, or are they well correlated?<br /> An alternative but similar metric that is also highly correlated with B-factors and takes a similar approach to Q-scores with regard to atom “ownership” of voxels is EDIA (Meyder et al., 2017) - this work should be contrasted and cited. Ideally, correlations between individual B-factors, EDIA and Q-scores should be calculated for some standard model/map pairs to ground the discussion of specific situations where the correlations break down. Another related preprint has also been posted recently, but we have not reviewed it in detail: https://www.biorxiv.org/con... <br /> The authors should be commended for their tutorial, open code, and vetting with multiple OSes. Bravo!

      We review non-anonymously, James Fraser (UCSF) and Ben Barad (formerly UCSF, now Genentech, soon to be Scripps)

      References:<br /> Herzik, M.A., Jr, Fraser, J.S., and Lander, G.C. (2019). A Multi-model Approach to Assessing Local and Global Cryo-EM Map Quality. Structure 27, 344–358.e3, PMCID: PMC6365196.<br /> Meyder, A., Nittinger, E., Lange, G., Klein, R., and Rarey, M. (2017). Estimating Electron Density Support for Individual Atoms and Molecular Fragments in X-ray Structures. J. Chem. Inf. Model. 57, 2437–2447.<br /> Singharoy, A., Teo, I., McGreevy, R., Stone, J.E., Zhao, J., and Schulten, K. (2016). Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps. Elife 5, PMCID: PMC4990421.

    1. On 2019-09-14 18:29:07, user Justin Perry wrote:

      While this is a valiant amount of work on a very important topic, the likelihood that the TCR+ macrophages you see ex vivo are because of clearance of T cells by macrophages (RNA, including polyA-RNA, is incredibly stable in the phagolysosome) is high. These would likely not be removed by any of the standard single-cell RNAseq "doublet" removal techniques. The issue of RNA "contamination" has been shown independently by Dennis Discher (https://www.ncbi.nlm.nih.go...) and Steffen Jung (https://www.ncbi.nlm.nih.go...), and anecdotally seen by a host of groups attempting RNAseq (especially single-cell RNAseq) of macrophages. I would urge caution in interpreting TCR+ macrophages as anything other than a macrophage doing its job of efferocytosis, and be wary of interpreting much from the gene signatures of macrophages because of this potential T cell contamination. Engulfment of T cells by macrophages shows a frustratingly high level of T cell-associated genes, especially prevalent genes such as those associated with signaling. None of the data presented in this preprint negate the likelihood of efferocytosis. In fact, CD68 is most commonly associated with LAMP1 and the endo-lysosomal compartments, and is often used as a marker of phagocytic macrophages in situ. Furthermore, FACS analysis of ex vivo TAMs could just as easily be of a T cell binding to TAMs, a TAM with a partially eaten T cell, or a manifestation of the tissue digestion process, where digestion at 37C for as little as 15-30 minutes can result in transfer of intact proteins (such as intact TCR), trogocytosis, or phagocytosis (like we frustratingly observed and reported previously: https://www.cell.com/immuni...).

    1. On 2019-09-13 18:09:41, user Timothée wrote:

      As much as I see the need to quantify biases and trends in the hiring process, I have a number of concerns with data collection and data release associated to this paper.

      As far as I can tell, the inference of gender has been done based on names, pictures and pronouns, which is biased, and is actively erasing colleagues that express gender non-normatively, or are read as a different gender. This is not a mere methodological point; it is a practice that is actively harmful to the overall effort on Equity, Diversity and Inclusion, by specifically applying bias to the more marginalized. I think this should be commented on in a lot more detail in the manuscript, but I do not think that the methodology is at all reliable.

      Second, this dataset contains nominative information on EU citizens (which is likely in violation of the GDPR), and seems to contain information that was divulged by third parties. As much as I understand that people may have given their consent to communicate data for the purpose of the analysis, I wonder whether explicit consent for un-masked data publication was given, and what the data retention policy is.

      Finally, I was surprised to see no mention of the IRB approval process. This is likely an oversight on the side of the author, but I wish that the preprint could be amended with the IRB approval, or the clear statement that the approval was not needed.

      We cannot afford a cavalier attitude towards data publication when it involves people, and I do not think that this preprint does a particularly good job at this (which is not a comment on the quality of the underlying scholarship).

    1. On 2019-09-13 07:45:32, user Disha Sharma wrote:

      I have a small doubt regarding variant calling, going from the BAM files to SNPs. What tool or methodology was used to call variants?

    1. On 2019-09-12 15:56:52, user 崔祖曦 wrote:

      Hi there,

      This is a very nice paper. <br /> I have a quick question regarding the established model. You used 65 SNPs from the cited paper but I couldn't find the same set from the original study. Can you share more details about it?

      Thanks,<br /> Zuxi (Terry) Cui

    1. On 2019-09-12 13:52:30, user Ryan Bell wrote:

      Great preprint! Dr. Huveneers please do check your email for a message from Excision Editing. It contains some important information on some changes we highly recommend to the Abstract. If you can't find it please email editor@excisionediting.com. Again, great work!

    1. On 2019-09-12 13:40:42, user Mollie Brooks wrote:

      I have a few comments on the following section. "Since reproduction data is often overdispersed, we fitted three different model implementations. First, we fitted the models using a Poisson distribution in the lme4 package. Secondly, we also included a subject-level random effect in the model, to control for possible overdispersion. Thirdly, we fitted a model with a Conway-Maxwell-Poisson (CMP) distribution using the glmmTMB package, which numerically estimates the mean and variance separately, and is well suited to deal with overdispersed data [5]. The models where then compared using AIC and the model with lowest AIC selected."

      (1) You may need to give a citation for the statement that reproduction is often overdispersed, as other sources say the contrary for other groups of organisms.

      (2) The Poisson is a simplification of the CMP, so the CMP could go to that fit if it's the best. It's not bad to do both, but not really necessary.

      (3) It isn't really accurate to say that glmmTMB estimates the mean and variance separately. They are allowed to vary independently in the model and Huang (2010) showed that they are orthogonal.

      (4) If you only needed to worry about overdispersion, it might be simpler and more standard to use a negative binomial distribution. The CMP can handle either over- or underdispersed data and that makes it convenient for reproductive data (which is often underdispersed). I couldn't find the specific results in the paper to see estimated dispersion parameters.

      (5) About the subject-specific random effect: how does subject differ from individual? Maybe you mean an "observation-level random effect" as described here https://bbolker.github.io/m.... This type of model probably isn't necessary when you're already trying one that can handle overdispersion. Again, it's not bad to do both, but not really necessary.
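As a quick diagnostic before choosing among these distributions (a generic sketch in Python rather than the R packages named above; the parameters are illustrative), the dispersion index (variance/mean) separates the cases:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000

# Poisson counts: variance equals the mean, so the dispersion index is ~1.
poisson = rng.poisson(4.0, n)

# Negative binomial counts with the same mean but extra variance
# (variance = mean + mean**2 / size), i.e. overdispersed.
size, mean = 2.0, 4.0
negbin = rng.negative_binomial(size, size / (size + mean), n)

def dispersion_index(counts):
    return counts.var() / counts.mean()
```

An index near 1 suggests plain Poisson is adequate; well above 1 points to negative binomial or CMP, and below 1 (underdispersion) to CMP.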

      Overall, from the little bits I read, it seems like interesting work.

      cheers,<br /> Mollie

    1. On 2019-09-12 11:46:37, user Grimm wrote:

      Nice paper, fresh approach, and a very nice sample. I hope the authors will be able to take this further, especially regarding testing the correlation to the phylogeny (where a lot of progress has been made thanks to NGS nuclear phylogenomic data sets) and to ecological groups within the sections.

      The only thing that is missing is how the results relate to the studies of Solomon on the ontogeny of oak pollen and the ultrastructural differences that can be seen under SEM (and what TEM tells us about the structure of the pollen wall in oaks).

      Solomon AM. 1983a,b. Pollen morphology and plant taxonomy of white oaks in eastern North America. American Journal of Botany 70:481–492; 495–507.

      Solomon studied "only" North American pollen of white (sect. Quercus; not sure he included the evergreen Virentes) and red oaks, but the processes are likely not that different in their European counterparts. In principle, the main pollen surface types are distinguished by how far (secondary) sporopollenin masks the primary elements (very little in sect. Ilex, which we consider to represent the primitive state; somewhat in sect. Cerris; heavily up-built in the white and red oaks of subg. Quercus), and this presents a case where ontogeny (papers by Solomon) reflects phylogeny (the fit of section-diagnostic pollen surfaces with nuclear phylogenies was what eventually culminated in the updated classification).

    1. On 2019-09-12 07:30:00, user Prof. Calum Semple wrote:

      Effective shunt fraction - eGFR for the lung will be included as a secondary outcome measure in the @BESStudy where we will trial endotracheal surfactant in infants with life threatening #Bronchiolitis

    1. On 2019-09-11 21:35:30, user Charles Warden wrote:

      I thought Figure 15 was interesting: even though the AUC was modest, I have generally been concerned about imputation (and I believe this indicates that it can become more of an issue with >10% missing values).

      In other words, while imputation can have an effect on SNP chip studies, I think that matches my expectation that this will be a bigger problem for low-coverage WGS (lcWGS) data. For example, I would consider those genotypes to be unacceptable for myself (at least for specific variants, or a score where a subset of variants are probably most important for me): http://cdwscience.blogspot....

    1. On 2019-09-11 15:31:45, user Vesta Bahrami wrote:

      Hi,

      A question! What about those Zoroastrians who converted to Islam recently? I also have a comment: I am from Iran and I have been told my family (on my mother's side) were practicing this religion until 1900. My mom is from a place close to Hamedan in central Iran. My grandpa told us that they were from a big family (Bahrami) and they practiced the Persian religion, not Islam, until recently. They do not practice Islam now anyway, but are registered as Muslims. They are from the Persian_A group (I guess), since they live in that area. And another comment: in Iran, in the old days, people mixed mostly with local people who they shared similar genes with. Do you know if it is the same in groups other than the Iranian Zartoshtis? And I know some Bahai people are mixed with Zoroastrians in Iran.

    1. On 2019-09-11 12:34:50, user Niels wrote:

      This is a very impressive work and will be quite useful for creating artificial promoters. Unfortunately, it is not exactly clear to me from reading the manuscript, where your reference point ("minimal promoter start") exactly is. Since just a few bases more or less have such a massive effect, it would be essential to know the actual sequences that you have used. Therefore, I would kindly ask you to add at least one annotated sequence, covering at least background and minimal promoter, to the final manuscript and, perhaps, here. Thanks in advance, Niels

    1. On 2019-09-11 10:39:25, user Fabrice Gorrec wrote:

      IMPORTANT UPDATE: For this method applied to clear droplets, I would now advise preparing a follow-up screen composed of 16-24 saturated precipitants (maybe avoiding precipitants that often produce a lot of salt crystals, the main culprits being phosphate-based reagents). When repeating such a screen in a 96-well plate, several of the selected initial conditions can be tested in one go during the follow-up. Also, when preparing the follow-up droplets I would now advise a ratio of sample to initial condition to follow-up condition of 2:1:1 (i.e. a higher proportion of the follow-up conditions).

    1. On 2019-09-11 04:41:08, user Dr Clovis Palmer wrote:

      These authors have ignored an entire body of our work on glycolysis and immunometabolism in HIV. May I suggest they search PubMed for Palmer, Crowe, HIV; or Palmer, Crowe, immunometabolism; or even immunometabolism/Glut1, HIV.

    1. On 2019-09-11 00:18:12, user Charles Warden wrote:

      It looks like I converted my notes into a blog post since the last version, so I thought it would be good to pass that along:

      http://cdwscience.blogspot....

      My Color lcWGS has even fewer reads than my Nebula lcWGS, but I only had a lcWGS .gVCF from Nebula. So, I would guess the issues would be more exaggerated with the Color lcWGS (and my earlier comment was recommending removing that), but I think this can at least help with that discussion.

      The later blog post related to Color was not as closely related to lcWGS, but it was meant to be easier to read than the GitHub notes. Nevertheless, I have links for possible re-analysis (among the data I have been provided so far) on my PGP page: https://my.pgp-hms.org/prof...

    1. On 2019-09-11 00:08:36, user Charles Warden wrote:

      Very cool - thank you for posting this paper!

      I wonder how Ed Cantin's samples in the Vero cell culture compare (without CRE)? Perhaps this provides a good chance for me to follow up (and this comment can help remind me about the data)

      For example, I hadn't really thought about it like this (in terms of divergence from passage in the monkey cell line), but that data is in the SRA:

      https://www.ncbi.nlm.nih.go...

      There are also other differences, but I will ask Ed if he thinks there is anything from this paper that is useful for that project (or vice versa).

    1. On 2019-09-10 16:30:16, user Alex Crits-Christoph wrote:

      I'd like to thank the authors for sharing this meaningful and careful work. This is an interesting and novel approach to solve a difficult problem in metagenomics - identifying misassembled contigs in metagenomic settings. Currently this problem is mostly only approachable from a perspective of manual curation, so the authors' novel method is sorely needed in the field.

      I have a few questions for the authors after a brief read: (apologies if some of these answers are available on the GitHub associated with the preprint)

      1. What is the respective accuracy / precision / recall on the different types of misassemblies? The misassembly types are "inversion, translocation, relocation, and inter-genome translocation", but each of these has qualitatively different consequences for researchers. Ideally, statistics should be reported for each assembly error type, and the distribution of the types of misassemblies predicted in the real datasets should be shown. Unfortunately, combining all of the above can cause readers who won't check each pileup manually to make erroneous assumptions about the rates and frequencies of different error types.

      2. In the training / test datasets there are a few genomes with > 95% ANI to each other. What percentage of the "inter-genome translocations" and all types of misassemblies are these genome pairs responsible for? How often do we see inter-genome translocations between genomes with ANI <95% and <90% in the simulated data?

      3. The abstract states that close to a 5% contig misassembly rate was observed in real datasets - should this statement be qualified with the 62% precision, 50% recall metrics?

      4. What is the contig length distribution of each type of misassembly in both the simulated datasets and the real datasets? Is it possible to interpret from the MetaQUAST results where these misassemblies occur in the contigs? A misassembled 3 kbp contig has significantly different implications from a misassembled 51 kbp contig (with 50 kbp species A and 1 kbp species B) or a misassembled 50 kbp contig (with 25 kbp species A and 25 kbp species B)

      5. In Figure 5, S5, and S6, can the authors list what they assume the misassembly type to be, based on manual curation? Which error types are each of these?

      6. Can the authors take contigs identified as inter-genome translocations from both simulated datasets and in the predicted real data and use BLASTP/BLASTN to demonstrate that these contigs are actually chimeric? Visually the degree and breadth of chimerism is critical to understanding how it affects our data analysis.

      Otherwise, I think that both the tool and the work demonstrated have quite a bit of potential. Thank you for this work.

    1. On 2019-09-10 00:30:16, user Holly Beale wrote:

      Congratulations on your paper. I really enjoyed it.

      A couple of notes: <br /> I'm working on something related in bulk RNA-Seq, and I also did subsetting of fastq with seqtk. The behavior wasn't exactly what I expected. If I used the same random seeds to take two subsets, one with one million reads and the other with two million reads, the second set included all the reads from the first set. I ended up using different random seeds for each subset.
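
      For readers who hit the same surprise: this isn't necessarily a bug. If a sampler assigns each read a deterministic, seed-derived score and keeps the N best-scoring reads, then subsets drawn with the same seed are nested by construction. A toy Python sketch of that idea (my own illustration, not seqtk's actual implementation):

      ```python
      import hashlib

      def sample_reads(read_ids, n, seed):
          """Toy seeded sampler: score each read deterministically from
          (seed, read id) and keep the n lowest-scoring reads."""
          score = lambda r: hashlib.sha256(f"{seed}:{r}".encode()).hexdigest()
          return set(sorted(read_ids, key=score)[:n])

      reads = [f"read{i}" for i in range(10_000)]
      small = sample_reads(reads, 1_000, seed=100)
      large = sample_reads(reads, 2_000, seed=100)
      print(small <= large)  # True: same seed => smaller subset nested in larger
      ```

      Using a different seed for each subset size, as described above, breaks that nesting.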

      I think I eventually got it, but I had trouble parsing figure 3. It might be easier to understand if you omitted 2/3 of the groups from each plot. You could include the full figures in the supplement.

    1. On 2019-09-09 17:34:33, user Alexander Jaffe wrote:

      Thanks for this contribution!

      I was surprised, however, to see that bacteria of the Candidate Phyla Radiation have been omitted in a study of reduced ribosomal repertoires. These organisms are characterized by unusual ribosome compositions (lacking L30, and in some cases also L9, L1, and more) as well as by reduced genome and cell size (see https://www.nature.com/arti... for more on this) and thus should be a good fit for your study.

      While CPR genomes have to date mostly been recovered from metagenomes, several genomes from co-cultures exist, particularly from the Saccharibacteria (https://www.pnas.org/conten... for example). Additionally, many CPR MAGs assembled from diverse environments reported by our group and others are complete or near complete (e.g. https://peerj.com/articles/....

      I'd consider adding in some of these genomes to broaden phylogenetic depth in your work and also to probe some unusual ribosome biology - I'm sure there's lots more to discover here!

    1. On 2019-09-08 06:18:22, user Jing Lu wrote:

      I compared the results generated by Kraken and CCMetagen and got more accurate results using CCMetagen. Thanks for providing a good tool for metagenomics. I found that the abundance values in the CCMetagen results are not integers. Can you explain this? Many thanks.

    1. On 2019-09-07 22:24:31, user Andy wrote:

      1) Intrinsic limitation of using DMNB-Sec to produce Sec-containing proteins. DMNB-Sec is bulky, incorporation of which at the Sec site of native Sec-containing proteins may interfere with protein folding because natural Sec itself is smaller in size and many Sec sites are buried, preventing its general use for producing endogenous selenoproteins.

      2) No activity data. While previous methods for incorporating Sec have demonstrated the activity of the resultant Sec-containing proteins using various biochemical assays, this work did not demonstrate the ACTIVITY of the generated Sec-containing protein. The isoTOP-ABPP method the authors used could just demonstrate that the Sec was accessible for reacting with iodoacetamide after photo-uncaging, yet it did not assay the activity of the MsrB1 protein. Similarly, colocalization results in Figure 3d did not prove the ACTIVITY of MsrB1 in cells either. If the activity of the protein cannot be proven in mammalian cell context, how do we know this method can generate biologically useful Sec-containing protein?

      3) Toxicity of DMNB-Sec. DMNB-Sec was used in low concentration: 12.5 – 100 uM. Does this suggest DMNB-Sec is toxic above 12.5 uM or above 100 uM?

      4) Extremely low incorporation efficiency. Figure 3a shows the incorporation efficiency is extremely low, possibly <1% compared to the Cys control. This incorporation efficiency is dramatically lower than other reported methods, and will not produce sufficient amount of selenoproteins for meaningful studies.

      5) Questionable MS data. MS data in Figure 2c and Figure 3b clearly show other peaks in addition to the main peak. The mass spectra have low resolution, the baseline is not visible, and the background signal is over-suppressed. The mass axis of the relevant region should be expanded (as done in ref. 22) to allow readers to check these peaks and determine whether other natural amino acids had been misincorporated or modifications to the Sec had occurred. For instance, after DMNB-Sec uncaging, their measured mass is 4 Da off the calculated value, which is way beyond MS accuracy for a convincing demonstration of Sec generation. In particular, photo treatment of selenoproteins can cause selenium elimination: e.g. reference 22 reported the generation of dehydroalanine upon photo-uncaging of DMNB-Sec in yeast. While using the same DMNB-Sec and tRNA/synthetase as reference 22, these authors claim that they did not find dehydroalanine in mammalian cells. However, their MS data are compressed and need careful inspection to check this discrepancy.

      6) Inaccurate and ungrounded description of previous work. A) How did the authors know that 0.2 mM is the NECESSARY concentration for ASec? What is the toxic concentration of DMNB-Sec, higher or lower than 0.2 mM? ASec is much less bulky than DMNB-Sec and closer to Sec in size, which will better fit the Sec site in Sec-containing proteins without interfering with protein folding. B) The authors also state that “decaging ASec requires treatment with toxic palladium species, which severely limits its applicability in live mammalian cells”. This is clearly not the case; palladium catalysts have been increasingly used in mammalian cells without toxicity concerns, as reviewed in (DOI: 10.1038/NCHEMBIO.2024, and recent examples: DOI: 10.1038/ncomms15906; J. Am. Chem. Soc. 2016, 138 (46), 15118-15121; doi: 10.1002/anie.201906545). C) “Selenoproteins can be produced using expressed protein ligation, but is limited to in vitro studies, and necessitates refolding of the protein” This statement is incorrect; reference 10 and the original paper (J. Am. Chem. Soc. 2017, 139 (9), 3430-3437: “This indicates that, after ligation, SelM folded spontaneously to adopt its native fold.”) showed that the preparation of selenoprotein M by Sec-mediated EPL does not require an additional refolding step.

      7) Lack of novelty. Using DMNB-Sec and associated tRNA/synthetase to produce Sec via photo-uncaging has been reported in yeast (ref. 22). A similar DMNB-Cys has been genetically incorporated and photo-uncaged in mammalian cells, neurons, and in vivo (Neuron, 2013, 80:358-370, which is not cited).

      8) Based on the way these authors describe other methods in the introduction, they probably should similarly conclude their work as the following: using DMNB-Sec and associated tRNA/synthetase reported in reference 22, we transferred DMNB-Sec incorporation from yeast to mammalian cells, which was similarly uncaged into Sec as reported in ref. 22. The incorporation efficiency is extremely low (<1%). We incorporated DMNB-Sec in eGFP and MsrB1 in mammalian cells, for which the ACTIVITY was not demonstrated. Our method is “restricted to a resilient model protein eGFP and a single protein MsrB1”, and thus “whether this incorporation system is sufficiently robust to express more sensitive endogenous selenoproteins is unclear”.

    1. On 2019-09-07 20:59:36, user Bill Ritchie wrote:

      I'm confused why you say IRFinder was developed for differential expression. It is not "Wong's IRFinder"; it is either Middleton's or Ritchie's.<br /> I am also confused why you say that the NBEAL2 profile is "pre-mRNA"-like. What do you mean by this and how did you test this? And why do you say you can't see retained introns? I can see them in the figure.<br /> If you look into the IRFinder papers you will see many of what you call "pre-mRNA" profiles that were found in cytoplasmic fractions and polyA-enriched. I would thus be very surprised if they are "pre-mRNA".

    1. On 2019-09-06 19:56:17, user Justin Taylor wrote:

      As mentioned in the article, many of the existing methods for integrating transcriptomic data and genome-scale models of metabolism rely on user-specified thresholds of gene expression, which may induce unwanted bias.

      It might be worth mentioning other methods that do not rely on user-specified thresholds of gene expression. The paper by Lee et al. would be a prime example.

      Improving metabolic flux predictions using absolute gene expression data<br /> https://bmcsystbiol.biomedc...

    1. On 2019-09-06 16:01:45, user Sara wrote:

      This is a great resource for the community. Will you be curating the counts/normalized counts as well as the summary stats? This would be an incredibly useful addition, especially if you are using standardized methods for counting.

    1. On 2019-09-06 15:17:49, user Sikter András dr. wrote:

      I want to cite this (unpublished) paper in a chapter of a scientific book. What do I have to do? Should I mention its DOI?

      Andras

    1. On 2019-09-06 10:16:34, user Bertrand Lynch wrote:

      It does not look fine to use a very outdated database such as ARDB, from 2009! The fact of using low identity cut-offs simply means that the obtained hits could, or could NOT, be an ARG. Not good methodology. I hope the reviewers find these very weak points and assess the paper as it should be assessed.

    1. On 2019-09-05 16:30:46, user Arlene wrote:

      The paper mentions an assumed likelihood of the a^yt variant giving a fawn phenotype.

      Those of us with tested a^yt/a^t dogs can attest to the phenotype of a^yt/a^t dogs being black and tan. I have alerted UCDavis, Paw Print Genetics, Vetgen, and Animal Genetics this is the case, and have been attempting to get the word out on this for a few years now.

      These are all a^yt/a^t dogs by test. The owners have provided permission for these photos to be used in the public domain. These are all Tibetan Spaniels where the allele appears to have come in through a tight foundation, by pedigree assessment.

      I have a collection of info on different dogs within different breeds where this allele has been identified. Please feel free to contact me for more info.

      https://uploads.disquscdn.c...

    1. On 2019-09-05 12:07:20, user Jakub Tomek wrote:

      This article was posted on Biorxiv to make it freely and simply available to everyone - future submission into a journal is currently not planned.

      I'll be very grateful for user feedback - substantial points will be reflected in potential future versions of the article.

    1. On 2019-09-05 01:12:03, user Rudy Mikšánek wrote:

      Wow, this is an impressive field experiment that investigates several different factors affecting yield and insect diversity. I can see this being an important piece of work as this crop is being considered for use here in the Upper Midwest.

      Two quick comments/questions:<br /> (1) Can you include the observed (x,y) data points in Figures 1, 3, 5, 6, and 7? The statistical model output is plotted nicely, but it is important to show the experimental data that are being used to fit the statistical model.

      (2) Also, did your statistical model account for spatial autocorrelation since landscape heterogeneity at 500 m is likely correlated with heterogeneity at 1000 m, etc.?

    1. On 2019-09-05 00:03:36, user Rudy Mikšánek wrote:

      Neat study! I don't really have any constructive feedback at this time, but I felt that it was necessary to at least suggest changing the title to "Which stick insects are the stickiest?"

    1. On 2019-09-04 20:06:16, user Martin Macek wrote:

      Nice piece of work!

      It would be great to see (at least in the supplementary) how other microclimatic variables performed in comparison to the mean temperature (namely maximum temperature, which otherwise explains understory communities better than the mean). And also GDD calculated for the day of flowering of each species.

      And instead of the simple aspect*slope(-1,0.5,1) index I would strongly recommend using the diurnal anisotropic heating index, defined as cos(202.5°-aspect)*tan(slope) (Böhner, J., and O. Antonić. 2009. Land-surface parameters specific to topo-climatology. Pages 195-226 in T. Hengl and H. U. Reuter, editors. Geomorphometry: concepts, software, applications. Elsevier, Amsterdam.)
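
      For anyone wanting to try it, the index as defined in that reference is straightforward to compute. A minimal sketch (the function name is mine):

      ```python
      import math

      def dah_index(aspect_deg, slope_deg):
          """Diurnal anisotropic heating index (Böhner & Antonić 2009):
          cos(202.5° - aspect) * tan(slope), with angles in degrees."""
          return math.cos(math.radians(202.5 - aspect_deg)) * math.tan(math.radians(slope_deg))

      # A slope facing SSW (aspect 202.5°) maximizes heating for a given slope angle:
      print(round(dah_index(202.5, 45.0), 3))  # 1.0
      # A flat cell has no aspect-driven heating contrast:
      print(dah_index(90.0, 0.0))  # 0.0
      ```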

    1. On 2019-09-02 21:05:28, user Patrick Sexton wrote:

      This is a potentially useful tool. However, the methods do not describe how authors, during manual curation of the data, assessed whether a reported biased agonism profile was sufficiently robust to include in the database. The literature is full of examples where lack of mechanistic understanding of quantitative pharmacology (including partial agonism, system reserve, log versus linear response measures etc) leads to misinterpretation of data in the biased agonism field. Without this, there is a risk that the database may add to misinformation rather than moving the field forward.<br /> Just my 2 cents worth....

    1. On 2019-09-02 17:06:06, user Oleh wrote:

      Great paper!

      Could you please elaborate on the analytical gel-filtration results? Why would the profiles of inactive states have major peaks at completely different retention volumes than the active-state peaks, with only minute peaks at the retention volumes of the components (OptoNBs and mCherry)? Were the components run separately under light or dark conditions? Do the OptoNBs exhibit different retention volumes in the dark and illuminated conditions?

      Thank you!

    1. On 2019-09-02 13:32:25, user Bastian Hornung wrote:

      I'm sorry to say, but parts of the reported microbiome (Figure 4) look essentially like a list of common contaminants, see https://www.ncbi.nlm.nih.go... or https://www.ncbi.nlm.nih.... .

      While some organisms could probably be there (like the Lactobacillus or Streptococcus), some of the others really indicate that they are not derived from a host-associated microbiome (Bradyrhizobium, Rhizobium, Caulobacter, Methylobacterium) and are common contaminants. I don't think this could be untangled without the proper use of controls (see https://www.ncbi.nlm.nih.go... )

    1. On 2019-09-02 08:53:55, user Mika Gustafsson wrote:

      Thanks for an interesting article, which is in many ways related to our 2014 article (Gustafsson et al., Genome Med 2014, PMC4064311), where we also proposed the same principle of shared and specific disease modules that could stratify patient responses. I recommend you read that as well. //Thanks Mika

    1. On 2019-09-02 08:45:11, user Marc Rübsam wrote:

      Greetings,<br /> great preprint. One question:<br /> Where do I find:

      The taxonomy assignment is available as supplementary data

      Cheers,<br /> Marc

    1. On 2019-09-01 09:45:12, user ER stressed wrote:

      The flow in the paper feels strange; it jumps between the SERCA2b interaction studies and the PERP studies (which are independent of the SERCA2b work).

      Perhaps it would progress more naturally by showing that PERP "has a heterogeneous distribution across the PM of un-stressed cells and is actively turned over by the lysosome," then that "PERP is upregulated following sustained starvation-induced autophagy, which precedes the onset of apoptosis," then that "ER stress stabilizes PERP at the PM," and finally that PERP interacts with SERCA2b and co-localises at ER-PM junctions.

      Other comments:<br /> "The findings highlight a novel crosstalk between pro-survival autophagy and pro-death apoptosis pathways and identify, for the first time, accumulation of an apoptosis effector to ER-PM junctions in response to ER stress"<br /> Why use "novel" and "for the first time" in the same sentence? Do you mean that the cross-talk is novel i.e. doesn't happen elsewhere or do you mean that it has been identified "for the first time", if it is the latter then change the phrasing.

      "Data from each biological replicate was assessed for variance within the respective group and only data with similar variance between groups was included in the statistical analysis."<br /> This statement is very vague and sounds dubious. Please be more precise.

      Similarly, as a general writing comment; you use significant interchangeably to mean large, important and unlikely to be a false positive (without defining a level where "significance" was reached). More importantly, most of the sentences only talk about the significance rather than the magnitude and p values are not reported in the results (only in figure legends!).

      Mean and SEM seem like a strange choice for these data. I don't think the readers are interested in your confidence of where the true mean lies, they are more interested in the distribution of the data. I recommend changing to SD, or, if you think a confidence measure is appropriate then switch to an easier to interpret metric like 95%CI.

      Figure 1:<br /> - the phrase "validated a consistent interaction" is an overstatement from Co-IP data. <br /> - In E, it is extremely hard to be convinced of any codistribution of signals between the yellow and red images. Single channel images should be in black and white for maximum resolution and fair comparison. I also recommend quantification rather than relying on this single image. <br /> - "close co-colocaliation" is either a tautology or describes a lack of co-distribution just that some of the signal is quite close but non overlapping. Is this why there is no quantification; as you have not detected colocalisation?<br /> - Based on these data alone, one would interpret that very little interaction occurs. This could be because the venus-PERP or mCherry SERCA2b doesn't function effectively, or that the HALO-PERP interaction is very minor and only occurs in solution rather than in cells.<br /> - On a similar note; the jump to SERCA2b seems abrupt. SERCA2a and c could also interact. I recommend confirming that SERCA2a and c are not expressed in these cells.<br /> - "highlighted a novel interaction" this doesn't make sense, unless you have shown that this interaction doesn't occur in other cell types. It is, at best, a new finding but the interaction, presumably, isn't novel.

      Figure 2<br /> A - missing quite a lot of the MIQE guidelines in terms of how the methods are written up and presented.<br /> How was GAPDH chosen as the only reference transcript/protein?<br /> B - very tight cropping on this blot, please allow at least 2 band widths (and include all uncropped blots as supplemental)<br /> C - the jump to HCT116 cells is quite abrupt in the text. <br /> The labelling of C, D and E could be better; it would help readers to follow this train of experiments if it was clear that C came from HCT116 cells whereas E from p53-/-. Indeed, to make the side-by-side comparison easier, I would recommend using the same time frames and showing the blots of C and E and graphs of C and D together, with the added p53 blot. <br /> All the "data not shown" should be in supplemental figures.

      3A "levels were highest" but no quantification shown. <br /> "The accumulation of ER-localized Venus-PERP was likely due to the BFA-induced inhibition of Golgi-PM trafficking and the retrograde transport of Golgi proteins into the ER." this is very much a discussion point and cannot be supported by a single image of a highly stressed cell. Certainly, one cannot conclude that these data "confirmed that PERP reaches the PM via the classical secretory pathway."<br /> 3B These look like different cells than 3A rather than just a different plane?<br /> Labelling the figure more completely would improve it - e.g. what is the yellow signal (which should, again, be white from your black and white detector).

      4B - why no LC3 blot on these samples?<br /> 5E - representative flow plots as supplemental figures? I would like to see how this has been quantified.

      6A what happened to the PERP blot?! Previously they have been very clean.<br /> 6B why the switch to GFP SERCA2b rather than mCherry?<br /> 6C again, the image presentation is poor and hard to believe the interpretation. Quantification missing.

      7 and again, interpreting these images requires a very willing observer!

      "This study has identified a novel crosstalk between the ER stress, autophagy and apoptosis pathways and has highlighted, for the first time, a mechanism of apoptosis regulation at ER-PM junctions."<br /> Again, the use of novel here (in addition to "for the first time"!) is strange. Is the implication that this only happens in these cells?

    1. On 2019-08-31 16:25:05, user Alexander Chamessian wrote:

      This is very exciting. Would the authors be willing to post the supplementary material and the recipes for the RAISIN-seq buffers?

    1. On 2019-08-31 05:38:48, user AbdulB1 wrote:

      The authors should do further research on the bacterial flora in the gut if they insist something is different there. They should also make probiotics and perform clinical trials.

    1. On 2019-08-30 13:10:32, user Guo-Liang Wang wrote:

      The blast R genes Pi2, Pi9 and Piz-t are allelic on chromosome 6. It is nearly impossible to pyramid these three genes in the same background. It is questionable whether Putra-1 contains all the 3 R genes. It is recommended to do more molecular and phenotypic analyses to confirm Putra-1's genotype.

    1. On 2019-08-29 16:43:25, user Bastian Hornung wrote:

      I'm sorry to say, but the reported microbiome (Figure 2B) looks essentially like a list of common contaminants, see https://www.ncbi.nlm.nih.go... or https://www.ncbi.nlm.nih.go... . While some organisms could probably be there (like the Staphylococcus or Streptococcus), some of the others really indicate that they are not derived from a host-associated microbiome (Delftia, Geobacillus, Aquabacterium), and I don't think this could be untangled without the proper use of controls (see https://www.ncbi.nlm.nih.go....

    1. On 2019-08-28 21:54:47, user Hanon Mcshea wrote:

      What about "evolve" or a different "e" word besides "enslave," to describe the third step of the eukaryogenesis process? "Evolve" would indicate the point at which Darwinian evolution begins to direct the process.

    1. On 2019-08-28 16:44:04, user Yu Yosean Wang wrote:

      Great work! Thank you for citing our papers. The immuno-SABER paper has now officially been published in Nature Biotechnology. It would be appreciated if you could update the reference list accordingly.

      Saka, S.K., Wang, Y., Kishi, J.Y., Zhu, A., Zeng, Y., Xie, W., Kirli, K., Yapp, C., Cicconet, M., Beliveau, B.J. and Lapan, S.W., 2019. Immuno-SABER enables highly multiplexed and amplified protein imaging in tissues. Nature biotechnology, pp.1-11.

      Also, I noticed that in the protocol you use a large amount of nonmodified CODEX oligos as blocking reagents. Is this step critical to eliminate nonspecific binding of DNA-conjugated antibodies? Do you worry the oligos themselves create nonspecific signals? Thank you!

      Good luck on the submission.

      best<br /> Yu

    1. On 2019-08-28 13:04:17, user Filipe wrote:

      Dear Nathan C. Medd and collaborators, congratulations on your study; it's really interesting. I'd like to point out only one little mistake in the figure on pg. 34 about the Mogami virus structure. According to the image, the glycoprotein signature is present on ORF 3 (with 685 AA), but, according to a quick BLAST analysis, this ORF represents a hypothetical nucleoprotein and the glycoprotein signature is present on ORF 1 (with 1157 AA), which makes sense in orientation when we compare the Mogami virus structure with the Shayang Fly Virus 1 structure (Glyco-VP2-Nucleo-RdRp).

      Again, congratulations on this study.

      All the best.

    1. On 2019-08-27 15:08:28, user Surendra wrote:

      Please note that the posted preprint is significantly different from the published version. However, the conclusions and significance of the study remain the same.

    1. On 2019-08-27 06:55:04, user Jubin Rodriguez wrote:

      Useful tool but why is the 'Issues' tab disabled on GitHub for MicroWineBar? This doesn't augur well for continued software development or for use of the tool by the wider scientific community.

    1. On 2019-08-26 22:19:33, user Sheyanne wrote:

      It is a really good study which can provide clues for finding good candidates. I am very curious about when it will be published? Then we can take a look again.

    1. On 2019-08-26 19:10:48, user Richard White III wrote:

      This pre-print doesn't acknowledge the previous work from which it came, back in 2017.<br /> https://peerj.com/preprints...

      If you can take a pre-print and code from one pre-print server, remove authors, and then not acknowledge the previous work with a citation, it puts the whole pre-printing process in question.

    1. On 2019-08-26 14:08:17, user taras Pasternak wrote:

      Great work! How specific was the SA effect? In our hands, 20 µM SA has a significant effect on the leaf, but higher concentrations give a rather non-specific effect.

    1. On 2019-08-26 13:42:30, user David Haberthür wrote:

      Dea and Matthias equally contributed to this manuscript. I'm thus the second author of this manuscript (and did most of the analysis). Ask me anything if you'd like to know something about the specifics!

    1. On 2019-08-26 07:21:35, user Midhun K Madhu wrote:

      Hi,

      This is just a comment about the introduction section of the article. At the bottom of page 4, it is written that:

      “The negative charge in PG molecules favored interaction with positively-charged residues in the intracellular loop 3 (ICL3) and intracellular end of transmembrane helix 6 (TM6). This stabilized the outward movement of TM6 and hence the active state of the b2AR.”

      I wonder how that matches with the first sentence of page 5:

      “In contrast, lipids with negatively-charged PE headgroup formed favorable interactions with the positively-charged residues in the TM6 and stabilized active state of the b2AR”

      The overall head group of PE is neutral, and hence the said negative-positive interaction with the protein may be unfavorable. Although both sentences convey that the active state is stabilized, the "In contrast" at the start of the latter sentence suggests that the authors want to convey something opposite in the second sentence.

      Please consider this a point of confusion while reading.

    1. On 2019-08-25 23:33:52, user Nicola Harris wrote:

      Thanks to @JohannesUMayer and @Malaghan_Inst for their hard work optimizing this approach, it will undoubtedly benefit any laboratory, (including my own), studying intestinal helminth infection.

    1. On 2019-08-24 12:50:04, user WJR wrote:

      Regarding the paper, "Sex solves Haldane's Dilemma" (currently unpublished), by Donal A. Hickey and G. Brian Golding. The following comments concern the paper and its accompanying computer simulation. These comments arise primarily from reading the software code, and may be less obvious from reading the paper.

      SUMMARY:

      The paper needs clarifications and expanded discussion on key points. (1) The simulation is biologically unrealistic in ways that lend to the paper's conclusions. (2) The simulation artificially (and completely) removes the advantages of asexuality, and also artificially decreases the disadvantages of sexuality. (3) The paper thereby reaches the (questionable) conclusion that sex provides faster evolution. The paper will need to clarify these matters, if it is to be successful.

      (a) SELECTIVE ADVANTAGE:

      The paper specifies that the beneficial alleles have a selective advantage of 0.02. However, the ambiguity of that wording might mislead readers. The authors ought to clarify explicitly that they mean a homozygote will have an advantage of 0.04.

      That is significant here, because that figure is much higher (between 4 and 40 times higher) than is typical of the textbooks and papers in this field. Such a high selective advantage will need justification, especially since it is used for each of 100 separate alleles simultaneously.

      (b) STARTING FREQUENCY:

      The simulation begins with a cloned population of identical genomes, and initializes these by randomly creating beneficial alleles at each locus. The starting frequency of these is set to 0.05, (which is 1 out of 20). In other words, each individual, at each diploid locus, has nearly **a ten-percent chance** of possessing a beneficial allele. And this high starting frequency occurs at each of 100 loci simultaneously. This unusually favorable starting situation needs more justification in the paper.
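
      A quick check of that figure (a sketch, assuming Hardy-Weinberg proportions at the stated starting frequency):

      ```python
      # Probability that a diploid individual carries at least one copy
      # of the beneficial allele at a given locus, when the allele
      # frequency starts at q = 0.05.
      q = 0.05
      p_at_least_one = 1 - (1 - q) ** 2  # = 0.0975, nearly a ten-percent chance
      print(p_at_least_one)
      ```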

      (c) RANDOM GENETIC DRIFT:

      Sexuality uses a randomized recombination of alleles, while asexuality does not. Because of that, a sexual species experiences more random genetic drift than does an otherwise equivalent asexual species. And this excess genetic drift often eliminates beneficial alleles. These tend to be randomly eliminated when they are yet few in number. In a sexual population, this random genetic drift is like extra genetic 'noise', that can push a rare beneficial allele into extinction.

      In an extremely large sexual population, a newly-minted beneficial allele will fix with probability of only about 2s. For example, an allele with a typical selection coefficient of s=0.01 will be eliminated 98 times out of a hundred. (For s=0.001, it is eliminated 998 times out of a thousand.) The situation is worse for smaller population sizes, because genetic drift is stronger there.
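
      The 2s rule of thumb can be checked against Kimura's diffusion approximation (a sketch; the formula is the standard one, and the population size used here is the simulation's 100,000):

      ```python
      import math

      def fixation_prob(s, N, p0=None):
          """Kimura's fixation probability for a beneficial allele in a
          diploid Wright-Fisher population with additive selection."""
          if p0 is None:
              p0 = 1 / (2 * N)  # a single new mutant copy
          return (1 - math.exp(-4 * N * s * p0)) / (1 - math.exp(-4 * N * s))

      # For s = 0.01 in a large population, fixation probability is close
      # to 2s = 0.02, so the allele is lost roughly 98 times out of 100.
      p = fixation_prob(s=0.01, N=100_000)
      print(p)  # ~0.0198
      ```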

      In the above-described way, genetic drift is a disadvantage to sex. But the simulation minimizes that disadvantage by using a large population size (=100,000), together with high initial frequency (=0.05), together with high selection coefficients (s=0.04). This setup virtually guarantees that none of the beneficial alleles will be lost through this genetic drift. Indeed that is the case, as seen in the posted results of the simulation. This artificially benefits the sexual population in the simulation.

      (d) MUTATION RATE:

      The simulation uses an unusual manner of mutation, where harmful mutations are entirely disallowed. Instead, only a specific type of back-mutation is allowed, in which a beneficial allele reverts to the original allele (which has a multiplicative fitness contribution of 1.0). In this way, the simulation artificially eliminates the problem of error catastrophe (also known as mutational meltdown), since fitness is automatically never allowed to fall below 1.0.

      Also, the back-mutations occur at an extremely low rate, given by:

      Mutation_rate_per_progeny = MUT_RATE * number_of_loci * 2 * p

      where: <br /> MUT_RATE=1.0e-08, given as a mutation rate per gametic locus<br /> number_of_loci = {1, 2, 4, or 100}, <br /> p is the frequency of the beneficial alleles (which starts near 0.05 and ends near 1.0), <br /> the "2" is because each progeny is a diploid.

      The factor 'p' arises because the back-mutation merely converts an existing beneficial allele back to the original allele. Due to that handling, the mutation rate varies throughout the simulation; it starts low (for p=0.05), and slowly increases by a factor of twenty (for p=1.0). This varying mutation rate is peculiar.
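
      Plugging in the stated values shows the factor-of-twenty drift in the rate (a sketch of the calculation, not the authors' code):

      ```python
      MUT_RATE = 1.0e-08  # per gametic locus, as in the simulation

      def per_progeny_rate(number_of_loci, p):
          """Back-mutation rate per diploid progeny, per the paper's formula."""
          return MUT_RATE * number_of_loci * 2 * p

      start = per_progeny_rate(100, p=0.05)  # 1e-7 per progeny
      end   = per_progeny_rate(100, p=1.0)   # 2e-6 per progeny
      print(end / start)  # the rate grows by a factor of 20 over the run
      ```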

      These back-mutations are the *only* mutations throughout the simulation. (Note: The simulation is hard-coded for 400 generations, with a population size of 100,000 progeny each generation.) Yet the mutation rate is so low that this entire simulation will sometimes experience not even one mutation. This low rate of mutation is trivial, and can be ignored.

      This must be compared with recent measurements of the human mutation rate, which is around 100 new mutations per progeny. That is over 50 million times higher than the highest rate employed in the simulation. The paper needs much more justification of its handling of harmful mutation. An explicit attempt should be made. [Note: This issue runs far deeper than it first appears.]
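
      The arithmetic behind those two claims can be checked directly (a sketch, using the hard-coded 400 generations and 100,000 progeny; the one-locus case is taken at its maximum rate, so the expected count is an upper bound):

      ```python
      import math

      MUT_RATE = 1.0e-08
      GENERATIONS = 400
      POP_SIZE = 100_000

      # Upper bound on expected back-mutations for the one-locus model,
      # taking p at its maximum of 1.0 throughout:
      rate_1_locus = MUT_RATE * 1 * 2 * 1.0             # 2e-8 per progeny
      expected = rate_1_locus * POP_SIZE * GENERATIONS  # <= 0.8 per entire run
      prob_zero = math.exp(-expected)  # Poisson chance the run sees no mutation
      print(expected, prob_zero)       # 0.8, ~0.45

      # Highest rate used anywhere (100 loci, p = 1.0) versus the human rate:
      highest = MUT_RATE * 100 * 2 * 1.0  # 2e-6 per progeny
      print(100 / highest)                # 5e7: over 50 million times lower
      ```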

      The remaining items (below) address the simulation's handling of sexuality versus asexuality.

      (e) FECUNDITY and REPRODUCTION RATE:

      In the simulation of sexual reproduction, the FECUNDITY is set to 2. That is, for males the FECUNDITY is 2, and for females the FECUNDITY is 2. The authors ought to remind readers that such a female would need to produce 4 progeny. This arrangement correctly represents the fact that half the female's reproduction goes toward reproducing her mate's genetic material.

      However, in the simulation of asexuality, the FECUNDITY is likewise set to 2, which is a mistake. It should be 4. That way, the females produce 4 progeny in both cases (sexual versus asexual). We must compare apples to apples.

      Asexuality is twice as efficient at transmitting its genetic material into the next generation. But the simulation artificially cuts the asexual reproduction rate in half, thereby disallowing this advantage of asexuality.

      (f) The SLOWING-EFFECT versus STARTING FREQUENCY:

      A human-like population has around 23 chromosome pairs. There is no linkage between alleles on different chromosomes, and such alleles segregate independently. (Also, a human-like population has a somewhat higher recombination rate than used in the simulation.) Because of those things, a collection of, say, 100 different alleles (randomly distributed across the genome) would be expected to show little or no linkage between them. To a first approximation, they would segregate independently. And this produces a well-known disadvantage of sexual reproduction. That is, yes, sex can bring favored alleles together into one progeny, but it tears them apart just as effectively. (Some theorists describe sexual reproduction as a genetic shredding machine, each generation shredding and re-mixing the genomes.)

      By 'tearing apart' the beneficial combinations of alleles, sex slows evolution. This slowing-effect is strongest when the beneficial alleles are yet rare, at low frequencies. Then, they can only fleetingly exert their combined selective effect, before sexual reproduction separates them again. This is all standard theory.

      This slowing-effect doesn't happen in asexual populations. Once a beneficial combination of alleles is obtained, it is not shredded or separated. Rather, it is inherited, intact, into the next generations.

      This slowing-effect ordinarily places a sexual population at a disadvantage. But the simulation minimizes that disadvantage by starting the beneficial alleles at an extraordinarily high frequency (=0.05), thereby artificially avoiding the worst of the slowing-effect.

      (g) EPISTASIS:

      The above-described slowing-effect is even stronger when there is epistasis. (Epistasis occurs when a group of alleles have a combined selective effect that is much stronger than the sum of their effects taken individually.)

      And the simulation employs strong epistasis. (The epistasis in this simulation occurs through its use of a multiplicative-fitness model with high selection coefficients over many loci.)

      The evolutionary genetics literature regards the following as a robust and firm result: Sex-with-epistasis makes evolution slower than asexuality-with-epistasis. So how does the simulation minimize this slowing-effect? See below.

      (h) CHROMOSOME NUMBER and RECOMBINATION RATE:

      The paper seeks to challenge that prevailing view and show that sex speeds evolution, aiming to prove it via simulation. Unfortunately, the simulation attempts it by artificially decreasing one of the classic disadvantages of sex. It does that by reducing the chromosome number to 1, (and also by slightly reducing the recombination rate). This allows the substituting alleles to (unrealistically) experience linkages that would be unexpected in a human-like population. In the simulation, the substituting alleles are all on *one* chromosome; with various groupings effectively linked together as one; transmitted together into progeny as one; exerting their combined selective effect as one; generation after generation. And this situation makes them substitute faster. In other words, the simulation artificially increases the speed under sexuality by mimicking asexuality.

      For this simulation to effectively challenge the prevailing view and resolve this question, a more life-like chromosome number would be needed, (say, 23 to 25). This would be a reasonably simple change to the software. [For example, take the simulation's model with 4 loci, and increase it to 25 chromosomes. Easier still, just let all the alleles segregate independently. There would still be 100 alleles, and the computer run-time would be about the same.]
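
      The independent-segregation variant really is a small change; a minimal sketch of free recombination (hypothetical function and variable names, not the authors' code):

      ```python
      import random

      def gamete(parent, rng=random):
          """Free recombination: each diploid locus contributes one of its
          two alleles to the gamete, independently of every other locus.
          `parent` is a list of (allele_a, allele_b) pairs, one per locus."""
          return [pair[rng.randrange(2)] for pair in parent]

      # A diploid parent heterozygous ('B' beneficial, 'b' original) at 3 loci:
      parent = [('B', 'b'), ('B', 'b'), ('B', 'b')]
      print(gamete(parent))  # e.g. ['B', 'b', 'B'] -- any of 8 combinations
      ```

      This is O(number of loci) per gamete, the same cost as the linked single-chromosome version, which is why the run-time would be about the same.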

      (i) THE HORSE RACE:

      After initializing the simulation, no further beneficial alleles are added throughout the duration. You can think of this as lining up many race horses together at a starting gate, then after the start, no further horses are added to the race. In the simulation, (with all the horses lined up at the starting gate), all the beneficial alleles are guaranteed to eventually join-up together within the sexual individuals. But that is forbidden in an asexual population. That is the advantage of sexuality.

      But that setup artificially disallows a major advantage of asexuality. That is, new horses (i.e., new beneficial alleles) are added to the race throughout time, continuously, through mutation. Then an asexual species can more rapidly acquire those. How? As mentioned above, an asexual female's genome effectively has double the reproduction rate of its sexual peers. This allows a fit asexual female to more rapidly increase its sub-population size, and thereby (through having a larger size) more rapidly 'receive' its next beneficial mutation. (For example, if a sub-population is ten times larger, then that group receives its next beneficial mutation ten times sooner, and then the cycle begins anew.) This real advantage of asexuality is explicitly disallowed in the simulation.
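
      The waiting-time arithmetic in that example is simple (a sketch; the beneficial mutation rate u is an assumed value, chosen only for illustration):

      ```python
      # Expected waiting time (in generations) until the next beneficial
      # mutation arises somewhere in a sub-population of size n, if each
      # progeny receives one with probability u per generation.
      def expected_wait(n, u):
          return 1.0 / (n * u)

      u = 1e-6  # illustrative beneficial mutation rate per progeny
      # A sub-population ten times larger waits one tenth as long:
      print(expected_wait(1_000, u) / expected_wait(10_000, u))  # 10.0
      ```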

      That fact undermines the legitimacy of the simulation for comparing sexual and asexual populations. Fixing this would require substantial alterations to the simulation and paper.

      CONCLUSION:

      There are at least eight distinct ways this simulation is biologically unrealistic, and these give the uncanny appearance of having been tuned to support the authors' conclusions. That is an undesirable result, as we all want a simulation we can rely on, and believe in. I encourage the authors to continue their work (with software upgrades and such), as I believe it can lead to a useful research tool.

    1. On 2019-08-23 18:40:05, user IJ wrote:

      Interesting manuscript!

      However, I would like to point out a minor error. The manuscript incorrectly states that Li and Zhang have disputed the results of Jungreis et al. "Drosophila melanogaster has been shown to have significant functional (“programmed”) readthrough (Jungreis, et al. 2011). While this is disputed (Li and Zhang 2019)..." Li and Zhang found evidence that the readthrough extensions of most of the 307 Drosophila genes found to undergo readthrough via ribosome profiling by Dunn et al. are non-adaptive. However, this set has very little overlap (only 43 genes) with the 283 genes found by Jungreis et al. to show evolutionary signatures of adaptive readthrough. While Li and Zhang have found evidence that most stop codon leakage is non-adaptive, they do not dispute that it can be adaptive for some genes. They state, "That most read-through events are nonadaptive does not preclude the possibility that a small proportion of such events have been co-opted in evolution for certain functions." Thus, Li and Zhang 2019 did not dispute the results of Jungreis et al.

      On a related note, the Dunn et al. 2013 ribosome profiling experiments on readthrough in Drosophila and yeast seem very relevant to your manuscript, and you might consider discussing, or at least citing, their work.

    1. On 2019-08-23 06:39:57, user Javier Gonzalez wrote:

      Our latest paper presents evidence that exercise performed in the fasted- versus fed-state increases intramuscular and whole-body lipid use, and translates into increased muscle adaptation and insulin sensitivity when regularly performed over 6 weeks.

      Exercising before vs after breakfast could, therefore, be a strategy to increase the health benefits of exercise without increasing the intensity or duration of exercise, or the perception of effort.