On 2021-07-01 18:18:58, user vinu arumugham wrote:
You don't seem to have covered T cell homing. T cells induced by injected vaccines will home to the skin. T cells induced by infection will home to the lungs.
On 2021-07-02 23:04:45, user Ali Salehinejad, PhD wrote:
The peer-reviewed published article (open access) of this preprint can be found below:
https://www.sciencedirect.c...
On 2021-07-03 11:55:18, user Todd Gothard wrote:
This work could be advanced by elaborating the relation to what is referred to as contact rate elsewhere. Heesterbeek "The saturating contact rate in marriage and epidemic models" (J Math Biol 31, 1993) is related.
On 2021-07-04 08:41:03, user DainHendrix wrote:
The JCVI seem to be using this paper as the sole source for justifying an 8-week gap for Pfizer in younger cohorts, despite the fact that the sample for this paper is 172 people aged 80 or over. That is not by any stretch a fair representation of the population. It would be hard to argue that it is even possible to get a fair representation of genders, ages and ethnicities with a sample of 172 alone.
With this in mind, what response do the authors give to the fact that this paper is being used to justify the current 8-week policy for second doses of Pfizer and Moderna, when this paper does not make any attempt to back up that justification for younger cohorts, especially when the manufacturer and WHO recommend a 21-28 day gap between doses?
On 2021-07-04 13:24:54, user Matthias Maiwald wrote:
Our article has now appeared here:
Wan WY, Thoon KC, Loo LH, Chan KS, Oon LLE, Ramasamy A, Maiwald M. Trends in Respiratory Virus Infections During the COVID-19 Pandemic in Singapore, 2020. JAMA Netw Open. 2021 Jun 1;4(6):e2115973. doi: 10.1001/jamanetworkopen.2021.15973.
https://jamanetwork.com/jou...
On 2021-07-05 19:49:31, user Shmuel wrote:
Main rationale for doxy... 1) inhibits matrix metalloproteinases which are central to ARDS/vasculitides. 2) doxycycline is an anti-oxidant, and oxidative stress is another major component relating to progression of ARDS/cytokine storm. 3) doxycycline has known antiviral effects in vitro.
On 2021-07-06 10:46:11, user Ravi P wrote:
Asymptomatic efficacy of 60% by PCR Testing, Great Find
On 2021-07-09 12:15:54, user disqus_EnN4OEtF9s wrote:
Numbers for asymptomatic COVID-19 cases analyzed for efficacy don't add up. The total is said to be 67, while there are 33 in the placebo group and 13 in the COVAXIN group. https://twitter.com/das_see...
On 2021-07-09 08:37:24, user Md Anwarul Karim Mijan wrote:
Peer-reviewed published version can be found here:
https://www.jceionline.org/...
On 2021-07-13 02:04:31, user Matt Jolley wrote:
Thank you for the data collection. I hope you continue the study for some months. The infection-acquired immunity indicated here may be lower for other Covid variants.
Delving into recent UK data on the Delta variant: 285 sequence-confirmed reinfections as of May 31, out of 22,571 sequences by sample date. For Delta, prior-infection immunity appears similar to two-shot vaccination immunity. Taking just the last two or three months of your data, if infection-acquired immunity similar to vaccinated immunity were tested, the smaller previously-infected unvaccinated population would be expected to have 1.2 or so cases, compared with the 15 from the larger vaccinated population. Hence the very large confidence interval reported.
I prefer seeing person-days used, as in the Haas et al. paper: https://www.thelancet.com/j...
How many cases of reinfection, if any, occurred within the 90-day post-positive-sample period? In Figure 3, the sums of previously infected and not previously infected both monotonically decrease until increasing in the last column. It would be better to note the number of persons who have passed the 90-day interval since prior infection, as this number would be increasing, especially around the beginning of the study. The UK study had a vast majority of possible reinfections sampled just after that 90-day interval.
On 2021-07-15 21:52:38, user Brian Mowrey wrote:
Results for the not-previously-infected group are barely presented/summarized. Without a calendar of absolute case counts, it is very difficult to get an idea of the real infection rate. I gave it a shot, using the relative plot in Figure 3. More at blog link.
Not previously PCR-confirmed-infected infection rate: 2,154 / 49,652 = 4.3%
- Completely Vaccinated infection rate: 15 / 28,836 = 0.05%
- Unvaccinated / Incompletely Vaccinated infection rate: 2,139 / 49,652 = 4.3%
- Estimated Day 0-80 U/IV infection rate in 150-day units: (1,620 / 49,652) × (15/8) = 6.1%
- Estimated Day 80-150 U/IV infection rate in 150-day units: (529 / 21,332) × (15/7) = 5.3%
- Estimated Day 0-150 U/IV real infection rate: (1,620 / 49,652) + (529 / 21,332) = 5.7%
https://unglossed.substack....
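The arithmetic in the comment above can be reproduced directly; a quick sketch (the counts are the commenter's own readings from the relative plot in Figure 3, not official tallies):

```python
# Re-deriving the back-of-envelope rates from the comment above.
# All counts were read off Figure 3 by the commenter; they are estimates.
not_prev_cases, not_prev_n = 2_154, 49_652
vaccinated_cases, vaccinated_n = 15, 28_836

print(f"{not_prev_cases / not_prev_n:.1%}")    # not-previously-infected rate, ~4.3%
print(f"{vaccinated_cases / vaccinated_n:.2%}")  # completely vaccinated rate, ~0.05%

# Scaling the two partial observation windows to a common 150-day unit:
day0_80 = (1_620 / 49_652) * (15 / 8)      # ~6.1%
day80_150 = (529 / 21_332) * (15 / 7)      # ~5.3%
combined = 1_620 / 49_652 + 529 / 21_332   # ~5.7% over the full period
print(f"{day0_80:.1%} {day80_150:.1%} {combined:.1%}")
```

Under the commenter's own assumptions the figures check out; the uncertainty lies in the counts read from the plot, not the arithmetic.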
On 2021-07-17 21:41:27, user David Timmons wrote:
Do we have an estimate of when this article will be peer reviewed and move out of the preprint shadow? Many skeptics ignore anything listed as a preprint, no matter how much work has been done. Including those unvaccinated who test positive for Covid-19 antibodies would put the US population with some form of immunity at about 75%. That means our vulnerable population (unvaccinated with no prior Covid-19 infection) is 25% or less.
On 2022-01-16 16:31:04, user holder66 wrote:
Article is now published; PubMed PMID: 35028662.
On 2022-01-23 11:48:37, user Michal wrote:
So this study took place between Dec 16, 2020 and May 2021 (5 months). Is there any newer study covering possibility of reinfection with longer period of time included? I relied on this pretty much, believing natural immunity should last longer, but here I am - first infection 20th April 2021 and now 22nd January 2022 and tested positive, third day in bed, wondering if anything would be different if I was vaccinated. After first infection the complications were as follows: depression (2 weeks), after waking up - feeling of liquid in lungs (1 month), short heart palpitations (2 months, every day), brain fog (up until now). 33 years old male, other than that I've always been as right as rain, so just felt like sharing, maybe there's more people like me.
On 2021-07-13 14:29:41, user Larry berube wrote:
So I don't see where they checked the vaccine validity in India. https://www.cnn.com/2021/07...
On 2021-07-13 23:50:52, user Pedro Viana wrote:
Now published on Epilepsia: https://doi.org/10.1111/epi...
On 2022-01-27 14:04:35, user disqus_UJiE4jrszi wrote:
One pre-exposure prophylaxis RCT (McKinnon et al.) is missing.
On 2021-07-16 05:35:56, user Altamir Gomes Bispo Junior wrote:
Hi, I'm one of the authors of [49].
I will make the following suggestions:
1- I see that medical doctors in several countries (India, China, South Korea etc.) are repurposing drugs and nutraceuticals and they are making huge strides on COVID-19 treatment guidelines. It will for sure be interesting to inform the general public about these possibilities, even if these possibilities are not currently acknowledged by organizations such as the WHO.
2- Many of the current COVID-19 vaccines are not sterilizing and thus cannot block transmission efficiently. Virus spread may vary due to different vaccine platforms.
3- Vaccines for respiratory viruses are known to have lower immunogenicity in older age cohorts, and their protection against infection and symptomatic infection wanes over time. This could be added to the model.
On 2021-07-18 07:09:42, user ndk wrote:
Note the methods and materials section closely, as the entire study period predates the introduction of B.1.617.2.
On 2021-07-21 09:05:50, user haowen guan wrote:
I have a few questions about this guideline:
1. Could I apply this guideline to other genes or panels?
2. Is there any chance that I can get the code for the in silico prediction of splicing effects in LDLR?
On 2021-07-22 17:54:22, user John Aach wrote:
I wonder if the authors of this interesting study might comment on two questions that come to mind: (1) There seems to be no information on whether / how many of the subjects in the 2021 delta cluster had been vaccinated or previously infected. It could be very valuable to know if viral load differed for subjects that were naive to SARS-Cov-2 vs. previously-infected / vaccinated. Was there a reason this wasn't done, or was this tried and found inconclusive? (2) This compares CT data derived from oropharyngeal swabs used and analyzed from the 2021 delta cluster vs. CT numbers derived from swabs used in the 2020 outbreak. Can it be assured that swabs, sample gathering, and analysis protocols used in the 2020 outbreak are sufficiently comparable to those used > 1 year later in the 2021 delta cluster, to ensure that CT numbers don't differ due to batch effects or differences in materials and protocols?
On 2021-07-27 10:59:07, user JustinReilly wrote:
My comment submitted 7/27/2021:
The following letter from Tess Lawrie et al. strongly rebuts this review by Roman et al. I highly recommend reading the letter as it is succinct and presents damning points:
“With misreporting of source data, highly selective study inclusion, ‘cherry picking’ of data within included studies, and conclusions that do not follow from the evidence, this article amounts to disinformation... We respectfully request investigation, and retraction of the article as it stands.”
I join the letter’s signatories in calling for swift retraction. Thank you for your consideration.
On 2021-07-30 08:33:39, user Rob Leeson wrote:
The most important end point is death. As there is no pre-hospital treatment for Covid in the UK, and the most at-risk group was the over-65s, surely a cohort of over-65s split 50-50 with placebo would have been more realistic. This looks a bit like the Tamiflu studies, where there was a shorter time to recovery but NO protection against flu progressing to pneumonia and death.
NHS England removed the original link early 2021.
On 2021-08-02 12:01:22, user ingokeck wrote:
Dear authors, thanks for putting this interesting data up for discussion. May I propose to change the analysis from Ct values and give median tissue culture infectious dose (TCID50)/mL instead? This would be much more helpful for interpreting the data, as it is obvious that for Ct values higher than 25-28 one would need an implausibly large amount of the sample fluid to infect another person. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7454307/ gives the example of Ct 33 corresponding to 0.007 TCID50/mL, i.e. 142 mL of the patient sample would be needed to infect 50% of cell culture samples.
It should also be noted that from cell culture experiments it is known that high RNA counts after a few days in unvaccinated patients no longer correlate with infectious virus, and thus cannot reliably be used to assess infectiousness. See https://www.nature.com/articles/s41467-020-20568-4
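The Ct-to-volume conversion above can be sketched numerically. This is a back-of-envelope illustration only: it anchors on the single published data point the comment cites (Ct 33 ≈ 0.007 TCID50/mL) and assumes ideal PCR efficiency (titre roughly doubles per one-cycle drop in Ct), which real assays only approximate.

```python
# Illustrative Ct -> infectious-titre conversion. ASSUMPTIONS: perfect PCR
# efficiency (2x per cycle) and the single anchor point cited above
# (Ct 33 ~ 0.007 TCID50/mL). Real calibration curves differ per assay.
ANCHOR_CT = 33
ANCHOR_TITRE = 0.007  # TCID50 per mL at the anchor Ct

def titre_from_ct(ct: float) -> float:
    """Estimated TCID50/mL for a given Ct, assuming 2x change per cycle."""
    return ANCHOR_TITRE * 2.0 ** (ANCHOR_CT - ct)

def ml_per_tcid50(ct: float) -> float:
    """Sample volume (mL) containing one median tissue-culture infectious dose."""
    return 1.0 / titre_from_ct(ct)

print(round(ml_per_tcid50(33), 1))  # ~142.9 mL, matching the cited example
print(round(ml_per_tcid50(25), 2))  # far less volume needed at lower Ct
```

The point of the exercise is the orders-of-magnitude gap: under these assumptions, each 3.3-cycle increase in Ct means roughly 10x more sample volume per infectious dose.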
On 2021-09-12 10:37:13, user 4qmmt wrote:
The level of protection and durability of immunity derived from an immune response to natural infection versus that derived from an immune response to a narrow target like the Spike protein seems perfectly understandable and expected.
Both are natural immune system responses. The only difference is the target. There is no "magic" to the vaccine. Thus, I would be very interested if someone could explain how a person's immune response to a single element of the virus can possibly be better than an immune response in that same person to the whole virus, which includes that same single target.
On 2021-09-14 20:36:13, user Dedo v. Krosigk wrote:
Isn't there a very important variable missing? During the follow-up period of June 1 to August 14, 2021, the incidence rose from nearly zero to over 400. But the incidence is not part of the described covariates. So the results of the study are only meaningful if the mean incidence (corresponding to the risk of infection) at the date of infection was comparable between the groups of previously infected and vaccinated.
On 2021-08-27 19:26:07, user Jeremy R. Hammond wrote:
I am confused by the Model 3 analysis. The authors state that they found a significant 0.53-fold decreased risk for reinfection for "those who were both previously infected and received a single dose of the vaccine" compared to those who were previously infected and remained unvaccinated.
However, then they say they conducted a sub-analysis limiting subjects to those whose single dose of vaccine was "administered AFTER the positive RT-PCR test", which "represented 81% of the previously-infected-and-vaccinated study group." This analysis did NOT find a statistically significant decreased risk of reinfection among the previously-infected-and-vaccinated group.
Do I understand correctly, then, that in the main analysis, the immune systems of nearly 20% of "those who were both previously infected and received a single dose of the vaccine" were primed not with infection but with vaccination?
In other words, unless I am misunderstanding, they only saw a significant "benefit" for previously-infected-and-vaccinated individuals when the immunologic priming was with vaccination, with the significance being lost when narrowing the analysis to a true comparison between people whose immune systems were primed by infection. Which might suggest a phenomenon of original antigenic sin rather than a benefit of vaccination for those with preexisting natural immunity. The benefit in this case was not derived from vaccination after infection but from infection after vaccination.
Further, the authors state explicitly that they "could not demonstrate significance" for individuals with prior infection who SUBSEQUENTLY received a single dose of vaccine, which again suggests that their main analysis was NOT a comparison of people who were vaccinated after having acquired natural immunity and people who had natural immunity but were never vaccinated. This comparison was only made in the sub-analysis that did not reach statistical significance.
Confirmation or clarification from the authors would be appreciated.
On 2021-08-28 15:08:52, user Drago Varsas wrote:
"Individuals who were both previously infected with SARS-CoV-2 and given a single dose of the vaccine gained additional protection against the Delta variant." Actually not. You get a narrow-spectrum protection from the mRNA vaccines versus broad-spectrum immunity via your innate immune system. The opposite is true. Covid vaccines weaken your innate immune system.
On 2021-08-29 17:23:02, user Paula wrote:
I have a question about the injection experiment for SARS-CoV-2 that is ongoing. What if the experiment includes a placebo group in addition to those who have received an actual shot? Wouldn't that understate the adverse incidence rate too? The last EUA by the CDC for the Pfizer shots made an oblique reference to a placebo group, which may indicate that there actually is a double-blind study going on right now, ongoing since the inception of the mass experiment.
On 2021-08-30 22:11:02, user Emmanouil Magiorkinis wrote:
This study is a retrospective observational study trying to answer whether natural COVID-19 infection provides better immunity than vaccination. The problem with such retrospective studies in infectious diseases is that they cannot eliminate the differences between the social networks of the two arms of the study. Social networks are important in the spread of infectious diseases. People infected by COVID-19 may have contracted it from their social surroundings, and vice versa; in that case those people have an extra immunity firewall, which could explain the results. Moreover, natural immunity to this viral disease may be connected with long-term effects such as long COVID, which by default rules out simply letting people contract the virus because natural immunity may be better.
On 2021-08-31 03:10:10, user Victor Lin wrote:
This study does not factor in data for the severity and symptoms of disease in the cohort that was naturally infected. Surely the varying severities of disease and symptoms would affect the level of natural immunity conferred on these people.
Vaccination is a uniform dose. Natural infection is not.
On 2021-08-31 13:17:14, user Lardo wrote:
Even if we assume that the results are accurate and natural immunity provides stronger protection than vaccines, in order to gain natural immunity one has to survive the COVID-19 infection, correct? If so, the question is: is the risk of complications from COVID-19 greater than the risk that comes with getting the vaccine? Since the study doesn't address it, I personally see no point in it whatsoever. I don't care if natural immunity is stronger, since I'd rather not get COVID-19 to begin with.
On 2021-09-03 04:50:38, user Hucello Chuyucello, PhD wrote:
It looks like an important factor is missing: where is the interaction between vaccination and the presence of comorbidity?
On 2021-09-06 10:07:40, user Erwin Stark wrote:
As the group of previously infected consists only of survivors, there may be a selection bias excluding those with weak immunity.
On 2021-09-06 18:10:49, user michael gula wrote:
Study of 673,000 fully vaccinated. Comorbidities not considered. Finding: vaccinated individuals who were not previously infected by the Covid virus have a 13x greater risk of getting Covid than those previously infected. Moreover, there is a 6x greater risk for people fully vaccinated to get Covid than for people not vaccinated and previously infected.
On 2021-09-10 08:00:24, user Jim Ayers wrote:
I tried to search this page for the word placebo and couldn't find it. Already this is blowing up on the internet.
On 2021-12-06 14:07:04, user Steven Sampson wrote:
Why hasn't this been peer reviewed?
On 2021-08-27 15:48:14, user Kim wrote:
What about those who have never been vaccinated nor had any vaccines?
On 2021-08-29 21:38:00, user MANISH JOSHI wrote:
We must stop ignoring natural immunity - it's now long overdue
Manish Joshi, MD
This article by Gazit et al. is another addition to a growing body of literature supporting the conclusion that natural immunity confers robust, durable, and high-level protection against COVID-19 (1-4). Yet some scientific journals, media outlets, and public policy messaging continue to cast doubt. That doubt has real-world consequences, particularly for resource-limited countries. We would like to review the available data.
Infection generates immunity. The "SIREN" study in the Lancet addressed the relationship between seropositivity in people with previous COVID-19 infection and the subsequent risk of severe acute respiratory syndrome due to SARS-CoV-2 infection over the subsequent 7-12 months (1). Prior infection decreased the risk of symptomatic re-infection by 93%. A large cohort study published in JAMA Internal Medicine looked at 3.2 million US patients and showed that the risk of infection was significantly lower (0.3%) in seropositive patients vs. those who were seronegative (3%) (2).
Perhaps even more important to the question of duration of immunity is a recent study that has demonstrated the presence of long-lived memory immune cells in those who have recovered from COVID-19 (3). This implies a prolonged (perhaps years) capacity to respond to new infection with new antibodies.
In contrast to this collective data demonstrating both adequate and long-lasting protection in those who have recovered from COVID-19, the duration of vaccine-induced immunity is not fully known, but breakthrough infections in Israel, Iceland and the US suggest a few months. Before the CDC decided to stop collecting data on all breakthrough infections at the end of April 2021, it reported >10,000 breakthrough infections (2 weeks after completion of vaccination) in the US, with a mortality of ~2% (5). Booster COVID vaccine recommendations have already been announced in Israel and the US, proving that effectiveness wanes within 6 months.
How should we use the collective data to prioritize vaccination? These new data support simple and logical concepts. The goal of vaccination is to generate memory cells that can recognize SARS-CoV-2 and rapidly generate neutralizing antibodies that either prevent or mitigate both infection and transmission. Those who have survived COVID-19 must almost by definition have mounted an effective immune response; it is not surprising that the evolving literature shows that prior infection decreases vulnerability. In our view, the data suggest that people confirmed to have been infected with SARS-CoV-2 may not need vaccination. We should not be debating the implications of prior infection; we should be debating how to confirm prior infection (6).
Manish Joshi, MD
Thaddeus Bartter, MD
Anita Joshi, BDS, MPH
On 2021-08-23 09:39:14, user Valerio Marra wrote:
Now published in International Journal of Infectious Diseases. DOI: 10.1016/j.ijid.2021.08.016
On 2021-08-25 01:28:05, user David Wiseman wrote:
We really cannot take seriously these scurrilous accusations posted by people who are essentially anonymous or who use identities with no internet footprint whatsoever. It appears that JA is the same person who made a previous comment under the name John Artuli, which has now changed on that comment to JA. A search on PubMed failed to find a single paper authored by anyone with the name Artuli. On medRxiv there are a handful of comments by a JA made about an unrelated HCQ paper (https://disqus.com/by/johnartuli/). Like the two previous comments posted here, there is a lack of understanding of what this paper has shown.
If you stand by your convictions, then identify yourself, and state with specificity where you believe the errors to be. You can also contact us directly and we will be happy to respond to polite approaches and to make any needed corrections. We made that offer in the previous posts, but there were no responses. So whoever is reading this, unless we post to the contrary, you can assume that "J.A." will not contact us. So now Dr. JA we make that offer again. Contact us directly.
All of these points have been covered more than adequately in previous answers and our revisions. You state: "The altered / falsified data are obvious when looking at the public dataset as no one had a delay from exposure to starting study drug of 7 days."
Go to the dataset, for example the version linked in the Agoraic comment: PEP_Public_Data_01Oct2020.csv, dated 10/26/20. Look at column FS for the variable "exposure_days_to_drugstart" and count how many cells have the value 7. It is 28, matching our Tables 1 and 2. As we explain, DUE TO A STILL UNCORRECTED ERROR ON THE PART OF THE ORIGINAL AUTHORS, this really means the numbered day (day 1 = exposure). To get elapsed time, you need to subtract 1, which we did, correcting the problem. And that is explained clearly. After the authors informed us of this error they were supposed to have corrected it with the variable (not in earlier versions) in column GR, "Exposure_to_DrugStart". Although the values in column GR SHOULD be smaller by one than those in FS, they erroneously are not. And so there are the same row numbers with a value of 7, totalling 28. So the only way you can make this accusation FROM these data is to be completely wrong, or to have been misinformed by someone else. (It is correct in a later version, which we used.)
This STILL INCORRECT variable (10/26 version) has been provided to colleagues within the last few months. If for some reason the link in the Agoraic comment has now changed, then there are several people who downloaded it at the time to verify what we are saying here.
You are regurgitating some of the easily refutable arguments advanced by the authors of the original study made obliquely in various places. In accordance with good etiquette, we invited the original authors to review our original manuscript and to participate as authors.
We strongly suggest that you ask the original authors why they have not, over one year later, issued corrections IN THE NEJM to their original paper stating that rather than subjects receiving study drug overnight, 52% of them received drug later than that.
Although parts of our work are post-hoc, most of it is a re-analysis using data that had been omitted from the original report. Even if we are off by one day (which we are not), this does not change the fact that the original study conclusions were incorrect and that HCQ given early enough (1-3 days elapsed time) was associated with a significant reduction in C19. The two studies cited again to support the original conclusions are completely irrelevant, as they used longer intervention lags and/or lower doses which, by the PK modeling of the Boulware UMN group, were never likely to be effective.
The original paper was one of two papers that effectively shut down HCQ research. How many of the 3.5 million or so lives lost worldwide since then might have been saved had the original study accounted properly for the correct drug shipping times?
On 2021-08-27 07:48:37, user Fish wrote:
The conclusion seems questionable. Why did the individuals that the sample groups were selected from have their health records stored in the Mayo Clinic system? Wouldn't that imply a preexisting, severe health condition or disease? Also, did they consider that the vaccine performed best while it was only available to "essential personnel", the people with the highest risk of exposure to the disease? If many of the people who are infected with Covid-19 express few symptoms and often no symptoms, wouldn't it be safe to assume that these people had a very high probability of preexisting natural immunity? As the vaccines became more accessible, the efficacy also appears to decline significantly. What portion of the sample sizes had natural immunity prior to vaccination? We know that natural immunity provides a more thorough and effective defense that targets many parts of the complete virus, whereas this injection indiscriminately attacks human cells, including the immune system, and forces them to produce a man-made spike protein based on theoretical models and probability.
On 2021-08-27 08:38:27, user Seb Walsh wrote:
Please note this paper was published open access by BMJ Open on 17/08/21. Available here: https://bmjopen.bmj.com/con...
On 2021-08-27 22:26:34, user Infinite Monkeys wrote:
The number of PhD respondents has decreased from 10,969 in version one of this article to 9,975 in version two, but the number of vaccine hesitant PhD respondents has decreased from ~2,622 (23.9%) to ~1,456 (14.6%). Therefore, 994 respondents were removed, but the number of vaccine hesitant respondents, which should be a subset, decreased by 1,166. As both versions are reporting data for May 2021, there appears to be a discrepancy?
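The subset argument above reduces to a few lines of arithmetic (the ~ figures are derived from the reported totals and percentages, not published directly):

```python
# Reproducing the discrepancy described in the comment above.
# Totals and percentages are as reported in versions 1 and 2 of the article;
# the hesitant counts are derived (hence the ~ in the comment).
v1_total, v1_hesitant_pct = 10_969, 0.239
v2_total, v2_hesitant_pct = 9_975, 0.146

v1_hesitant = round(v1_total * v1_hesitant_pct)  # ~2,622
v2_hesitant = round(v2_total * v2_hesitant_pct)  # ~1,456

removed = v1_total - v2_total          # 994 respondents removed between versions
hesitant_drop = v1_hesitant - v2_hesitant  # ~1,166 fewer hesitant respondents

# If the removed respondents were a subset of all respondents, the hesitant
# count could drop by at most `removed` -- but it drops by more.
print(removed, hesitant_drop, hesitant_drop > removed)
```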
On 2021-08-28 19:27:31, user __ wrote:
Could someone factually explain to a layperson what these results mean?
On 2022-01-25 09:20:10, user Petter Jakobsen wrote:
Now published in PLOS ONE doi: 10.1371/journal.pone.0262232
On 2021-08-30 16:23:11, user Miriam Sturkenboom wrote:
In their discussion the authors erroneously claim to be the first to calculate incidence rates of TTS. EMA funded the ACCESS study to calculate background rates of AESI, including TTS, in Europe. ACCESS reported the rates of TTS publicly on the EncePP website for a large population, including hospital-based data that are of crucial relevance for these rates (http://www.encepp.eu/phact_links.shtml). The authors do not reference nor compare their rates with the ACCESS data. This is of scientific and public health relevance. The rates for several conditions differ substantially between the projects, both of which run in Europe. It would be appropriate to compare the rates reported here to the ACCESS rates, to put data that are relevant for monitoring COVID-19 vaccines in proper context and to understand the source of the differences.
On 2021-08-30 20:52:36, user Miriam Sturkenboom wrote:
This paper is of public health relevance. Unfortunately the analysis presented does not reflect the analyses presented in the publicly published protocol (http://www.encepp.eu/encepp/openAttachment/fullProtocolLatest/41574), which indicated that 7-, 14-, 21- and 28-day windows would be followed and that 28 days would be the key window. The protocol also indicated that the study would be conducted in 6 sites. Currently the authors have presented two separate papers, one on CPRD (UK) (this paper) and one on IDIAP (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3886421), without explicitly stating that they were designed and supposed to be analysed together. From a public health perspective it would be key that the data are presented together, since separately both are underpowered and conclude there is no safety issue (e.g. conclusion: "No safety signals were seen for ATE or TTS. Further research is needed to investigate the causality in the observed associations"). Pooling of the data collected through the same protocol, common data model and analytics would be logical and very beneficial in this instance.
On 2021-08-31 01:53:43, user William Brooks wrote:
The results of the proposed model rely on three questionable assumptions: 1) masks are effective at preventing infection [1]; 2) infection risk decreases as mask usage increases [2]; and 3) masks are more effective than ventilation [3].
However, the authors ignore real-world data challenging these assumptions even though they reference the UK's Events Research Programme (ERP), which found little difference between Phase 1 events with and without mask requirements [4]. Moreover, recent ERP data for large-scale sporting events without mask requirements "demonstrate that mass participation events can be conducted safely, with case numbers comparable to, or lower than community prevalence" [5].
In short, the authors should base their models on real-world data rather than unproven assumptions.
[1] https://www.acpjournals.org...
[2] https://escipub.com/irjph-2...
[3] https://aip.scitation.org/d...
[4] https://www.gov.uk/governme...
[5] https://www.gov.uk/governme...
On 2021-08-31 17:35:12, user Jake David wrote:
Need some help interpreting this: "Assuming a conservative total of 10 days of school absence per 5 new infections, there will be an estimated 210 (510, 400) absent days for the school without any interventions or 140 (120, 76) days with masking and testing." Is this *per* school? Do they have any usable data, such as *per* student estimates? Thanks for any help!
On 2021-09-01 16:24:36, user Brian Schneider wrote:
Well done on providing clear tradeoffs of mitigation protocols.
Any chance you could share the code? Not that I want to run the model, but I would like to take your results and adapt them to my specific circumstances, i.e. change the initial population infections of 0.005 (or 500/100k) to suit my area of 0.0002 (20/100k) and see what the probability of infection is, given the remaining parameters are unchanged.
The SIR model Euler version looks mostly linear, but given your accounting of additional factors such as testing rate, asymptomatic fraction, etc., it wouldn't be in the end.
It would be very cool to turn this into a tool, so that parents could say, if the probability of infection is greater than X, I would like to pull my child from school. Where that probability of X is met when I see my school zone transmission rate of Y or more per 100k.
I code in Python, but volunteer to help if interested.
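For anyone wanting to attempt the parameter swap the comment describes before the authors' code is available, a minimal Euler-stepped SIR loop is easy to sketch. This is NOT the paper's model (which adds testing rates, asymptomatic fractions, etc.); the `beta`, `gamma`, and horizon values below are hypothetical placeholders, and only the initial prevalence is varied per the comment.

```python
# Minimal SIR sketch with Euler steps -- not the authors' model.
# ASSUMED placeholder parameters: beta (transmission rate), gamma
# (recovery rate), and a 120-day horizon. Only initial_infected is
# swapped, per the comment (500/100k vs 20/100k).
def sir_euler(initial_infected, beta=0.25, gamma=0.1, days=120, dt=1.0):
    """Return the cumulative fraction ever infected after `days` days."""
    s, i, r = 1.0 - initial_infected, initial_infected, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt  # new infections this step
        new_rec = gamma * i * dt     # recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return i + r  # everyone infected at some point = 1 - s

print(sir_euler(0.005))   # paper's assumed initial prevalence (500/100k)
print(sir_euler(0.0002))  # commenter's local prevalence (20/100k)
```

With identical rate parameters, the lower starting prevalence mainly delays the epidemic curve rather than shrinking its final size, which is exactly why the additional factors in the paper's model (testing, masking, asymptomatic spread) matter for the per-quarter risk a parent would care about.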
On 2021-09-05 08:03:43, user Michael Tomlinson wrote:
Why does Figure 2B in the paper show hospitalizations for the unvaccinated group peaking on 1 May, when the CDC's own Covid Data Tracker shows all hospitalizations peaking on 9 Jan concurrently with infection rates as you would expect: https://covid.cdc.gov/covid... .
Figure 2B in the paper shows these rates curving over the same period almost inversely to population infection rates, which were dropping from 748 per million to 149 per million over these months.
Meanwhile, hospitalization rates for the vaccinated group are completely flat over the period, and show no response to the sharply varying infection rates.
How can this be?
On 2021-09-06 02:45:16, user William Brooks wrote:
The authors claim that if only high-mortality SA countries like Peru had maintained 90+ GSI throughout 2020, their cumulative deaths would have been lower. However, Fig.1 doesn't show that countries with higher GSI for longer had fewer cumulative deaths; if anything, it shows the opposite since low-mortality Uruguay clearly had the lowest GSI while high-mortality Argentina maintained one of the highest GSI.
Also, the fastest increase in deaths in the seven high-mortality countries was during the Southern Hemisphere winter when they all had GSI around or above 80 similar to the low-mortality countries except Uruguay. This lack of correlation between government policy and mortality outcomes means it's impossible to say "If only country X had locked down earlier/harder, they would have had fewer deaths."
Also, the authors use cumulative deaths, which can only go up regardless of what the GSI does. However, if Fig. 1 showed reported deaths per month rather than cumulative deaths, readers would see that deaths tended to stay flat or decrease as winter turned to spring, contradicting the authors' claim that lowering the GSI leads to higher mortality. The authors ignore this obvious seasonality, yet it explains why hard autumn lockdowns didn't decrease winter Covid deaths better than changes in GSI do.
On 2021-12-06 20:48:32, user Nicholas Morrish wrote:
Is this research team aware of the Ratpenats bat monitoring group's sampling of both bats and surrounding sewage/drainage canals? We can see from their own website https://rius.ratpenats.org/... that they have hundreds of samples that are geocached and dated. Due to strict EU laws protecting these animals, sampling bats for disease can be very difficult and requires direct permission from local governments. The Ratpenats also have over 30 bat boxes located across the river from the same WWTP2 facility this research paper used to find the early outbreak; is the research team aware of this, and have they asked permission to sample such boxes? https://pbs.twimg.com/media...
On 2021-12-09 21:54:04, user 1969er Kornwestheim wrote:
Any info about peer review?
On 2021-12-11 09:16:47, user degodified wrote:
This paper is, I'm afraid, full of holes. There is no control group (why not?), and comparing local to national rates introduces bias. At the least it needs reproducing in a better trial. It is already being used by quack doctors to scare people away from vaccines.
On 2021-12-15 03:22:39, user Sean Bearly wrote:
A study of children born during the pandemic means a study of children less than two years old. It is preposterous to think we can come to a consensus about the results. We might as well do studies of children born during the Obama presidency vs the Trump presidency. Also, I believe our desire to get children into public indoctrination as early as possible has resulted in a desire to show that missing early in-person indoctrination is the worst thing that can happen, not just to children but to the parents who then have to deal with the little monsters. Many home schoolers chose that route based on multiple studies of early childhood development which showed that a child who enters school at the 6th-grade level, even with no previous home schooling, quickly comes up to speed and often outpaces the other children within one school year. There are many reasons for this, but anyone interested can find that information.
The Obama administration did a study of the effectiveness of early education and promised to stop funding for the Head Start program if data showed that it wasn't money well spent. The study showed that Head Start gave most children an initial boost in education, but that it was lost within a few years and Head Start graduates by the third grade were no smarter, better behaved, etc. than the children who did not go into the program. Head Start was not cancelled though because of several reasons, one being parents had learned to depend on the program for child care, and also because the teacher's unions fought to keep the program.
This pandemic has certainly been difficult for many. But this is not the worst thing that can happen to a child. Children have weathered much worse without losing 20 points on their IQ score. I am reluctant to believe that kind of drop can even be measured in children under 2.
On 2021-12-16 12:03:32, user J W wrote:
The results are interesting; however, the discussion is biased by scientifically irrelevant political concerns. The impossibility of comparing the effectiveness of specific vaccines among themselves and with respect to reinfections can be solved by age stratification, for which data is available. The other concerns discussed are of minor impact, are to be treated within statistical uncertainty, and, last but not least, they apply to the study of waning immunity itself. The people vaccinated early are in no way statistically the same as those vaccinated with a delay.
On 2021-12-21 15:40:13, user aleksj wrote:
For Slovenia, the leading dashboard (and #1 search term in the country) has somehow been omitted https://covid-19.sledilnik....
On 2021-12-21 20:45:47, user Martin Manuel Ledesma wrote:
It is an essential paper, but sadly, the unvaccinated group is composed of people with higher rates of comorbidities and complications, so the comparison with a vaccinated group that has lower rates of comorbidities is highly unfair. Therefore, it is impossible to derive the conclusion that they intended; the conclusion would instead be that immunocompromise leads to evolution of SARS-CoV-2.
What they found is similar to described in this paper:
Recurrent deletions in the SARS-CoV-2 spike glycoprotein drive antibody escape. DOI: 10.1126/science.abf6950.
The problem is in the immunocompromised rather than in the unvaccinated.
On 2021-12-22 03:16:08, user Kimihito Ito wrote:
Page 2: “Such methods often model the frequency of lineages using multinomial logistic regression [6,7]”
Ito et al. [7] does not use multinomial logistic regression. Instead, the paper [7] formulates the selective advantage using the ratio of the effective (instantaneous) reproduction numbers, which is called relative instantaneous reproduction number (R_RI).
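In symbols, the quantity the comment describes might be rendered as follows (this is a paraphrase of the comment, not the paper's own notation):

```latex
R_{\mathrm{RI}}(t) \;=\; \frac{R_t^{\text{(variant)}}}{R_t^{\text{(reference lineage)}}}
```

i.e. the selective advantage is expressed as the ratio of the effective (instantaneous) reproduction numbers of the two lineages, rather than as a coefficient in a multinomial logistic regression.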
On 2021-12-26 13:57:47, user bat9991 wrote:
This is a faulty study, and will be either corrected or retracted once peer reviewed! You cannot exclude "Previously SARS-CoV-2 PCR-positive individuals" and then estimate VE against the same cohort from which you just removed a significant percentage!
You are skewing your test data significantly towards the unvaccinated cohort (which has a much higher rate of PCR positives than the vaccinated cohort).
If negative VE was not a clue for the faulty calculations, I don't know what would be!
On 2022-01-03 02:35:09, user Mike wrote:
This study shows that after three months the vaccine effectiveness of Pfizer & Moderna against Omicron is actually negative. Pfizer customers are 76.5% more likely and Moderna customers are 39.3% more likely to be infected than unvaxxed people.
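The percentages quoted here follow from the usual relation between vaccine effectiveness and relative risk, RR = 1 − VE. A quick sketch reproducing the commenter's arithmetic (the VE values are the negatives implied by the quoted percentages, used purely to illustrate the conversion, not re-derived from the study's data):

```python
# Convert a vaccine-effectiveness estimate into a relative risk: RR = 1 - VE.
# A negative VE therefore implies RR > 1, i.e. a higher infection rate.

def relative_risk(ve):
    """Relative risk of infection (vaccinated vs unvaccinated) for a given VE."""
    return 1.0 - ve

for label, ve in [("Pfizer", -0.765), ("Moderna", -0.393)]:
    rr = relative_risk(ve)
    print(f"{label}: VE={ve:+.1%} -> {rr - 1:+.1%} more likely to be infected")
```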
On 2022-01-13 01:08:32, user Dr. Marvin Lara wrote:
If it gives negative efficacy after 90 days, does that mean it is actually destroying your immune system?
On 2021-12-28 10:05:23, user Elyta Siregar wrote:
I want to ask: how do you get the survival probability?
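A common way to estimate survival probability from time-to-event data is the Kaplan-Meier product-limit estimator, S(t) = Π (1 − d_i/n_i) over event times t_i ≤ t. Whether this is the method the paper used is not stated here; the following is only a minimal illustrative sketch:

```python
# Minimal Kaplan-Meier product-limit estimator.
# times: event/censoring time per subject; events: 1 = event observed, 0 = censored.

def kaplan_meier(times, events):
    """Return a list of (t, S(t)) pairs at each distinct event time."""
    order = sorted(range(len(times)), key=lambda k: times[k])
    at_risk = len(times)
    surv, out, idx = 1.0, [], 0
    while idx < len(order):
        t = times[order[idx]]
        deaths = removed = 0
        # group all subjects tied at time t
        while idx < len(order) and times[order[idx]] == t:
            deaths += events[order[idx]]
            removed += 1
            idx += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk  # multiply by conditional survival
            out.append((t, surv))
        at_risk -= removed
    return out

# Toy data: events at t=1, 2, 4; censoring at t=3 and 5.
print(kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0]))
```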
On 2021-12-29 00:53:32, user madmathemagician wrote:
The world (cf. the Twitter references above) cites this article as evidence that "analysis concludes that, as a general tendency, the more a country vaccinates the less reliable the data it shares".
A conclusion not supported by its flawed analysis, used for political propaganda.
On 2021-12-30 03:08:25, user Weiwen Liang wrote:
Not clear about the vaccination background of these 40 individuals. If I remember correctly, the 3c3A clade circulated in the US during 2018-early 2020, and Kansas17 in this clade was the H3N2 vaccine component for 2019-2020 in the Northern Hemisphere. It had a K at position 160 whether the vaccine was cell- or egg-derived. Perhaps the antibodies to the 2a1 egg-adapted strain (also K at 160) in the pre-vaccination figure were mainly induced by previous infection/vaccination.
On 2021-12-30 21:07:17, user madmathemagician wrote:
Apart from using COVID-19 related data, it has no relevance to the field of medical, clinical, and related health sciences. It does not belong on medrxiv.org.
On 2022-01-02 22:26:52, user madmathemagician wrote:
I believe the dataset used in this article is
https://www.ecdc.europa.eu/...
but it is not linked in the article.
On 2022-01-06 10:42:35, user maa jdl wrote:
This paper is an elementary statistical exercise. At least one paper on this topic with a broader scope and a deeper analysis has already been published ( https://lnkd.in/e9stiJMD ). This paper does not try to understand "what is behind" these observations. It also does not discuss whether Benford's law should even apply to these data for some reason, or other conditions of applicability such as the sample size or the physical process assumed to generate the data.
Most importantly, without a scientific analysis, this paper could easily be used for "conspiracy theories", and has already been used so! "Have the covid data been manipulated?" That's the question behind this exercise, for many. I would say that OBVIOUSLY the data have been manipulated! But NOT in the sense assumed by conspiracists! Actually, all governments have taken measures to limit the development of the pandemic's waves, and this is typically a "manipulation of the data", since it is precisely aimed at modifying the figures! This argument also shows that the pursuit of Benford's law for covid data is basically not a scientific endeavour! It doesn't answer a good question! No more than numerology!
See also: https://lnkd.in/e8H7JPXh
On 2022-01-23 21:56:36, user maa jdl wrote:
There is another published paper on this topic: https://journals.sagepub.co... The discussion is a bit deeper, but the conclusion is similarly naïve and wrong. This other paper also assumes Benford's law is a kind of data validation, which it is not. And this paper, of course, concludes "Germany did well", while actually this is pure chance! It also concludes that Iran did the worst, which is an obvious consequence of the lack of testing capacity: with this low capacity, the number of cases varies over less than two decades, which makes it very improbable to "comply" with Benford's law. The goodness of fit to Benford's law for the cases just reflects the history of the epidemic in a country as well as the testing that is performed. As I said previously, a simple picture of the data can explain why Benford's law is or is not well satisfied. There is even no need for a statistical test! It is obvious; we just need to make a good picture of the data and open our eyes! Unfortunately, there is no way to insert a picture here.
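The first-digit check being debated above is easy to state: Benford's law predicts P(d) = log10(1 + 1/d) for leading digit d. A minimal sketch with synthetic data (not the papers' case counts) also illustrates the commenter's point that data spanning several orders of magnitude fit far better than data confined to a narrow range:

```python
import math
from collections import Counter

# Compare an empirical first-digit distribution against Benford's law,
# P(d) = log10(1 + 1/d). The data below are synthetic, for illustration only.

def first_digit_freqs(values):
    """Relative frequency of leading digits 1-9 among the nonzero values."""
    digits = [int(str(abs(v))[0]) for v in values if v]
    counts = Counter(digits)
    total = sum(counts.values())
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Exponential growth spanning many orders of magnitude tends to follow Benford;
# a series varying over less than two decades (the Iran example above) tends not to.
wide = [int(1.5 ** k) for k in range(1, 60)]
freqs = first_digit_freqs(wide)
for d in range(1, 10):
    print(f"{d}: observed {freqs[d]:.3f} vs Benford {benford[d]:.3f}")
```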
On 2022-01-02 14:47:58, user Nathi Mdladla wrote:
I hope the peer review process will be able to pick up the challenges and significant confounders of this study’s conclusions which are too overreaching in the context of Omicron.
The South African 3rd/Delta Wave ended in September 2021. Between that period and Mid-November South Africa was in between waves. Boosting and a major drive to vaccinate in SA, started at the same time in Mid-November. The 4th wave has been mild for everyone, whether vaccinated or unvaccinated.
Healthcare workers are a very difficult group to study in subsequent waves, as the assumption of a vaccine-only benefit negates important confounders:
1. The at-risk individuals either stopped working in the first wave or were maximally affected in that wave already.
2. You can't kill the same person twice: those at risk of mortality had either died in the last three waves or they were exposed, and their survival could not be solely attributed to vaccines.
3. A number of healthcare workers had been exposed in the prior two waves before Delta and already had natural immunity, which is known to provide significant protection against re-infection and severe disease up to Omicron. Without accounting for healthcare workers with prior infection who received the vaccine, the vaccine effect can be exaggerated.
Now coming to the most important confounder, making this study invalid and probably not worth publishing: it stops on the 17th of December, when a lot more has happened in South Africa beyond that date:
- Omicron hospitalisations have been significantly lower compared to Delta or the 3rd wave in a country with <30% "full vaccination". The benefit of the J&J vaccine should be assessed against this backdrop and not only based on the narrow healthcare group.
- There's an "observed" but as yet undocumented significant breakthrough infection rate in the vaccinated healthcare workers who were recently boosted, leaving a question over the effectiveness of boosters against infection, a more important parameter for healthcare workers as it impacts their ability to work…
This seems to be a rushed publication, without addressing the broader issues of the J&J vaccine:
- When is the next booster dose, considering that its efficacy wanes within 2 months and it should have been a double-dose vaccine from the beginning (as realised in the US in April/May 2020)?
- What is the rate of breakthrough infections for this vaccine amongst healthcare workers?
- What is the benefit on severe disease and mortality outside the healthcare worker population, which has many confounders?
- And lastly, the benefits of the vaccine on any morbidity or mortality parameters cannot be de-coupled from the adverse events and mortalities that occurred before the "determined vaccinated period" of 4 weeks, whether they are proven to be associated or not. This is the reason the US FDA is not yet considering J&J as its primary vaccine: efficacy and adverse event challenges/concerns.
On 2022-01-03 01:52:37, user Mike Austin wrote:
Looking for Table S4 referenced in this document? The link to Supplementary material is hidden on the first four tabs of this page. You can find it here: https://www.medrxiv.org/content/10.1101/2021.07.28.21261159v1.supplementary-material
On 2022-01-06 13:41:46, user Kenneth Morton wrote:
Such a shame that with such a complete dataset, the unvaccinated results have purposely been contaminated with the 'single jabbed' and have also not been split between those previously uninfected and those who have been infected and recovered previously.
On 2022-01-07 14:35:45, user SurroundedByKnobs wrote:
Comparing households infected with the Omicron to Delta VOC, we found an 1.17 (95%-CI: 0.99-1.38) times higher SAR for unvaccinated, 2.61 times (95%-CI: 2.34-2.90) higher for fully vaccinated and 3.66 (95%-CI: 2.65-5.05) times higher for booster-vaccinated individuals, demonstrating strong evidence of immune evasiveness of the Omicron VOC.
Our findings confirm that the rapid spread of the Omicron VOC primarily can be ascribed to the immune evasiveness rather than an inherent increase in the basic transmissibility.
1.17 (95%-CI: 0.99-1.38) times higher SAR for unvaccinated
2.61 times (95%-CI: 2.34-2.90) higher for fully vaccinated
3.66 (95%-CI: 2.65-5.05) times higher for booster-vaccinated individuals
1.17 times higher is less than 2.61 and 3.66 times higher...
Am I reading this wrong? Or is this presented in a confusing way on purpose?
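One way to reconcile the quoted figures: each ratio compares Omicron to Delta *within* a vaccination group, so a larger ratio for the vaccinated can coexist with the vaccinated still having lower absolute Omicron attack rates, because their Delta baseline was much lower (that is exactly what "immune evasion" would look like). A toy example with entirely hypothetical SAR values, not the paper's estimates:

```python
# Hypothetical Delta secondary attack rates (SARs) per vaccination group,
# chosen only to illustrate how the quoted within-group ratios can coexist.
sar_delta = {"unvaccinated": 0.30, "fully": 0.10, "boosted": 0.05}
omicron_ratio = {"unvaccinated": 1.17, "fully": 2.61, "boosted": 3.66}

# Applying each group's Omicron/Delta ratio to its own Delta baseline:
for group, delta_sar in sar_delta.items():
    omicron_sar = delta_sar * omicron_ratio[group]
    print(f"{group}: Delta SAR {delta_sar:.2f} -> Omicron SAR {omicron_sar:.2f}")
```

With these illustrative numbers, the Omicron SARs come out lower for the vaccinated than for the unvaccinated even though the vaccinated groups show the bigger *relative* jump, which is the point the abstract is making.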
On 2022-01-26 15:10:58, user Siguna Mueller, PhD, PhD wrote:
Thank you for a very detailed study looking into different questions (each of them relevant in their own right). I have one question re "time since vaccination." Fig 4 in the appendix provides the chart for up to 240 days. This seems to me to include only those you classify as "fully vaccinated." Such a long time frame would not be possible for the booster-vaccinated, as boosters were only made available recently. If so, your findings, based on the study design, would (by necessity) only give preliminary insights into rather short-term effects of boosters in this regard. Or am I missing something? Thank you.
On 2022-01-07 00:38:04, user disqus_8AVEuorTBu wrote:
Given the authors intend to move away from characterizing individual mutations toward representing the "language" of genomes, it may be worth comparing their discrete measure of genetic distinctiveness with natural language models often applied to biological sequences (e.g., doi: 10.1101/2021.05.25.445601, 10.1126/science.abd7331, 10.1016/j.csbj.2021.03.022). The continuous distances between the other models' embeddings may provide additional information not captured by this distinctiveness measure.
On 2022-01-07 08:02:48, user Crimelord Canada wrote:
That's not the only question that needs to be answered. "Does excluding unvaccinated individuals reduce their rate of infection?" is equally important to know since the unvaccinated are taking up the largest share of health care resources by far. In my jurisdiction the 11.7% unvaccinated are currently 63.3% of occupied ICU beds.
On 2022-01-10 10:36:43, user Zeph wrote:
If I'm understanding this, it's based on a one day event model. So for example, if one was going to have a wedding, this might give some relevant data about how many unvaccinated people would need to be excluded to avoid one new infection at that event.
It is not calculating the risk over, say, six months - which might contain just that one wedding, or might include going to night clubs every week, or to work every day. Those longer term scenarios would require different calculations.
Is that a fair summary of its application?
On 2022-01-16 09:33:16, user Joel Green wrote:
Can the authors please clarify my understanding of their data, because it doesn't appear to make sense to me.
The authors' calculations appear to state that, in a home inhabited by 5 people where 1 person is infected, 18 people need to be excluded from the home in order to prevent SARS-CoV-2 transmission within that home?
Easy example to follow using the graphs in the paper:
- Page 15, Household Graph, third from left
- 20% baseline infection risk, NNE = approximately 18
- From Page 6: Baseline infection risk "is the current point-prevalence of infectious cases"
- From Page 2: "excluding unvaccinated people to reduce transmissions is described, called the number needed to exclude (NNE)"
Applying those numbers:
- A household of 5 unvaccinated people, 1 of whom is infected
- This represents a baseline infection risk of 20%
- According to the graph on page 15, and according to the TITLE of this paper, "The number of unvaccinated people needed to exclude to prevent SARS-CoV-2 transmissions" = 18
Q1: Can the authors explain why their research shows that 18 people need to be excluded from a home that only contains 5 people in order to prevent Sars-Cov-2 transmission where one person in the home is already infected?
Q2: Will the authors address the valid questions raised by others in these comments and explain exactly what an "event" comprises?
Q3: Why do the authors believe that the NNE calculated from one single event, one time only, can be used to draw a conclusion about the benefits of isolation over the course of several months? It is clear that the unvaccinated would be excluded from several events over the course of a covid wave, but I'm not sure where they have come up with a cumulative figure.
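Part of the puzzle in Q1 may dissolve if the NNE is read, like the number needed to treat, as the reciprocal of a per-person absolute risk reduction, i.e. an expected value over many comparable events rather than a head count inside one household. A sketch under that assumption (the formula and the input probabilities are my guesses for illustration, not taken from the paper):

```python
# Number-needed-to-exclude as the reciprocal of the absolute risk reduction,
# by analogy with number-needed-to-treat. Formula and inputs are illustrative
# assumptions, NOT the paper's actual model.

def nne(risk_with, risk_without):
    """People to exclude per one transmission averted (an expectation over many events)."""
    arr = risk_with - risk_without  # absolute risk reduction per excluded person
    return 1.0 / arr

# If excluding one unvaccinated attendee lowers another attendee's infection
# probability from 8% to 2.5%, the NNE is about 18 -- which can exceed the
# size of any single 5-person household without contradiction.
print(round(nne(0.08, 0.025)))
```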
On 2022-01-08 00:31:42, user darhova wrote:
Had they used the right denominator (infected instead of testing positive), they would have found the COVID risk to be closer to 1/3 of that listed. This is because there are about 2x as many non-tested infections. Non-tested infections are obviously milder or asymptomatic, and thus cause little or no myocarditis. Note, if you assume a 50% natural immunity rate and a 25% probability of catching a myocarditis-causing variant (non-Omicron), the COVID risk is almost statistically equal to the natural one.
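The denominator adjustment in this comment is simple to check (the 2x undetected-infections figure is the commenter's assumption, not a number from the paper):

```python
# Risk per infection vs risk per detected (tested-positive) case.
# Assumption from the comment: ~2 undetected infections per detected case,
# so total infections = 3x detected cases and the per-infection risk is 1/3.

def risk_per_infection(risk_per_positive, undetected_per_detected=2.0):
    """Scale a per-positive-test risk down to a per-infection risk."""
    return risk_per_positive / (1.0 + undetected_per_detected)

print(risk_per_infection(1.0))  # fraction of the listed per-positive risk
```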
On 2022-01-12 08:03:35, user Olivier Archambault Bouffard wrote:
Was there any CRP follow-up after those 14 days?
On 2022-01-12 15:14:05, user ABR wrote:
This "study", like most other non-peer-reviewed ones being cited by the media, makes no attempt to control for vaccination status. Simply put, if Omicron cases consist of 80% vaccinated and 20% unvaccinated, whereas Delta cases consist of 20% vaccinated and 80% unvaccinated, then you will expect that out of all omicron patients, a smaller percentage is going to get severe disease, simply because more of them have been vaccinated, which we know protects against severe disease. TLDR, there is NO information here, move on.
On 2022-01-13 11:53:29, user kdrl nakle wrote:
Most Omicron hospitalized cases are double-vaccinated and most Delta cases are unvaccinated, meaning the authors failed to conduct a multivariate analysis; of course, you need some work and knowledge to do that.
On 2022-01-14 04:06:37, user Sir Henry wrote:
Why are the hazard ratios above one for vaccines in Table 2?
On 2022-01-16 20:08:17, user Daniel Halperin wrote:
Although the raw data from this study might suggest that vaccination offers little protection against risk of hospitalization from Omicron, the adjusted risk analysis is more hopeful. Examining cases from all clinical settings, the impact of vaccines against Omicron still looks weak, yet when restricting the analysis to cases from outpatient settings (which represented the vast majority of cases), the effect of full mRNA vaccination (2 doses) appears to be about a 65 percent reduction in risk of hospitalization, after adjustment. (Compared to about an 85 percent reduction against Delta.) However, there does not appear to be any difference in risk for people having received 2 versus 3 doses (against either Omicron or Delta), which would seem notable given the current policy focus to address the Omicron surge through administering and promoting booster shots?
On 2022-01-18 19:12:04, user Charles R. Twardy wrote:
This July 14 upload appears to be an inadvertent duplicate of the 8 June upload that links to the eventual publication in *Cancer*. (HTT LTran for noticing.)
Cancer pub doi: 10.1002/cncr.33130.
On 2022-01-21 13:57:00, user Adam Capoferri wrote:
This work is now published in Viruses: https://www.mdpi.com/1999-4...
On 2022-01-21 22:21:24, user Jeff Brender wrote:
The 90% cytopathic effect (CPE) was assessed visually, if even a slight damage to the monolayer (1-3 «plaques») was observed in the well.
Was the sample blinded?
On 2022-01-28 20:33:24, user cindy wrote:
How was the control group assessed/reassessed? It may be beneficial to make it clearer that the control group is being reassessed?
On 2025-08-26 18:45:14, user Laura Hemmer wrote:
Thank you to this expert group on undertaking this needed and carefully-executed initiative to help improve diagnostic accuracy of IONM studies! I have a few minor comments as follow below for your consideration.
-In the discussion in the Introduction that lists applicable guidelines, the updated ASNM SSEP position statement published in 2024 could also be a helpful reference here for completeness and particularly for its discussion regarding anesthetic and physiologic factors that can impact SSEPs as well the section on interpretation and outcomes, which has some discussion on the interpretation of reversible evoked potential changes. (J Clin Monit Comput. 2024 Oct;38(5):1003-1042.)
-In the methods section, “STARD dementia” should likely have a reference noted.
-Please pay attention to the tense used for the portion regarding community engagement and feedback (e.g. the abstract methods note that "Phase 3 will include broader community…", as is starting to occur now, but then the results portion in the abstract somehow notes what was already emphasized by community feedback. Similarly, the Results Overview in the manuscript seems to indicate the results of community engagement and dissemination, even though it appears community engagement is just now occurring.) This may be confusing for readers.
-Phase 3 in the Abstract Methods portion notes that this will include broader community feedback, but in the manuscript, it appears community feedback is actually Part 4 of Phase 2 ("Community Engagement and Consensus Building"), and Phase 3 is actually the dissemination of the final checklist. Please clarify.
-In the Results section, part #2, please consider if additional details of your assessment of adherence to the STARD checklist across 12 peer-reviewed publications should be made more fully available, such as adding these 12 references to Supplementary Content.
-Is the Results section, part #4 accurate yet (i.e. already officially endorsed by 3 international societies) or just anticipated still? These societies will need to be stated before publication.
-For anesthesia reporting in IONM studies, consider if more details regarding anesthetic technique could be useful. For example, what if additional anesthetic adjunctive/multimodal agents are also incorporated into the anesthetic regimen beyond just TIVA, inhalational, or mixed? We know from the literature, for example, that ketamine in different doses can impact MEP amplitude differently. Also, the inhalational amount (e.g. MAC) should be noted when a "mixed" inhalational and intravenous hypnotic anesthetic regimen is being administered, as further evoked potential signal degradation would generally be expected with higher MAC levels.
-Some of the anesthetic reporting details discussed in the results section are really more physiological details, so should the heading be something like “Anesthesia and Physiologic Reporting in IONM Studies” instead of just “Anesthesia Reporting in IONM studies” perhaps?
-For patient demographics, in addition to the examples given in the document, including height, weight, etc., please consider noting that studies should also include other pertinent medical comorbidities for IONM purposes, such as the presence of diabetes mellitus and associated neuropathy which may make it harder to obtain robust baseline evoked potentials. Table 2 notes “clinical characteristics”, but I wonder if medical comorbidities that would be particularly pertinent to IONM and that may make even obtaining adequate, robust baseline signals difficult should be more clearly stated in the document and/or Table 2? It is helpful that Table 2, in the clinical characteristics of participants section (#20), does state that baseline IONM data should be reported.
-Reversibility of IONM changes is well covered by the authors in its own dedicated section within the Results section of the manuscript. Recommendations by the authors on how best to handle all evoked potential deteriorations are also clearly given in the same area of the Results section. This important discussion and recommendation by the group gets a little diluted and confusing when it is re-addressed shortly afterwards, still in the Results section, under "Alternate evaluation framework in IONM". Please consider if the repetition here is fully needed, or perhaps this area could refer back to the well-stated earlier section "Reversibility of IONM changes". Also, the section "Alternate evaluation framework in IONM" might benefit from clearer recommendations from the expert working group.
-Consider whether the text from the 3rd sentence to the end of the 1st paragraph in the Discussion section is actually needed. It is pretty redundant with earlier coverage in the document. For conciseness, the 2nd paragraph content could be moved to just after the 2nd sentence of the 1st paragraph in the Discussion section if desired.
-Anesthesia techniques definition is very basic in Table 1. For readers who do not carefully read the manuscript and refer more to the Tables only, should more detail be given here or at least could note to see the manuscript text content? Similarly, no mention of anesthesia appears in Table 2, which is the actual checklist being presented. Since standardized reporting of anesthesia-related variables is critical for IONM diagnostic accuracy studies, should anesthesia reporting information appear in the Table 2 checklist?
-Should studies be asked to more clearly state how it was determined that adequate baseline evoked potential signals were present (reporting of IONM baseline data is recommended in Table 2 #20, which is good). What about in the case of intracranial surgery and the concern for stimulation occurring below the area where ischemia could occur (potentially leading to false negatives)?
-I do not see the supplementary material currently noted in the document on Medrxiv for review, so I have not reviewed this supplementary content.
-Minor typographical/grammatical errors noted by me have been directly submitted to one of the working group members.
Sincerely,<br /> Laura Hemmer, M.D.
On 2022-02-03 14:59:58, user Estefania Galvis wrote:
Where can I find supplementary tables 11-13?
On 2022-02-04 12:27:17, user disqus_q5anFFpp6R wrote:
So wouldn't it be interesting to check neutralising IgG levels at a time point when the IgG response is fully developed? The unvaccinated were checked 12-17 days post testing. Isn't that a little early?
On 2022-02-10 08:16:31, user Leonie Heron wrote:
Please note that the previous version of this living systematic review on medRxiv can be found here: https://doi.org/10.1101/202...
On 2022-02-10 09:12:52, user Alban Ylli wrote:
This article should be updated to match the same article published in BMC Public Health. Several results are (slightly) different
On 2022-02-15 20:38:21, user Outletshopping Bym wrote:
Hello, please, how did you collect the data from social media?
On 2021-12-01 09:55:15, user Sven Franke wrote:
Interesting study. Even as a pro-vaccine person, I can't begin to name all of the assumptions made here that are probably highly faulty; it would take too long. I will leave it up to other researchers with different financial backers. Btw, if the results were right, they should let the German RKI know. The RKI revised its statement about the role of vaccinated ppl in the epidemic about 4 weeks ago, stating that they do contribute after all. Also, the trend is showing more and more infections among the vaxed. Ignoring this does no one justice, if we are to fight the pandemic together. Also, where do the authors get the notion of this "socializing of vaxxed ppl mainly with ppl of their own vax status"? In what science is this assumption grounded? It sounds very far-fetched and frankly quite bigoted.
On 2021-12-19 11:46:40, user Kjell Krüger wrote:
Tables and figures in the study point out that some 50% of the selection have status "unv." and "not born in Norway". Statistics from the study also show that some 80% of the total selection comes from the South-East region of Norway. Finally, some 35% of the unv. are marked with virus variant "unknown", which we may suppose is other than Omicron, as the study was done in the period up to October? It could be of interest to see some more deviation analyses made on these parameters. The number of beds in Norwegian hospitals is stated by SSB to be some 11,500, of which some 400 are now occupied by Covid patients. I suppose all these parameters should also be interesting input data for future planning of how to manage epidemic crises in Norway. Maybe new studies will also highlight the possibility that some regions should be set up with more capacity and competence than others, with the possibility to transport both personnel and patients between regions? I think questions and answers on these matters will be of big interest for politicians at local, regional and national levels one day, when this crisis fades out and preparation for the next one begins.
On 2022-01-08 17:26:38, user Jay Haynes wrote:
After considering that Omicron is in the wild and is proving to be much more transmissible, the recommendation here is too little, too late.<br /> Furthermore, the implications of nasopharyngeal samples with culturable virus being more common among the un-vaccinated need to be weighed against the findings of this study:<br /> https://www.medrxiv.org/con...
What were the culturable virus levels among the un-vaccinated convalescent?
On 2022-01-12 16:14:56, user Daniel Ward wrote:
The one step kit from New England Biolabs used in this work is now available as a commercial kit:
E1555 LunaScript Multiplex One-Step RT-PCR Kit
On 2022-01-14 15:44:35, user Jordan Taylor wrote:
There are at least three major issues in this paper.
The first is a technical issue and probably the most obviously fatal flaw. It was first pointed out (that I could see) by Shih-Hao Yeh, and I think deserves re-emphasising. The authors have badly miscalculated the COVID-19 hospitalisation risks for children conditional on comorbid status. They cite a 120-day hospitalisation rate (during moderate viral prevalence) of 255/million children. They note that the hospitalisation risk is 4.7-fold higher for children with comorbidities than for those without. How do we calculate the risk for each subgroup then? In this case, we are told 70% of those hospitalised have comorbidities and 30% do not, so for each 255 hospitalised, 0.3*255 = 76.5 will be healthy and 178.5 will have comorbidities. We can't stop there though, as we need to adjust for the size of the background healthy and comorbid populations, which the authors tell us are 67% and 33% respectively. To get rates per million we have 76.5/0.67 = 114.2 among the healthy and 178.5/0.33 = 540.9 among those with comorbidities. They seem to have come up with the 44.4/million and 210.5/million figures based on the assumption that the two have to sum to 255/million, which is just not how it works at all. A basic sanity check should have been: "should the risk in the high-risk group really be lower than the overall risk?" If the rate of gun deaths is 100/million in the military and 1/million in civilians, you don't just add them together to get an average population gun death rate of 101/million!
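The subgroup arithmetic above can be checked in a few lines; this sketch uses only the figures quoted in this comment, not anything from the paper (which provided no code):

```python
# Split an overall hospitalisation rate by comorbid status, then normalise
# by each subgroup's share of the background population. Figures are the
# ones quoted above (255/million, 70/30 hospitalised split, 67/33 population).

overall_rate = 255.0        # hospitalisations per million children per 120 days
frac_hosp_comorbid = 0.70   # share of hospitalised children with comorbidities
frac_pop_comorbid = 0.33    # share of all children with comorbidities

healthy_hosp = (1 - frac_hosp_comorbid) * overall_rate   # 76.5 per million overall
comorbid_hosp = frac_hosp_comorbid * overall_rate        # 178.5 per million overall

# Rescale each count to a per-million-of-subgroup rate
healthy_rate = healthy_hosp / (1 - frac_pop_comorbid)    # ~114.2
comorbid_rate = comorbid_hosp / frac_pop_comorbid        # ~540.9

print(round(healthy_rate, 1), round(comorbid_rate, 1))
print(round(comorbid_rate / healthy_rate, 1))            # recovers the ~4.7-fold ratio
```

Note the sanity check built in: the population-weighted average of the two subgroup rates, 0.67*114.2 + 0.33*540.9, returns the overall 255/million, whereas a weighted average of the paper's 44.4 and 210.5 figures does not.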
The second is more conceptual. The comparison is between a risk from vaccination conditional on vaccination with a risk from infection which is not conditional on infection. In other words, the paper does not answer the question: what is the myocarditis risk of a vaccinated child versus the hospitalisation risk of an infected child? Instead it rather tries to answer the question: what is myocarditis risk of a vaccinated child versus the COVID hospitalisation risk for an average child over a 4 month period at X prevalence rate. Given that the pandemic has already been going on for 2 years now and shows no signs that it will simply disappear, this strikes me as an inappropriate choice.
Another issue is how the claims in the conclusions compare to the data. The authors state that for "boys 12-17 without medical comorbidities, the likelihood of post vaccination dose two CAE is 162.2 and 94.0/million respectively..." (emphasis added) and that this "exceeds their expected 120-day COVID-19 hospitalization rate at both moderate (August 21, 2021 rates) and high COVID-19 hospitalization incidence." However, at no point in the text, data, methods, or the Study Profile supplement (S2) of the paper is a stratification of CAE rates by comorbid status provided. In fact, although they did not provide any code, I was broadly able to reproduce their VAERS myocarditis counts in 12-17 year olds from 01/01/2021-06/18/2021 using the criteria they provide*, and that was without adjusting for comorbidities. So it seems likely to me that they have not separately analyzed "healthy" and comorbid patients and thus have no basis for making claims about effects of vaccination on "boys without medical comorbidities".
If as seems likely they have compared CAE rates among both healthy and unhealthy children with COVID19 hospitalisation rates among only healthy children, this is a big problem. The authors should be challenged to stratify their analysis equally for both COVID19 hospitalisation and vaccination before this paper is actually published.
*I found 277 cases matching the original criteria; however, this was with a number of likely duplications and possibly other data entry errors. As mentioned, the authors' methods were pretty inadequate, so we have no way to see whether or how they cleaned the data.
On 2021-10-06 17:34:27, user zega wrote:
A problem in the study is that you include only "patients" and work from there. The asymptomatic rate for 19-year-olds can be found in https://www.nejm.org/doi/fu... and is 90-94% over time, settling at about 90% cumulatively, with cumulative positivity rates of 0.8% at week 1, 2.1% at week 2 and 2.7% at week 3, after which the rate of change slows and flattens. The vaccine is meant to be administered to everyone; in that case the difference in harm proportions is vast, so if you monitor infection closely you will catch far more cases, which changes the harm-ratio outcome.
On 2022-01-17 04:02:45, user ChadEnglish wrote:
Edit: I've talked to the author, so I may not need help as per the request below. But I'm still interested if anybody has additional suggestions.
-- ORIGINAL --<br /> Hi all. I'm hoping somebody can help me address an inconsistency that I don't see that any comments have mentioned. I'm trying to build a risk model for policy but I'm getting the reverse result from this paper when applied to a full risk model.
As I look at it, the problem appears to be that this study compares the risk of myocarditis for people who have already tested positive for COVID-19 versus the risk of people after vaccination. The summary conclusion in the abstract makes the statement, "Young males infected with the virus are up 6 times more likely to develop myocarditis as those who have received the vaccine."
This seems correct and consistent. But the discussion in the full paper makes the statement, "Whether considering all the risks and benefits of COVID-19 vaccination or just myocarditis, vaccination appears to be the safer choice for 12-19-year-old males and females."
That statement doesn't appear to follow from the analysis. The analysis is comparing conditional probabilities with different conditionals. The comparison only applies to people who already have COVID-19 infection. The latter statement isn't comparing conditional probabilities between COVID-19 versus vaccination, but total myocarditis risks between vaccinated and unvaccinated status. To get that, you would have to multiply the conditional COVID-19 myocarditis risk by the risk of acquiring COVID-19 infection for the age range.
[Edit: I understand it now as being a comparison of vaccination vs intentional COVID-19, so the abstract is indeed correct and the later statement may be somewhat correct if the context is understood.]
The model I'm using is the total risk from myocarditis should be:<br /> P(M) = P0 + P(V)·P(M|V) + P(C)·P(M|C)
Here P0 is the background risk. P(V) is the probability of being vaccinated which is 1 if vaccinated and 0 if unvaccinated. P(M|V) is the conditional probability of getting myocarditis from the vaccines, which this paper investigates. P(C) is the risk of getting COVID-19 for the age range and vaccination status. P(M|C) is the conditional probability of getting myocarditis given that you have already tested positive for COVID-19, which is also what the paper investigates.
I don't have a good estimate yet for the risk of getting COVID-19 for this age range, gender, and vaccination status, but the rough proxy estimates I have are that the risk of getting COVID-19 is about 3.15% per year on average if you are unvaccinated, and about 0.35% per year if vaccinated. These values will also vary with outbreak waves, variants, age range, which vaccines, and other factors.
[Note these estimates are from Canadian data, so are quite different in the U.S. and other jurisdictions.]
The risk of getting COVID-19 is, of course, time-variable. To be consistent with the study, these can be scaled to 90 days as P(C) = 7.7e-3 for unvaccinated and P(C) = 8.6e-4 for vaccinated.
The estimates from the paper for 12-17-year-old males are P(M|V) = ~67 per million (6.7e-5) for myocarditis from vaccination and P(M|C) = 450 per million (4.5e-4) from COVID-19.
Plugging these into the above myocarditis risk for unvaccinated and vaccinated cases, and subtracting the background risk to compare the increased risk for unvaccinated and vaccinated cases, gives:
P(M,U) - P0 = (0)(6.7e-5) + (7.7e-3)(4.5e-4) = 3.46e-6 = 3.5 per million per 90 days<br /> P(M,V) - P0 = (1)(6.7e-5) + (8.6e-4)(4.5e-4) = 6.74e-5 = 67.4 per million per 90 days
The risk ratio then becomes 67.4/3.5 ≈ 19. This suggests that, given the option of getting vaccinated or remaining unvaccinated for the next 90 days, and you don't currently have COVID-19, your risk of myocarditis from vaccination is almost 20 times higher than taking a chance on not getting COVID-19 over those 90 days. This makes intuitive sense because the risk of getting COVID-19 is low.
This ratio will change over time as your chances of catching COVID-19 accumulate. Using the above assumptions, the risks are equal after 5.3 years of remaining unvaccinated, by which time the assumptions of the model will of course be long out of date.
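To make the arithmetic reproducible, here is the whole calculation as a short script; every input is one of the rough proxy estimates stated in this comment, not an authoritative figure:

```python
# Myocarditis excess-risk comparison over 90 days, using the model
# P(M) = P0 + P(V)*P(M|V) + P(C)*P(M|C) and the comment's rough inputs.

P_M_given_V = 67e-6     # myocarditis risk per vaccination (12-17 males, from paper)
P_M_given_C = 450e-6    # myocarditis risk per COVID-19 infection (from paper)

p_c_unvax_yr = 0.0315   # assumed annual infection risk, unvaccinated (Canadian proxy)
p_c_vax_yr = 0.0035     # assumed annual infection risk, vaccinated (Canadian proxy)

days = 90
p_c_unvax = p_c_unvax_yr * days / 365        # ~7.7e-3
p_c_vax = p_c_vax_yr * days / 365            # ~8.6e-4

excess_unvax = p_c_unvax * P_M_given_C                # ~3.5 per million
excess_vax = P_M_given_V + p_c_vax * P_M_given_C      # ~67.4 per million
print(excess_vax / excess_unvax)                      # ~19

# Crossover: years of unvaccinated exposure until the infection-route excess
# risk catches up with the vaccination-route excess risk.
crossover_yrs = P_M_given_V / ((p_c_unvax_yr - p_c_vax_yr) * P_M_given_C)
print(round(crossover_yrs, 1))                        # ~5.3
```

The crossover line assumes infection risk simply accumulates linearly at the stated annual rates, which is the same simplification made in the text above.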
I can't get a case here in which vaccination appears to be the better option for myocarditis overall. If you don't have COVID-19, it appears better to try to avoid both vaccination and COVID-19. If you do have COVID-19, the paper applies, but by then the information is moot and can't be used for a decision. It appears to be more an issue of risk tolerance than actual risk calculation.
It's possible the risks of getting COVID-19 are much higher for the age range and conditions here, which would shorten the cross-over point and reduce the odds ratio, but to get a conclusion that vaccination is the better option for myocarditis in any reasonable timeframe seems to require unreasonably high probabilities of catching COVID-19. (Maybe omicron numbers will do it, but then the risks of myocarditis could be quite different.)
(Please note that I'm speaking here only for myocarditis. For general risks of hospitalization and death, the relative risks are quite different so I'm not suggesting this should be the dominant deciding factor.) [In fact, any decision should include all-cause analysis, as the author mentioned to me too.]
Does anybody see any significant flaws in my model here, or note where I've misinterpreted this paper? Thanks for any feedback.
On 2022-01-16 19:23:54, user Sam Lord wrote:
For Figure 4, I would suggest a histogram or violin plot to better show the zeros. Also, consider making the y axis linear instead of log, or at least display the units in real not log10 (and make the tick marks logarithmic). As you can see from another comment here, readers may not pick up on the log scale and fail to see a difference.
On 2022-01-20 04:04:11, user Andrew David wrote:
How can it be 100% effective against Delta for hospitalization or death with a confidence interval reported as 95% CI: 43.3-99.8 ?<br /> See results in the abstract.<br /> How can an estimate lie outside the confidence interval? <br /> I wish 100% could be true, but wishing doesn’t make it so. Or have math and statistics changed on account of Covid?<br /> That’s just plain sloppy.
On 2022-01-21 13:17:53, user Frank wrote:
will this study ever complete peer review?
On 2022-01-24 04:45:33, user Isaac Núñez wrote:
This manuscript has been published at Revista de Investigación Clínica, which can be found at the following link: https://www.clinicalandtran... .
On 2022-01-26 19:05:09, user Charles R. Twardy wrote:
Looks like this was published in Mayo Clinic Proceedings.
On 2022-02-08 07:57:52, user kdrl nakle wrote:
Pretty much all as expected. A better multivariate analysis is still needed; for example, time since vaccination has to be included.
On 2022-02-14 01:00:56, user kdrl nakle wrote:
This is not science but speculation posing as science.
On 2022-02-17 17:44:28, user BrianB wrote:
The Cosinor-based model used for seasonal variation would be defined differently for those with deficient versus sufficient concentrations. Also, after fitting the model to predict future concentrations, subjects may need to be reclassified into a different group (e.g., a sufficient subject with a sample taken in a bright period may be modeled as deficient in a dark period, and vice versa). This was not indicated as being done, but should have been. Aside from that, the Cosinor-based model is a rough model that has not been shown to predict concentrations consistently across populations. https://www.nature.com/arti...
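For readers unfamiliar with this model class, here is a minimal single-component cosinor fit on synthetic data (my own illustrative sketch with made-up numbers; the paper's actual specification, covariates, and fitting procedure may differ):

```python
# Single-component cosinor: y(t) = mesor + amplitude*cos(2*pi*(t - acrophase)/365),
# fit by ordinary least squares on the cos/sin components.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 730, 14.0)                            # sampling days over two years
true = 60 + 20 * np.cos(2 * np.pi * (t - 172) / 365)   # synthetic seasonal signal
y = true + rng.normal(0, 5, t.size)                    # noisy 25(OH)D-like readings

X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / 365),
                     np.sin(2 * np.pi * t / 365)])
mesor, b, c = np.linalg.lstsq(X, y, rcond=None)[0]
amplitude = np.hypot(b, c)                             # recover cosine parameters
acrophase_day = (np.arctan2(c, b) * 365 / (2 * np.pi)) % 365
print(round(mesor, 1), round(amplitude, 1), round(acrophase_day, 1))
```

The reclassification concern follows directly from the fitted curve: a subject sampled near the trough can sit below a deficiency threshold even though the same subject, sampled near the peak, would be classified as sufficient.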
On 2022-02-23 18:22:41, user Kevin J. Black wrote:
Revision published at J R Soc Interface, https://royalsocietypublish...
On 2022-03-12 11:17:49, user Scott V. Nguyen, PhD wrote:
I am copying in the discussion from GitHub, as it is best preserved here; on GitHub, these comments are only visible to users who are signed in.
https://github.com/cov-line...
Thanks; in addition to the contributions from other folks here, I also want to point out an overlooked contribution: @corneliusroemer identified two recombination breakpoints, which I think is wild!
According to this preprint, I guess these authors might not have discovered this potential recombinant independently?
I'll be more direct here. I don't think the authors of the preprint found it independently as the lead author is here in this thread (@PhilippeColson). What sits uneasily with me is that @Simon-LoriereLab and his colleagues checked the raw sequencing reads and kindly shared the raw fastq files publicly in GISAID over 3 weeks ago (~17 February, 2022). I did some digging and found that Santé Publique France and Institut Pasteur put out a statement on how they are monitoring it and how the EMERGEN consortium is working to characterize the recombinant: https://www.santepubliquefr... (publication dated 23/02/2022). I suspect @Simon-LoriereLab was the one who sent the alert to French efforts for increased surveillance of this recombinant.
This preprint has spread like wildfire through the news, such as this one from Reuters: https://www.reuters.com/bus...
This puts a chill on open efforts in public sequencing databases and open collaboration, especially as laboratories at Institut Pasteur put in the work to confirm the recombination. By rushing out a preprint to be "first" and using unconventional names like "Deltacron" or "Deltamicron", the authors have stirred up a hornet's nest of conspiracy theories on social media. For example, I've seen claims that this validates the "Deltacron" from Cyprus that was the result of contamination, or that it is a conspiracy to distract attention from the current situation in Ukraine.
It would not have been difficult to communicate any of this to the contributors here, especially since this is a public discussion. This episode highlights the fragility of trust in science and of public confidence. By nature I am not a confrontational person, but this issue is worth pointing out. While this discourse is unrelated to what this repo is for (after all, we are here to identify and monitor potential emerging lineages), I do think it is pertinent to discuss what is done in a public forum.
The NY Times also discusses this issue: https://www.nytimes.com/202...
On 2022-03-21 03:28:16, user Jameson wrote:
The results section says it was NOT detectable after 12 days; the figure seems to show it was not detected after 6 days; but the abstract says it WAS detected between 12-14 days. The figure for this part of the data just has some ASCII * characters indicating where the protein is absent but should have been observed. Could the authors clarify this?
On 2022-04-14 07:45:08, user Ross wrote:
How was first-time infection defined? If it is based purely on a previously reported infection from testing, this would appear to be a major confounding factor. Assuming that an unidentified asymptomatic or mild case of Delta provides moderate or better protection against subsequent Delta infection and symptoms, this increases the chance that a recorded Delta case really is a first infection, and hence may bias the Delta group towards being more severe than if reinfections were included. By contrast, if the same prior infection provides only limited protection against Omicron infection but reduces symptom severity (as the vaccines designed for earlier variants are reported to do with Omicron), a recorded Omicron case could in fact be a second COVID infection, with the first being Delta. Hence recorded cases appear biased towards being less likely to be a true first infection in the Omicron group than in the Delta group, despite the Delta2 group added in this study.
On 2022-05-10 01:01:58, user Joe Max wrote:
Dear Paglino et al.,<br /> Very nice study, but the phrase "mortality disadvantage" is totally opaque to the non-specialist (like me); I have no idea what it means. Surely you can do better!
On 2022-06-02 18:37:12, user Nils Yang wrote:
This paper was accepted at Lancet Child & Adolescent Health.
On 2022-06-28 04:59:17, user Mike Rogers wrote:
Congratulations on this very interesting study, which supports the evidence from other observational and preclinical studies that bisphosphonate therapy has a protective effect against respiratory infections. Regarding the mechanism by which bisphosphonates may confer this protection, we are surprised that you did not mention our paper published late last year in eLife, which demonstrates that the bisphosphonate zoledronate directly targets alveolar macrophages in the lung, inhibits the mevalonate pathway in these cells and boosts immune responses in vivo in mice. In our paper (Munoz et al 2021, Bisphosphonates have actions in the lung and inhibit the mevalonate pathway in alveolar macrophages. eLife 10:e72430, doi.org/10.7554/eLife.72430) we suggest multiple routes by which inhibition of the mevalonate pathway in alveolar macrophages may confer beneficial effects against lung pathogens, including viral infections and SARS-CoV-2.
On 2021-11-18 14:20:47, user davehor1 wrote:
The Table S6 supplement looks to be missing; I would be interested in seeing how you determined the total effect of vaccination on transmission.
On 2021-11-23 13:37:54, user FailedPolitics wrote:
So what are politicians waiting for? Why is the common D3 deficiency in elder care homes still normal?
On 2022-08-29 19:04:05, user Vivek Verma wrote:
Abstract says "100mg of LSD"; is it true or a typo?
On 2022-09-23 15:15:43, user Yu Li wrote:
It is important to regularly check the primer and probe sequences of a PCR or qPCR assay against GenBank, because newly generated sequences may reveal erosion of performance or failure of a published assay. The article "Wide mismatches in the sequences of primers and probes for Monkeypox virus diagnostic assays" (medRxiv) attempted an in silico analysis of published monkeypox virus (MPXV)-specific qPCR assays. However, the article contains numerous errors in its results, lacks experimental data to support its conclusions, and could impair the 2022 monkeypox outbreak response.
The genome sequences of monkeypox virus (MPXV) are highly similar (~95% identical) to those of other species of orthopoxviruses (OPXV), and the similarity between MPXV clade I and clade II is over 99%. Therefore, identifying a qPCR target site for primer and probe design that perfectly matches MPXV and contains enough sequence differences to differentiate it from other OPXV can be very challenging. The probe sequence of a qPCR assay is often given priority in target selection during assay development. Multiple studies have reported that PCR primer mismatches do not necessarily affect the performance of a PCR assay. For example, Kwok S et al (1) and Christopherson C et al (2) showed that up to 4 mismatches in the primer-template duplexes (28- and 30-base primers) did not have a significant effect on RT-PCR (with sequence similarity as low as 80%). The mismatch positions and the type of nucleotides involved in the mismatch play important roles, and the buffer and annealing temperature used in a PCR assay can also be critical in determining the assay's performance. A single base mismatch in the reverse primer of the Orthopoxvirus generic OPX3 assay led to a 100-fold decrease in the sensitivity of this assay in detecting the 2022 monkeypox outbreak predominant strain (clade IIb, lineage B.1) in one buffer (3), but switching to a different PCR buffer nearly reversed this lost sensitivity. This example highlights the critical importance of laboratory validation testing to ensure specificity and sensitivity. The published MPXV qPCR assays have largely been validated with inclusivity and exclusivity panels (4), and the MPXV_G2R generic assay has been used extensively without sensitivity issues in detecting different clades of MPXV. Yet this article claims that "Our results show that the current MPV real-time generic assay may be unsuitable to accurately detect MPV" without any supporting experimental data.
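For reference, the kind of in silico primer-template comparison at issue can be illustrated with a simple ungapped mismatch count; the sequences below are invented placeholders, not real MPXV primers or targets, and a real analysis would also handle IUPAC ambiguity codes and alignment:

```python
def count_mismatches(primer: str, template: str) -> int:
    """Count mismatched positions between a primer and the aligned,
    equal-length template region (ungapped, exact-match comparison)."""
    if len(primer) != len(template):
        raise ValueError("primer and template region must align 1:1")
    return sum(p != t for p, t in zip(primer.upper(), template.upper()))

primer = "ATGCGTACCTGAAGTCGATT"      # placeholder primer sequence
template = "ATGCGTACCTGAAGTCGGTT"    # placeholder template with one substitution
print(count_mismatches(primer, template))  # 1
```

As the studies cited above show, a count like this says nothing by itself about assay performance; mismatch position, nucleotide identity, buffer and annealing temperature all matter, which is why laboratory validation remains essential.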
In addition, the title of the article is misleading without supporting data and can lead to uncertainty surrounding MPXV diagnostics.
The authors performed sequence similarity analysis of 8 published MPXV qPCR assays, including three CDC qPCR assays specifically designed to detect all MPXV isolates (generic assay), only clade I isolates (MPXV clade I assay), and only clade II isolates (MPXV clade II assay). In Figure 1, the detailed sequence alignment of the MPXV generic assay MPXV_G2R is presented relative to the MPXV clade I sequence. The authors show two sets of primers; one set, MPV-F-mu/MPV-R-mu, perfectly matches MPXV clade IIb, lineage B.1, and contains a single mismatch in each of the forward and reverse primers compared to the originally published primer sequences. The MPXV_G2R generic assay was designed to detect both MPXV clade I and clade II (4), and its primer sequences were designed using the MPXV clade I sequence. The publication of the MPXV_G2R generic assay showed that it detects both clades of MPXV (4). The MPXV_G2R generic assay has been used for MPXV diagnostics in our laboratory since its publication and shows no difference in sensitivity between MPXV clade I and clade II. Clinical diagnostic data confirm that the limited primer mismatches have little effect on the performance of the MPXV_G2R generic assay under current protocols.
In Figure 2, panel A, the authors claim that MPV_G2R_WA-P, the probe sequence of the MPXV clade II-specific assay, contains the Mutation1 sequence, which appears in 4.2% of the 683 MPXV genome sequences the authors included in their analysis. However, BLAST analysis of the GenBank database finds no MPXV clade II genome sequences containing the Mutation1 sequence. It is likely that the authors mistakenly used sequences from MPXV clade I (MPXV Congo Basin) as the Mutation1 sequence of clade II (West African clade). MPV_G2R_WA-P was designed to specifically detect MPXV clade II; its target sequence contains a 3-base deletion compared to clade I. <br /> If the authors have sequence data supporting their claim that MPXV clade II genome sequences contain the Mutation1 sequence, they should make these available for others to analyze.
We are deeply concerned about the errors in this article and the lack of experimental data to support the authors’ conclusions. The authors should promptly address the issues raised here and consider the potential negative impact of this article on the MPXV diagnostics in 2022 monkeypox outbreak responses.
References<br /> 1. Kwok S, Kellogg DE, McKinney N, Spasic D, Goda L, Levenson C, Sninsky JJ. Effects of primer-template mismatches on the polymerase chain reaction: human immunodeficiency virus type 1 model studies. Nucleic Acids Res. 1990 Feb 25;18(4):999-1005. doi: 10.1093/nar/18.4.999. PMID: 2179874; PMCID: PMC330356.<br /> 2. Christopherson C, Sninsky J, Kwok S. The effects of internal primer-template mismatches on RT-PCR: HIV-1 model studies. Nucleic Acids Res. 1997 Feb 1;25(3):654-8. doi: https://doi.org/10.1093/nar...<br /> 3. Gigante CM, Korber B, Seabolt MH, Wilkins K, Davidson W, Rao AK, Zhao H, Hughes CM, Minhaj F, Waltenburg MA, Theiler J, Smole S, Gallagher GR, Blythe D, Myers R, Schulte J, Stringer J, Lee P, Mendoza RM, Griffin-Thomas LA, Crain J, Murray J, Atkinson A, Gonzalez AH, Nash J, Batra D, Damon I, McQuiston J, Hutson CL, McCollum AM, Li Y. Multiple lineages of Monkeypox virus detected in the United States, 2021-2022. bioRxiv 2022.06.10.495526; doi: https://doi.org/10.1101/202...<br /> 4. Li Y, Zhao H, Wilkins K, Hughes C, Damon IK. Real-time PCR assays for the specific detection of monkeypox virus West African and Congo Basin strain DNA. J Virol Methods. 2010 Oct;169(1):223-7. doi: 10.1016/j.jviromet.2010.07.012. Epub 2010 Jul 17. PMID: 20643162
On 2022-12-12 15:51:53, user Koen van de Wetering wrote:
Dear Wera,
It is only now that I find out about your comment. I appreciate your input. Our manuscript has now been peer reviewed and was recently published in Analytical and Bioanalytical Chemistry. In the future I will more often check our preprints and incorporate comments like yours into our manuscripts. In case you still have questions about the assay to detect pyrophosphate, do not hesitate to contact me directly via email.
With kind regards,<br /> Koen van de Wetering
On 2022-12-15 10:49:12, user Author wrote:
We would like to reply to a comment entitled “Japan preprint on myocarditis used inadequate methods to suggest COVID-19 vaccines cause more myocarditis deaths”: a review by Health Feedback (Editor: Ms. Flora Teoh). <br /> https://healthfeedback.org/...
We thank them for commenting on our paper. We understand that they raised three main points of criticism, which we address in turn below:
Their 2nd point is based on a fundamental misunderstanding of the methods of our study. They erroneously stated: "The authors' association of change in the risk of myocarditis death associated with COVID-19 vaccines was based on comparing pre-pandemic and post-pandemic rates of myocarditis death". <br /> We compared myocarditis mortality in the SARS-CoV-2 VACCINATED population with that of the 2017-2019 (pre-pandemic period: reference) population; we did NOT compare myocarditis mortality between the POST-PANDEMIC and pre-pandemic periods.<br /> Because it rests on this misunderstanding of the fundamental methods of our study, the following criticism does not apply:<br /> “But this assumes that the only thing that changed between the two periods is the availability of the COVID-19 vaccines. It excludes, without justification, the possibility that COVID-19 itself could produce an increase in myocarditis deaths. No reason was given by the authors for excluding COVID-19 as a potential explanation, despite the fact that COVID-19 is a more likely explanation than COVID-19 vaccines for an increase.” “This is because we know—based on previous published studies—that COVID-19 is more likely to lead to cardiac complications than the vaccines [1,2]. Therefore, the alleged causal association rests on the assumption that only COVID-19 vaccines can explain the change in myocarditis mortality, which isn’t true.”<br /> However, we would like to comment on the claim that “COVID-19 is more likely to lead to cardiac complications than the vaccines”, referring to the reports by Block et al [1] and Patone et al [2,3].<br /> It is important to consider the following points: vaccines are not given to dying persons, nor to persons with fever or other acute diseases; hence vaccinated people are relatively healthier than the non-vaccinated (healthy vaccinee effect) [4].
Conversely, vulnerable persons (frail, suppressed immunity due to stress or sleep debt etc) are more likely to be infected with SARS-CoV-2 (vulnerability confounding bias: VCB) [5].
Patone et al [2] stated in the discussion section: “Of note, the estimated IRRs were consistently <1 in the pre-exposure period before vaccination. … This was expected because events are unlikely to happen shortly before vaccination (relatively healthy people are receiving the vaccine).” This is exactly the healthy vaccinee effect [4], and it is lowest at day 0 of vaccination: for example, the IRR of arrhythmia at day 0 of BNT162b2 vaccination was 0.33 (0.29 to 0.37), compared with 0.72 (0.70 to 0.73) during days -28 to -1 before vaccination [3]. <br /> Patone et al [3] also discussed that the estimated IRRs were consistently >1 in the pre-risk period before a SARS-CoV-2–positive test. They reasoned that events are more likely to happen shortly before a positive test (as a standard procedure, patients admitted to hospital are tested for SARS-CoV-2). But they did not discuss the fact that IRRs on day 0 of the positive test are the most prominent (some 10 times higher than in the pre-risk period), because standard SARS-CoV-2 testing is mostly done on the day of admission. Hence the consistently elevated IRR (>1) during days -28 to -1 before the positive test may have another cause: it may be explained by vulnerability confounding bias [5].<br /> We estimated the effect of vulnerable persons' susceptibility to infection (vulnerability confounding bias: VCB) from the pre-risk period (days -28 to -1) of the SARS-CoV-2 test-positive group: 2.84 (1.89 to 4.28) for myocarditis and 4.82 (4.68 to 4.97) for arrhythmia.
When these data are applied as the index of VCB, the VCB-adjusted IRRs are 3.44 (2.11 to 5.59) and 1.11 (1.07 to 1.16), which are similar to or less than the healthy-vaccinee-effect-adjusted IRRs for myocarditis (3.97: 3.05 to 5.16) and arrhythmia (2.70: 2.38 to 3.05), respectively [4].<br /> It is not possible to estimate the healthy vaccinee effect and VCB directly from the report of Block et al [1]; however, the post-SARS-CoV-2-infection/post-vaccination myocarditis risk ratios may fall below 1.00 in almost half of the comparisons listed when the above adjustments are applied.
This point is also derived from the fundamental misunderstanding of the methods of our study. We did NOT compare myocarditis mortality between the POST-PANDEMIC and pre-pandemic periods, BUT compared the SARS-CoV-2 VACCINATED population for 28 DAYS after vaccination with the pre-pandemic period. <br /> Therefore, as a rule, deaths following SARS-CoV-2 infection were not included in this study; in fact, none of the cases included in this analysis had COVID-19 listed in the cause-of-death column.<br /> Moreover, in the MHLW list we referred to, most deaths included a brief medical history as well as the cause of death. We clearly stated in the Methods section that "these were myocarditis death cases reported by physicians as serious adverse reactions to the vaccine".<br /> Furthermore, as we stated in the discussion section, myocarditis deaths in the 2017-2019 (reference) population were also based on a doctor's diagnosis, with no other medical history known. Mevorach et al [6] analysed data using the same methodology, and their study has already been published as a peer-reviewed paper.
This point is also derived from the fundamental misunderstanding of the methods of our study. We compared the SARS-CoV-2 VACCINATED population for 28 DAYS after vaccination with the pre-pandemic period; hence the sample size was sufficient to demonstrate an increased myocarditis mortality rate ratio after vaccination.<br /> As we stated at the end of the discussion section and in supplemental Table S6, all of the Modified US Surgeon General criteria for causality were satisfied.
Sincerely,<br /> Watanabe and Hama.
References<br /> [1] Block JP, Boehmer TK, Forrest CB, et al. Cardiac Complications After SARS-CoV-2 Infection and mRNA COVID-19 Vaccination - PCORnet, United States, January 2021-January 2022. MMWR Morb Mortal Wkly Rep 2022; 71:517-23. DOI: http://dx.doi.org/10.15585/...<br /> [2] Patone M, Mei XW, Handunnetthi L, et al. Risk of Myocarditis After Sequential Doses of COVID-19 Vaccine and SARS-CoV-2 Infection by Age and Sex. Circulation. 2022; 146(10):743-54. doi:10.1161/CIRCULATIONAHA.122.059970<br /> [3] Patone M, Mei XW, Handunnetthi L, et al. Risks of myocarditis, pericarditis, and cardiac arrhythmias associated with COVID-19 vaccination or SARS-CoV-2 infection. Nat Med. 2022; 28(2):410-22. doi:10.1038/s41591-021-01630-0<br /> [4] Hama R and Watanabe S. The risk of vaccination may be higher by considering “healthy vaccinee effect” Response to Husby et al: https://doi.org/10.1136/bmj... (Published 16 December 2021)<br /> Available at: https://www.bmj.com/content...<br /> (Accessed 30 November 2022)<br /> [5] Hama R and Watanabe S. Vulnerability confounding bias should be taken into account in assessing risk of post SARS-CoV-2 infection: an opposite concept of healthy-vaccinee effect (Under submission)<br /> [6] Mevorach D, Anis E, Cedar N, et al. Myocarditis after BNT162b2 mRNA Vaccine against Covid-19 in Israel. N Engl J Med. 2021; 385(23):2140-49. doi:10.1056/NEJMoa2109730
On 2020-12-01 03:41:09, user Padmaksha Roy wrote:
Hello authors, just curious to know if the github repo for this paper can be made publicly available. Currently the link mentioned does not have the code. Thanks!
On 2020-12-04 19:07:47, user lbaustin wrote:
Has this been submitted for publication yet?
On 2023-09-14 09:01:23, user Chris Iddon wrote:
CO2 measurements are taken for one occupied day in November 2022 and compared to PCR positive cases during the period Aug21 to Aug22. There is no data presented here to suggest that the single day CO2 reading is representative of the room ventilation during the whole period of Aug21 to Aug22. Also there doesn't appear to be any record of how many of the occupants were PCR positive prior to Aug21 and therefore have some level of prior immunity, nor how often the occupants are tested by PCR.
On 2020-12-28 04:33:58, user Igor Oscorbin wrote:
Interestingly, a strategy that has been commonly used to alter the capabilities of DNA polymerases, the addition of additional DNA- or RNA-binding domains, has yet to be applied to Bst DNAP.
It should be noted that the strategy has been applied at least once:<br /> https://academic.oup.com/na...<br /> Derivatives of Bst-like Gss-polymerase with improved processivity and inhibitor tolerance
On 2023-10-22 23:07:06, user CDSL JHSPH wrote:
Hello! Thank you for sharing your work with us. I believe that your work in identifying barriers to transitioning from acute care of substance use disorder (SUD) to community-based treatment is a big first step toward making a change in providing impactful support to SUD patients. I want to start off by saying I think the title of the topic is well framed: it conceptualizes exactly what to expect in the paper, including the research focus on transitions of SUD patients from acute-care settings to community-based settings, and it also gives insight into the methods and signals that the paper will aim to categorize the strategies. There were a few comments and questions that I think may help the paper and my understanding of it.<br /> 1) The Abstract: I really like the breakdown structure of the abstract; it makes it easier to read. I do believe an extra line could be added to the background section of the abstract that draws a direct connection between the research results and their use in the bigger issue. I think adding something like the sentence on Line 4, page 5 would help the reader make this connection. <br /> 2) Results and Figures: I felt as though a pie chart could be used to summarize a few things in this section. It would make the section easier to read and show what portion of the whole each category represents; an example would be the Additional Intervention Components across the Care Continuum. The table is very helpful, but a graphic figure may help readers understand the results in a better way.<br /> 3) Discussion: The need for more literature review was repeated multiple times throughout the discussion, and I wondered whether there is a way of indicating this limitation's importance without the repetition. <br /> Overall, I really enjoyed reading this paper. It was well written and easy to follow. 
I hope that this paper makes the effect it intends to, and I hope to follow up with future research in which these strategies, barriers and facilitators are put to the test. I think this is a great step to making a big difference in addiction medicine.
On 2020-08-09 16:36:55, user David Leidner wrote:
Posted article says that data are fully available, but no link to the data is provided.
On 2021-09-22 20:35:49, user tooearly wrote:
What we don't know: How long does this effect last? Months? Many months?<br /> Is MMR the best choice of LAV for a non-specific immune boost? Would OPV not work even better? More details about the endpoints and how they were measured would also help.
On 2020-07-15 14:57:36, user Rhyothemis wrote:
The PDF 'heat map' figure is illegible.
On 2022-01-08 23:20:51, user Joshua wrote:
From the study: “1. The negative estimates in the final period arguably suggest different behaviour and/or exposure patterns in the vaccinated and unvaccinated cohorts causing underestimation of the VE. 2. This was likely the result of Omicron spreading rapidly initially through single (super-spreading) events causing many infections among young, vaccinated individuals.”
Let’s discuss the sentence I labeled 1.
1a) Is any data available which supports the author(s)' hypothesis that the vaccinated cohort engaged in riskier behavior than the unvaccinated? My anecdotal evidence from lived experience with those in my circle is that the unvaccinated are living a much riskier life as it pertains to covid infection. But don't take my word for it, because that is not how science works; instead, consider this KFF survey of 1,527 adults aged 18+ conducted in July 2021, which indicates the opposite reality: “Majorities of vaccinated adults say news of the variants has made them more likely to wear a mask in public (62%) or avoid large gatherings (61%), while fewer unvaccinated adults say the same (37% and 40%, respectively).”
1b) Is there any explanation why this alleged confounding variable of riskier behavior by the vaccinated did NOT appear during the studies surrounding delta?
1c) Is there any explanation why this alleged confounding variable of riskier behavior by the vaccinated only appeared during the 91-150 days time period for the omicron variant?
Let’s discuss the sentence I labeled 2.
2) I found this statement in the Methods section of this study: “VE was calculated as 1-HR with HR (hazard ratio) estimated in a Cox regression model adjusted for age, sex and geographical region, and using calendar time as the underlying time scale.” That means the authors accounted for and controlled for age, yet they claim age as a confounding variable. Talk about having your cake and eating it too!
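To make the arithmetic of that quoted definition concrete, here is a minimal sketch of how VE and its confidence interval follow from an estimated hazard ratio. The HR numbers are hypothetical, chosen only for illustration and not taken from the study:

```python
# Vaccine effectiveness from a hazard ratio: VE = 1 - HR.
# The HR numbers below are hypothetical, for illustration only.

def ve_from_hr(hr: float, hr_lo: float, hr_hi: float):
    """Convert a hazard ratio and its CI into VE and its CI.

    Because VE = 1 - HR is a decreasing transform, the CI bounds swap:
    the upper HR bound gives the lower VE bound, and vice versa.
    """
    return 1.0 - hr, 1.0 - hr_hi, 1.0 - hr_lo

ve, ve_lo, ve_hi = ve_from_hr(hr=0.25, hr_lo=0.20, hr_hi=0.31)
print(f"VE = {ve:.0%} (95% CI {ve_lo:.0%} to {ve_hi:.0%})")

# An HR above 1 yields a negative VE, which is how "negative estimates"
# like those quoted from the study's final period arise numerically.
ve_neg, _, _ = ve_from_hr(hr=1.4, hr_lo=1.1, hr_hi=1.8)
print(f"VE = {ve_neg:.0%}")
```

Note that the transform itself carries no causal content: whether a negative VE reflects waning protection or differential behavior/exposure is exactly the question under debate here.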
On 2021-02-06 06:46:05, user kdrl nakle wrote:
Expected but good to know it is confirmed.
On 2021-02-09 11:51:56, user Robert van Loo wrote:
I cannot find over which period the sera were collected. Including that would greatly increase the value of the paper.
On 2020-08-30 16:23:17, user Martijn Weterings wrote:
Figure 3 shows two remarkable effects:
Instead of fitting to current epidemiological curves, we should determine the epidemiological parameters more directly from detailed data (e.g. databases with contact-tracing information, tracing the tree of infection) rather than basing estimates on aggregated data such as total deaths and total infections per day, which are also not very accurate. Estimates based on real-life face-to-face networks help, but may not be sufficient to fill the gap in information about epidemiological parameters.
In addition, while the model explains very well the effect of heterogeneity in rates of infection among different people, it is still not a realistic model: it lacks the nuance of spatial distribution and network structure, which will also have a remarkable effect on the curves, including an early decrease of the growth rate (due to local saturation and the local increase of immunity). The downside of this simplicity is that the heterogeneity may be overestimated (in order to compensate for the lack of the other effects) and the predicted percentage needed to reach immunity may be underestimated.
Possibly the model may work well for cities. However, in many European countries we already see second waves occurring in mostly different regions and different populations (Australia also shows a clear second wave, which is even much stronger). <br /> The 1st wave was like a forest fire that was mostly local and got actively extinguished (thanks to measures like working at home and travel restrictions). We should not interpret the decline as if all the dry old trees are gone and the risk of fires is now over or relatively low. Those 1st-wave fires were local, and there may still be many patches that are able to catch fire.
On 2021-10-09 10:12:17, user Nick Turnock wrote:
How does your modelling take the incubation period into account? For instance, Delta's reported increased viral load without epitope change may indicate immune-response evasion by shortening the incubation period; i.e., increased population seropositivity may have driven a mutation which enables Delta to rapidly multiply, shed and infect new hosts in the short window before memory B cells start churning out antibodies.
On 2020-09-07 23:07:20, user Louis Rossouw wrote:
In the case of South Africa:<br /> * The reported deaths are very much undercounted.<br /> * The Economist's excess-deaths figure includes drops in accidental deaths and other things. Please have a look at https://www.samrc.ac.za/rep... which tries to adjust for these moving parts.
On 2020-04-18 15:41:56, user Zev Waldman MD wrote:
I agree with other commenters that people who suspected prior Covid infection (or exposure) are more likely to seek antibody testing than those who did not. While participants were asked about prior symptoms, it is not clear what if anything was done with this information. It would also have been nice to ask about prior exposure concerns/risks, and report/use that information.
My other concern that has gone less discussed is their calculation of the case fatality rate. While they recognize that reported case numbers as of April 1 are an underestimate, it seems that they forget this skepticism when looking at reported deaths. They seem to take it as a given that 50 people died of Covid in the county as of April 10 as reported, and used this to project to deaths by April 22; however, like case counts, there are multiple reasons to suspect this number of deaths might be higher:
Reporting of deaths is well-known to be delayed - i.e., date of reporting does not equal date of death
People who actually died of Covid may never have been tested, and thus may not be included as cases or deaths
The doubling time of deaths used to project to April 22 is also based on reported deaths; if reporting of deaths is delayed, the doubling time may appear slower than it actually was.
If their death estimate due to illness before April 1 is too low, their corresponding CFR would be an underestimate as well. (This would be exacerbated if their case estimate is too high due to self-selection into the study, as seems possible.) At the very least, some sort of uncertainty around the death estimate should be provided, which in turn would increase the uncertainty around the final CFR.
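To see how sensitive the criticized projection step is, here is a minimal sketch of extrapolating the 50 reported deaths (the only number taken from the comment above) 12 days forward under different doubling times; the doubling times themselves are hypothetical:

```python
# Sketch of the projection step being criticized: deaths reported by
# April 10 are extrapolated 12 days forward to April 22 using a
# doubling time. Only the 50 reported deaths come from the discussion
# above; the doubling times are hypothetical.

def project_deaths(reported: float, days_ahead: float, doubling_days: float) -> float:
    """Exponential projection: reported * 2^(days_ahead / doubling_days)."""
    return reported * 2.0 ** (days_ahead / doubling_days)

for doubling in (4.0, 6.0, 8.0):  # apparent doubling time, in days
    d = project_deaths(50, 12, doubling)
    print(f"doubling = {doubling:.0f} d -> {d:.0f} deaths by Apr 22")
```

If reporting delays stretch the apparent doubling time from, say, 4 days to 8 days, the projected death count (the CFR numerator) drops from 50 × 2³ = 400 to 50 × 2^1.5 ≈ 141, roughly a 3-fold change in the resulting CFR, which is why the delayed-reporting point matters so much.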
I know CFR wasn't the main focus of the article, but I worry that, because these results support their prior beliefs, some readers may take the results at face value and push them to policymakers before they have been more widely vetted by the scientific community.
On 2020-09-18 20:29:54, user David C. Norris, MD wrote:
This paper is fundamentally misconceived:
Biostatistically
This paper apparently arises out of the biostatistical perspective which presently dominates the design and analysis of dose-finding trials in oncology. Yet even by purely statistical standards, it suffers serious shortcomings. Most notably, it looks for an interaction (viz., dose-response) without first demonstrating or ensuring the existence of a main effect. Reference #153 in this paper (Hazim et al. 2020) reported a 5% median response rate in a systematic review of recent dose-finding trials. Would the authors venture to estimate what fraction of their 93 ‘analysis series’ employed a drug with a substantial therapeutic effect? Some indication might be found in what fraction of the treatments unequivocally demonstrated a therapeutic effect in subsequent phase 2 or 3 trials. Adashek et al. (2019) document a secular trend in overall response rate (ORR) observed in phase 1 trials which is “now almost 20%, or even higher (~42%) when a genomic biomarker is used for patient selection.”
Also arguably well within the purview of biostatistics would have been a decision-theoretic framing of phase 1 cancer trials. These trials may be understood as the earliest clinical steps in a learn-as-you-go (adaptive) drug-development process (Palmer 2002; Berry 2004). On such an understanding, aiming to treat early-phase participants at maximum tolerated doses (MTDs) in no way “dictates that an assumption is made … that higher doses are always more efficacious” (p. 4; italics in original). The authors’ use of “dictates” suggests they see something of logical necessity in this, and their further insertion of the logical quantifier “always” only exacerbates their overreach in formulating this central tenet of their study. Even the distinction between a logical assumption and a statistical prior gets lost in the shuffle. To remedy all this, the authors might consider attempting to state formally their understanding of the individual phase 1 trial participant’s decision-problem, complete with its essential uncertainties and some plausible utilities. (Within the community of investigators whom they address in the final paragraph of their Discussion, there is, I believe, broad agreement on the doctrine that these trials have therapeutic intent (Weber et al. 2016; Burris 2019). The authors would do well to take this patient-centered view as their starting point, as opposed to the dose-centered and unitary goal they proclaim at the end of their current Discussion.)
Furthermore, statistics is nothing if not a discipline for “mastering variation” (Senn 2016), and a paper that sets out to question the strict monotonicity of dose-efficacy ought also enquire as to the presence of inter-individual heterogeneity in dose-response. Note that such heterogeneity would tend to attenuate the maximum slope of a convex dose-response in aggregate.
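The attenuation claim in the preceding paragraph can be checked numerically. The following is a minimal sketch under assumed logistic individual dose-response curves, with every parameter invented purely for illustration:

```python
# Numerical sketch of the attenuation claim: averaging logistic
# dose-response curves over individuals with heterogeneous ED50s
# yields an aggregate curve whose maximum slope is shallower than
# any single individual's. All parameters are illustrative.
import math

def response(log_dose: float, ed50: float) -> float:
    """Individual logistic dose-response on the log-dose scale."""
    return 1.0 / (1.0 + math.exp(-(log_dose - ed50)))

def max_slope(curve, lo=-8.0, hi=8.0, n=4000) -> float:
    """Largest finite-difference slope of `curve` on [lo, hi]."""
    h = (hi - lo) / n
    xs = [lo + i * h for i in range(n + 1)]
    ys = [curve(x) for x in xs]
    return max((y1 - y0) / h for y0, y1 in zip(ys, ys[1:]))

ed50s = [-2.0, -1.0, 0.0, 1.0, 2.0]  # heterogeneous individual ED50s
aggregate = lambda x: sum(response(x, e) for e in ed50s) / len(ed50s)
individual = lambda x: response(x, 0.0)

print(f"max slope, one individual: {max_slope(individual):.3f}")  # ~0.25
print(f"max slope, aggregate:      {max_slope(aggregate):.3f}")   # shallower
```

The aggregate curve's steepest point averages each individual's (mostly off-peak) slopes, so any analysis of pooled data will understate the steepness of whatever dose-response exists at the individual level.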
Finally, the absence-of-evidence fallacy is widely appreciated among professional statisticians, yet seems to have been indulged liberally here without any safeguards such as are usually provided by power calculations.
Pharmacologically
Within statistics, there is a doctrine that statistical analysts should always engage ‘subject-matter experts’. But one sees in this paper no sign that any pharmacological concepts—let alone expertise—have been brought to bear on what would seem to be a pharmacological question. At a minimum, in any serious challenge to the ‘MTD heuristic’—as I have called it—one expects to find distinctions between on-target and off-target toxicities. In an analysis that invokes dose-response plateaus (whether these are conceived as approximate or absolute in this paper remains unclear), we ought to find discussion of receptor occupancy and saturation as underlying realistic mechanisms.
To some extent, a neglect of subject-matter knowledge may be embedded in the very form of the present analysis, which tries to deal with its question in aggregate (through statistical techniques such as standardization) rather than in its particulars.
Clinically
In the final paragraph of their Discussion, the authors proffer advice to clinical investigators. In light of the limitations—statistical, logical, subject-matter—catalogued above, this is premature and should be omitted. Any given phase 1 clinical investigator will be considering a candidate drug in its particulars, conditional on a great deal of preclinical data and perhaps even nontrivial PKPD and systems-pharmacology modeling. The authors acknowledge as much (p. 16), seeming to appreciate that they have conducted an unconditional analysis of highly conditioned decision-making. To investigators thus intimately engaged with pharmacologic particulars, the null conclusions from a marginal analysis such as this one can contribute little useful guidance. If it were proposed to submit this work for peer review in substantially its present form, only a statistical audience should be addressed—and then solely with a cautionary note that the finding of a dose-response interaction will not leap out at a statistician from a convenience sample of phase 1 studies in which a therapeutic main effect remains dubious and unexamined. The main lesson of this work is that statisticians ought to investigate questions of pharmacology in their particulars, and with recourse to subject-matter concepts and expertise.
References
Adashek, Jacob J., Patricia M. LoRusso, David S. Hong, and Razelle Kurzrock. 2019. “Phase I Trials as Valid Therapeutic Options for Patients with Cancer.” Nature Reviews Clinical Oncology, September. https://doi.org/10.1038/s41....
Berry, Donald A. 2004. “Bayesian Statistics and the Efficiency and Ethics of Clinical Trials.” Statistical Science 19 (1): 175–87. https://doi.org/10.1214/088....
Burris, Howard A. 2019. “Correcting the ASCO Position on Phase I Clinical Trials in Cancer.” Nature Reviews Clinical Oncology, December. https://doi.org/10.1038/s41....
Hazim, Antonious, Gordon Mills, Vinay Prasad, Alyson Haslam, and Emerson Y. Chen. 2020. “Relationship Between Response and Dose in Published, Contemporary Phase I Oncology Trials.” Journal of the National Comprehensive Cancer Network 18 (4): 428–33. https://doi.org/10.6004/jnc....
Palmer, C. R. 2002. “Ethics, Data-Dependent Designs, and the Strategy of Clinical Trials: Time to Start Learning-as-We-Go?” Statistical Methods in Medical Research 11 (5): 381–402. https://doi.org/10.1191/096....
Senn, Stephen. 2016. “Mastering Variation: Variance Components and Personalised Medicine.” Statistics in Medicine 35 (7): 966–77. https://doi.org/10.1002/sim....
Weber, Jeffrey S., Laura A. Levit, Peter C. Adamson, Suanna S. Bruinooge, Howard A. Burris, Michael A. Carducci, Adam P. Dicker, et al. 2016. “Reaffirming and Clarifying the American Society of Clinical Oncology’s Policy Statement on the Critical Role of Phase I Trials in Cancer Research and Treatment.” Journal of Clinical Oncology 35 (2): 139–40. https://doi.org/10.1200/JCO....
On 2021-10-25 17:06:17, user Lucy Carpenter wrote:
My first reaction is: why are you testing Nafamostat - a front-end, early-stage antiviral meant to block or inhibit TMPRSS-2 at the earliest entry and activation points for the virus - at the back-end, late-stage viral sequence of Covid-19 pneumonia? By then, the host is typically so overrun with virus that front-end antivirals will probably not make a measurable difference.
But that is not the purpose of this particular component. Your methodology and assumptions appear flawed to me, for two reasons:
What makes a child's immunity different from an adult's? We all know this is a complex synergy - aminos, enzymes, different glycoprotein structures, etc - but what stands out in immunity research is:<br /> A - children detect Covid much earlier, with broad-range viral pattern recognition, in the nasal passage (and throat; many children are still mouth breathers); <br /> B - children attack the virus much earlier - at the attachment and activation stage, well before viral replication. They recognize and start attacking the virus as soon as it enters their nose or throat; this limits the 'viral load' on their systems to a manageable level that is easily dispatched. Like stopping a hurricane when it just starts to form off the coast of Africa, vs waiting for it to hit land in Florida. <br /> --- THIS IS WHY YOUR STUDY WITH NAFAMOSTAT APPEARS FLAWED: YOU ARE NOT MEASURING THE IMPACT OF NAFAMOSTAT ON STOPPING THE ENTRY OF COVID INTO THE HOST, AND LIMITING THE MICROBIAL LOAD BEFORE IT EVEN GETS TO THE REPLICATION STAGE -- WHICH IS WHAT THE REPURPOSING WAS DESIGNED TO ACHIEVE. <br /> Nafamostat was designed to be one of several elements attempting to emulate a child's innate immunity, and the child's very early blocking of ACE2 and TMPRSS-2 expression in the nasal and throat passages.
You are instead testing the 'wrong' goal for this antiviral; the goal of Nafamostat was never to reduce inflammation or the cytokine response -- if you want to test something for that, test a mega dose of Vitamin D IM or IV, or dexamethasone -- but to INHIBIT VIRAL ENTRY INTO THE HOST, BEFORE THE REPLICATION STAGE. By the time a human has Covid pneumonia, the microbial load is so extreme that it is time to shift gears to another type of antiviral response. (Concentrated D, btw, has excellent efficacy when combined with dexamethasone against any viral or bacterial lung inflammatory response and infection.)
Bottom line: children have more, and better, general (broad-spectrum, not specialized) immunity and fighter cells in their nasal mucosa than adults, and this supports very early viral recognition -- broad-range, not specialized, whereas the vaccines wish to change this innate immunity in children -- and early viral attack mechanisms from their natural, innate immune system. We cannot yet replicate this different concentration of immunity cells in a child's nose into a nose spray for adults. Much of it is genetic: God coded our systems to give children extra protection during their earliest years. *But that would be a good goal.<br /> _____<br /> So we lack a once-daily nasal spray for adults which could coat our nasal passages with the same distribution and type of IFN-α2, IFN-γ, IP-10, IL-8, and IL-1β protein and T cells and so on, encoded with the extra-sensitive, broad-range viral pattern recognition that our children have. Until we get there, at least up-front, early-stage, broad-range antiviral components (not a complete pro-drug but a component of a complete antiviral) like nafamostat were intended or designed to 'emulate' specific 'functions' of what we know children do naturally at the front end: 1 - recognize the virus and 2 - block it BEFORE the replication stage by blocking or inhibiting Attraction, Attachment, and Activation.
So, did you test this singular component of a true, end-to-end, broad-range antiviral therapeutic or cure for Covid at the right phase? Because Nafamostat, again, is meant to inhibit the third phase of viral development (activation), not to fight high microbial loads at the back end.
The conclusion should read: Nafamostat inhibited viral 'activation' and reduced microbial load by N%. <br /> If you do not fully understand the full life-cycle approach of the innate immune system against Covid - what is often termed the 'molecular Covid-host architecture' - you will not be testing responses in the right way, at the right time, or measuring the right results.
And then you can go into all of your variants, temperature differences (significant in the mutations from corona to Covid), pH, pre-existing conditions, age, etc. *And presumably this was all in silico and not human experimentation with very ill real people.<br /> __________
It is still not fully understood HOW children block these 3 critical phases of the Covid-19 lifecycle and easily destroy the virus. (Other elements of innate immunity that my company has focused on include destruction of the viral envelope - so that one is actually killing the virus, again before the reproduction stage, not just 'fooling' it or sabotaging RNA, which opens the door to more, and more deadly, viral mutations or new strain development.) <br /> No one should have expected Nafamostat to have an 'anti-inflammatory' response anyway: what do cytokines, or the proteins and genes expressed for that response, have to do with TMPRSS-2?
But it is so CRITICALLY IMPORTANT that researchers NOT DISCARD OR DISCREDIT VALUABLE ANTIVIRAL COMPONENTS IN FAVOR OF THE VACCINES, OR FROM A PRO-VACCINE POINT OF VIEW, BECAUSE WE NEED FAST, AFFORDABLE TREATMENTS AND A CURE. You said that very well. And we need that new, fast methodology in place for the next pathogen. But in my view, that methodology will mean embracing a component-based antiviral approach, with plug-and-play elements that tackle the sequential or concurrent viral lifecycle steps, either independently or in a cohesive therapeutic package. Much like how we treat cancer: there are typically many drugs taken at once, each with a different goal. We don't want to test for the wrong thing and lose a valuable potential ally.
On 2021-07-29 07:51:21, user Portal Cedip wrote:
I am surprised by a country that was punished so badly by COVID-19 due to its nihilism and purely academic debates, which missed the point even after recognizing that SARS-CoV-2 makes children sick and kills children and young people. But, you know Winston, their finest hour will not come until children get sick and die in numbers, and even those numbers would not represent the TOTAL burden of the disease (just 6,340 boys, of whom 700 went to the PICU and, oh, maybe 13 died, eventually more. Who cares? Just another non-Caucasian problem). There is a sense of safety about a far-away condition that colonizes, infects, gets children and young people hospitalized, complicates 10% of cases and kills with a lethality of ONLY 2%. I saw my pediatric unit get exhausted by the large number of teleconferences with boys we could not hospitalize. The crisis was burning out or infecting our teams. We were under attack, but the non-translational, sweatless sirs were complaining that we were being hysterical and overplaying our hands with our small patients. And our government did the impossible: a country ranked 27th in health services got into the top 10 in number of cases and deaths per 100,000. We did not see our boys dying in front of us, but we were overwhelmed at all ages. Our country of 19 million people had 130,000 CYP infected. Three thousand were hospitalized; half of them had a critical trajectory or came back from home with TIMPS. One hundred died; eighteen were less than 1 year old. That is crude data. Most of it occurred during the second wave, after we naively thought we had gotten rid of the virus (Christmas 2020). But the virus gave itself a gift from England: the Delta variant, which seized the country for 4 additional months. Now it is calm again. You trust that it surrendered to vaccination, a plan that already covers more than 65% of the population. <br /> NO<br /> I do not.
My best wishes. With personal regards from the very south of the world,
Ricardo
On 2020-10-16 16:08:03, user COVIDscience wrote:
These data seem to contradict a previous study published in JCI by Yanqun Wang and colleagues, where increased IgG titers towards the OC43 spike were associated with more severe disease outcome (https://doi.org/10.1172/JCI138759). The time after infection at which the sera from the mild (outpatient) and severe (inpatient) cohorts were obtained is not specified in the study from Martin Dugas and colleagues. Potentially, these are not similar in the different patient groups.<br /> Given the complexly linked kinetics of antibody titers in COVID-19 patients towards SARS-CoV-2 and other coronaviruses (https://doi.org/10.1101/2020.10.12.20211599), this may change our perspective on these data.
On 2020-10-22 17:59:48, user Jeremy Rolls wrote:
I repeat below my comments on the earlier version, but with an update on the numbers I referenced. London has continued recently to have much lower hospital deaths than its share of the population would suggest, even though, if antibody data alone were used as a judge of how many have been infected, it would seem that 80+% of Londoners were yet to be infected, as against 90+% nationally. London has 15.95% of the population of England but since June 1st has had only 7.25% of the deaths. (The numbers were pretty consistent through June, July and August, rose in September, but have then fallen back in October month-to-date. I suspect the rise in September is because London was earlier than other regions in seeing the impact of whatever the reasons for the general rise across Europe have been.) This continues to tell me that antibody data is not providing the full picture on who has been infected or may have pre-existing immunity, and that the low death numbers (both absolute and relative) in London are because the virus is naturally running out of people to infect.
A strategy of partial lockdown does not seem like a logical or proportionate response. Imperfect though it may be (and there is no perfect solution), the Great Barrington concept of focused protection would seem a far more sensible way forward, whilst allowing the rest of us to achieve herd immunity and get on with our lives.
Fascinating paper. Looking at the antibody data (such as there is any published here in the UK), about 18% of people in London have antibodies compared to about 8% nationally. On that basis alone, 82% of Londoners may still get infected compared to 92% nationally - i.e. you would expect the mortality rate in London still to be pretty close to the national rate. Yet the hospital death stats for covid-19 in recent weeks show London's rate consistently to be less than 40% of the national rate. Something else must, therefore, be going on: a) London is locking down better (unlikely), b) antibody immunity does not give the complete picture (possible, given the data coming out of Sweden showing that for every person having antibodies two others have T-cell immunity), or c) there is a % of the population who have pre-existing resistance (from exposure to other coronaviruses) or are biologically incapable of getting infected. Ruling out a), a quick bit of maths shows about 75% of the population must fall into b) or c). So, on that basis, in London well over 90% have either been exposed to the virus or have pre-existing immunity, and maybe 80-85% nationally. I suggest herd immunity has probably been achieved in London and is close in many other parts of the UK.
On 2020-10-26 13:59:51, user Chen Yanover wrote:
This preprint has been published in JMIR Public Health Surveillance here.
On 2020-10-28 18:00:00, user Tomas Hull wrote:
How is herd immunity tested?
On 2021-11-16 11:57:33, user disqus_aUdf6iYESf wrote:
This is an interesting study, and not an easy one to do. I congratulate the authors on their work.
I agree with the authors that the study is hypothesis generating.
A few questions/comments:
1) The authors describe no delay as being "score >2 SDs above the population mean". If no delay is the inverse of delay, I think this should be "a score higher than the cutoff for delay of 2 SD below the population mean." A score >2 SD above the population mean would capture only a very small proportion of children (about 2.5% of the population): the developmentally advanced ones.
2) As the authors note, using a questionnaire (Age and Stages) by phone is not ideal for evaluation, and responses could be biased by parental knowledge of maternal SARS-CoV-2 infection.
3) The numbers with infection in the first trimester are small (only 5 children), but 4/5 (80%) had developmental delays, as compared to 6/20 in second trimester (30%), and 20/273 with infection in third trimester (7.3%). Those are striking differences, with a "dose-response" type pattern by trimester, but the numbers are small, so this study would need to be replicated by other groups, ideally with testing with the Bayley scales or other administered instrument.
4) A control group without SARS-CoV-2 infection would be important as an additional comparison group, and was not present. This would give a sense of whether in the population who responded and were assessed by phone questionnaire, the rate of developmental delay (score < mean - 2SD) was similar to that expected in the general population.
For all of these reasons, I think further studies are required to definitively state that maternal SARS-CoV-2 infection in the first or second trimester is associated with developmental delay, but this study provides preliminary data that this might be the case. It appears other studies in progress propose to prospectively address this question (e.g., PROUDEST study in Brazil), and such studies are required for a more definite answer as to whether SARS-CoV-2 infection early in pregnancy affects child neurodevelopment outcomes.
On 2020-11-09 00:43:51, user Antonio Gasparrini wrote:
The article is now published in the International Journal of Epidemiology.<br /> You can access the full-text for free at:<br /> https://academic.oup.com/ij...<br /> http://www.ag-myresearch.co...
On 2021-11-26 12:38:36, user Richard Hockey wrote:
It would be interesting to repeat this in other Australian cities that had very limited lockdown and very few Covid cases such as Brisbane or Perth.
On 2021-11-29 12:51:15, user Eleutherodactylus Sciagraphus wrote:
The ethical misconduct related to this work has also been covered by BMJ: https://www.bmj.com/content...
On 2021-11-29 13:12:04, user HarryT wrote:
Other research shows that current vaccines induce excellent immunity against all variants, most likely including Omicron. Just get vaccinated.
On 2023-12-28 09:01:29, user Till Dembek wrote:
Dear colleagues,<br /> I congratulate you for conducting a well-planned study investigating the relationship between tremor and DRTT activation - and for allowing discussion of your results by publishing a preprint. However, I strongly discourage you from overinterpreting your “slope=1” relationship as proof that DRTT activation is causal for tremor suppression.
This absolute value of the slope is, in my opinion, meaningless and purely coincidental. For this to be a true finding with absolute meaning, all your underlying steps would also need to be true. Your lead localization would need to be true, the way you track and threshold the DRTT would need to be true, the way you identify tremor from accelerometry would need to be true, and most importantly, the way you measure “stimulation spread” and relate this to DRTT activation would need to be true – none of which is probably the case.
That there was a strong relationship between tremor suppression and DRTT overlap once again highlights the possible importance of the DRTT. I would find the results of this well-conducted study far more convincing if all the exclusions of datapoints, smoothing, normalization, relativization, etc. were removed and “raw” correlations/regressions with their respective goodness-of-fit parameters were shown – no matter the absolute “slope” of these results.
One additional point:<br /> While you apparently included several random effects in your analysis, which are not reported upon, you do not address stimulation spread / stimulation amplitude as the main confounding factor. Stimulation amplitude alone will explain a lot of the variance in tremor improvement – and is of course highly correlated to DRTT activation, so that it is difficult to disentangle the two.
Best regards from Cologne!<br /> ~Till Dembek
On 2024-03-25 10:04:24, user S wrote:
Interesting study; however, the authors failed to reference previous studies that externally validated the PCE and Framingham risk models among patients in the UAE. By not referencing such validation studies, the authors missed an opportunity to provide additional context and strengthen the evidence supporting the use of these risk models in their study population. It is important to acknowledge existing literature to ensure the robustness and reliability of the study's findings.
https://doi.org/10.1136/bmj...<br /> https://doi.org/10.1186/s12...
I extend my best wishes to the authors for a successful publication.
On 2024-04-20 08:35:56, user matthieuboisgontier wrote:
This article has been accepted for publication in PTJ: Physical Therapy & Rehabilitation Journal published by Oxford University Press.
On 2024-04-28 03:09:35, user Paul Bladowski wrote:
Gee I hope doctors come up with something quick. <br /> I've been suffering from severe TSW for about 7 years now.
On 2024-04-28 10:09:53, user Menucha Bernstein wrote:
As someone going through TSW this is of unimaginable importance. My gp has never heard of tsw before and so leaves me without any validity of what I'm going through and leaves me without possible remedies. This research would help have an impact by spreading more awareness for doctors and give patients that official diagnosis which is so important.
On 2024-05-07 15:59:05, user Javier Mancilla-Galindo wrote:
Interesting paper aiming to estimate the prevalence of hepatitis B (HBV) and C (HCV) virus infections in the periods before and after the introduction of universal child vaccination against HBV (UCVHB) in 2002.
In the period before UCVHB, the prevalence was 7.7% (109 cases out of 1424 participants), a number higher than that after 2002: 1.9% (36 cases out of 1934 participants). I calculated the crude prevalence ratio using these numbers and by setting the before UCVHB category as the reference, obtaining a PR = 0.24. Likewise, I calculated the OR, obtaining a crude OR = 0.23. The inverse of this OR, which would correspond to the OR with the after UCVHB category as the reference is 4.37. Therefore, I believe there may be a mistake in the odds ratios provided in this manuscript, probably due to a coding error when setting the correct category as the reference. As shown in table 1, the reference category was intended to be before UVC, but the authors seem to have provided the results when setting the category after UVC as the reference.
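The crude ratios can be recomputed directly from the reported counts; this is a quick sketch using the numbers quoted in my comment above (not re-extracted from the manuscript):

```python
# Crude prevalence ratio (PR) and odds ratio (OR), with "before UCVHB"
# as the reference category, from the counts cited above.
cases_before, n_before = 109, 1424   # before UCVHB: 7.7%
cases_after, n_after = 36, 1934      # after UCVHB: 1.9%

pr = (cases_after / n_after) / (cases_before / n_before)

odds_before = cases_before / (n_before - cases_before)
odds_after = cases_after / (n_after - cases_after)
or_after_vs_before = odds_after / odds_before
or_before_vs_after = 1 / or_after_vs_before  # "after UCVHB" as reference

print(round(pr, 2), round(or_after_vs_before, 2), round(or_before_vs_after, 2))
# → 0.24 0.23 4.37
```

The inverse OR of 4.37 matches what one would obtain by (mistakenly) setting the "after UCVHB" category as the reference, consistent with the suspected coding error.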
My overall suggestion to improve the reporting of this study would be to review the complete STROBE statement https://doi.org/10.1371/journal.pmed.0040297 to fully report all recommended items, since some explanations are lacking, particularly for the statistical analyses.
On 2024-06-05 18:47:47, user Zhaolong Adrian Li wrote:
This study is now published as: Li ZA, Ray MK, Gu Y, Barch DM, Hershey T. Weight Indices, Cognition, and Mental Health From Childhood to Early Adolescence. JAMA Pediatr. Published online June 03, 2024. doi:10.1001/jamapediatrics.2024.1379
On 2024-08-22 14:44:00, user Gabriel Baldanzi wrote:
This paper is now published in CHEST https://doi.org/10.1016/j.chest.2023.03.010
On 2024-09-13 18:12:14, user Leandro Hermida wrote:
No one seems to realize this paper has done an incredible and time consuming job of manually curating, harmonizing, and standardizing all the drug names used in therapy in TCGA. GDC didn't harmonize the drug therapy clinical data. This resource makes a lot of research possible. As of Sept 2024 it's still virtually up-to-date when checked against GDC TCGA Data Release v41, I only had to add/fix a few entries! Excellent work Enrico!
On 2024-09-23 09:53:00, user Carlos Carlos wrote:
How did you isolate the effect of vaccination in relation to other preventive measures?<br /> And generally, the countries that carried out the most efficient vaccinations were also those that used other preventive measures most efficiently.
On 2024-10-15 23:16:55, user CDSL JHSPH wrote:
Thank you for sharing your exciting research. Tuberculosis is a serious infectious disease that imposes a heavy burden on the world. Prolonged treatment duration increases adverse drug reactions and reduces patient compliance, which is one of the most challenging aspects of TB drug treatment. Traditional methods have many shortcomings and limitations, so finding new methods that can accurately predict treatment duration is of great significance. In this study, you found that model-based methods, especially MCP-Mod (Multiple Comparisons and Modeling), outperform traditional qualitative methods in determining the optimal duration of antibiotic treatment. This is an exciting study.
However, I have some questions about your study. Your study focuses on the treatment of TB; do you plan to extend it to other infectious diseases that also require long-term treatment, such as HIV, HBV, or malaria? The dataset you used is from DGM, and I am not sure whether these generated patient data have the same heterogeneity as real patient data (do patients in the real world have more complex medical conditions that affect TB treatment?). In addition, in your study it seems that all patients receive only one treatment regimen by default from the beginning to the end of treatment, but I believe that the situation in the real world may be more complicated. Some patients may change their treatment regimen during treatment for various reasons. Do MCP-Mod or other model-based methods still perform well in such a real environment?
Finally, the idea of finding or creating new methods to accurately predict the duration of treatment is very creative. Looking forward to your new discoveries.
On 2024-11-08 16:21:12, user Kristin Ressel wrote:
Changes to this manuscript were made during the article submission process to the journal Archives of Physical Medicine & Rehabilitation. It is now published and can be found using the citation provided below.
Freburger, J. K., Mormer, E. R., Ressel, K., Zhang, S., Johnson, A. M., Pastva, A. M., Turner, R. L., Coyle, P. C., Bushnell, C. D., Duncan, P. W., & Berkeley, S. B. J. Disparities in Access to, Use of, and Quality of Rehabilitation Following Stroke in the United States: A Scoping Review. Archives of Physical Medicine and Rehabilitation. https://doi.org/10.1016/j.apmr.2024.10.010
On 2024-11-21 21:32:51, user Tommaso Dragani wrote:
Interesting article, like all the others by John that I have had the pleasure of knowing personally. The results of the study are based on mathematical models, which I do not want to question.<br /> However, I would like to suggest to the authors to conduct an epidemiological study on real data to understand the trend of causes of death in the years 2021-2023, years in which there is an excess of mortality in Western countries that is not easily explained.<br /> It would be very interesting to carry out a large study, of the case-control type, comparing the mortality risk of vaccinated and unvaccinated people.
On 2025-02-25 16:04:51, user Peter C Gøtzsche wrote:
You estimate that from 1.4 to 4.0 million lives were saved with the covid vaccines.
I do not remember having seen a paper with so many assumptions before. You do sensitivity analyses but I wonder what the reliability of this study is.
You assume, for example, that absent vaccination, the whole world would have got infected with the Omicron variant, and that vaccines reduce mortality by 75%.
Since the virus mutates all the time, and since there were too sparse data on mortality in the randomised trials to tell us if the vaccines reduce mortality, I am sceptical towards any estimation of what a possible effect on mortality would be, and I would expect this to be a rapidly moving target, too. We have seen how rapidly the protective effect against infection dropped after the trials were done, down to about 50%. Most people I have talked to, including myself and my wife, got covid despite being fully vaccinated, but of course, we might have died if we had not been vaccinated.
On p29, you write: Vaccine effectiveness for death: We assumed VE=75% during the pre-Omicron period and 50% during the Omicron period.
Where did these estimates come from? I wonder if it is possible to say anything about an effect on mortality.
On 2024-12-01 14:50:07, user xPeer wrote:
Courtesy review from xpeerd.com
Summary
The preprint titled "Financial incentives to motivate treatment for hepatitis C with direct acting antivirals among Australian adults" investigates how financial incentives influence the initiation of direct-acting antiviral (DAA) therapy for untreated hepatitis C virus patients in Australia. Utilizing Bayesian adaptive design, the study assigns participants varying levels of financial incentives to observe which incentive levels effectively promote treatment initiation. The study is thorough in detailing statistical methods, including primary and secondary analysis plans, making it potentially influential for public health policy.
Major Revisions
Bias and Confounding Variables: While the study employs Bayesian adaptive design and randomization, there is insufficient discussion on potential biases and confounding variables that could affect the study's results, such as differences in demographic variables, healthcare access, or socioeconomic status (Page 9, Study Design).
Data Accessibility:
Availability of Data for Replication: The document should explicitly state how and where the data will be made available for replication purposes, adhering to good scientific practices (Page 12, Data Availability Statement).
Outcome Measures and Analysis:
Recommendations
Minor Revisions
Replace "payment amounts are made" with "payments are made" (Page 2, Abstract).
Formatting Issues:
Standardize the presentation format of equations and mathematical notations to enhance readability (Page 6, Effect of co-incentives).
AI-Generated Content Analysis:
On 2024-12-03 10:06:13, user Ssekitoleko Twaha wrote:
Wonderful! This is a good study; it shows that Mbarara has the majority of the victims, which calls for further studies: "Why Mbarara?"
On 2024-12-04 12:43:08, user Hanna Dellago wrote:
This article has been published in a peer-reviewed journal: https://www.dovepress.com/decongestant-effect-of-coldamaris-akut-a-carrageenan--and-sorbitol-con-peer-reviewed-fulltext-article-IJGM <br /> The article has been indexed on Pubmed: https://pubmed.ncbi.nlm.nih.gov/39534593/
On 2025-01-03 13:38:21, user Ekkehard hewer wrote:
The final version of our article after peer-review is now published in the Journal of Clinical Pathology (J Clin Pathol. 2024 Dec 9:jcp-2024-209695. doi: 10.1136/jcp-2024-209695.)
On 2025-01-10 10:40:00, user Miles Markus wrote:
These interesting results suggest that because of its mode of action, primaquine could be inactivating a proportion of the asexual parasites that are hidden in the spleen and bone marrow. See: https://doi.org/10.3390/tropicalmed8050278
On 2025-01-21 00:01:42, user Alan Olan wrote:
The article "Discovery of Breast Cancer and Autism Causes. Method of combining multiple researches to find non-infectious disease causes” has been published in the Journal Of Nursing and Healthcare ( 2024, volume 9, issue 3 ) and available at the link below:<br /> https://www.opastpublishers.com/peer-review/discovery-of-breast-cancer-and-autism-causesbrrnmethod-of-combining-multiple-researches-to-determine-noninfectious-disea-7819.html
On 2025-02-03 12:06:39, user Prop Joe wrote:
The ability to control for genetic risk in observational studies using PGS is often quite poor, as PGS tend to be very noisy at the individual level; it would be more effective to use something like PENGUIN ( https://www.pnas.org/doi/epub/10.1073/pnas.2408715121 ) or structural equation modelling.
On 2025-02-12 19:59:20, user Aron Troen wrote:
Review Part II
Methodological shortcomings<br /> Study population and period: The population demographics used as the denominator of per capita caloric requirement rely on census data from 2017 and UN OCHA reports on movement and displacement of the population between Gaza governorates during the war. The study states that no adjustments were made for out-migration or excess deaths. However, approximately 150,000 people left the Gaza Strip from the beginning of the war until the Rafah crossing was closed in May. When added to casualties and a natural death rate of ~5,500 people per year, this means that the population denominator used to calculate the food supply in Kcal per person-day (Figure 4) was overestimated by ~200,000 people, which would result in the underestimation of the food supply by approximately 10%.
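The denominator argument can be sketched numerically. The ~2.2 million pre-war population is an illustrative assumption on my part; the ~200,000 adjustment combines the out-migration, casualty, and natural-death figures quoted above:

```python
# Per-capita supply = total supply / population, so an inflated
# denominator understates per-capita kcal by the ratio of the two
# population figures. Both figures below are illustrative assumptions.
pop_used = 2_200_000              # denominator assumed in the study (illustrative)
pop_adjusted = pop_used - 200_000 # after out-migration, casualties, natural deaths

understatement = pop_used / pop_adjusted - 1
print(round(understatement * 100))  # → 10 (percent)
```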
The authors acknowledge the limitation that “There remains considerable uncertainty about our population denominators in the north, and even moderate error in these would have affected our Kcal per capita estimates. Gaza’s population has probably decreased due to high mortality and out-migration…”. Nevertheless, they shrug off this limitation by asserting that “…we expect this to have only marginally affected our estimates.” without explaining why.
Data on truck deliveries
The comparison between UN and Israeli shipping data is superficial and inadequate for supporting the decision to dismiss and exclude the data from the analysis. The authors fail to discuss the literature, of which they surely must be aware, which addresses the high-profile controversy over the number of trucks supplying aid to Gaza and the discrepancies between the UN and COGAT data, and which notes the under-reporting of private sector food shipments by the UN (see, for example, Rosen, Bruce and Nitzan, Dorit, Humanitarian Food Aid for Gaza: Making Sense of Recent Data (June 02, 2024), available at http://dx.doi.org/10.2139/ssrn.4851635).
Although the authors note the "large discrepancy between UN and Israeli government data" on the entrance of goods into Gaza, they erroneously assert that UNRWA monitored the composition of “ALL trucks” crossing into Gaza, despite the partial coverage of non-UN food consignments, and despite disclaimers published by UNRWA and recorded by the authors that the data from May-August are incomplete. The authors make little effort to help the reader understand the reason for the discrepancy or to explain how they reached the conclusion that UNRWA's dataset "appeared highly complete and well-curated, but may be biased by systematic under- or over-reporting unknown to us". Instead of making a serious effort to include COGAT data to improve the accuracy of their simulation, they perform a perfunctory comparison of the UN and COGAT data and justify the summary dismissal of the Israeli registry, using the categorical listing of truck weight registered by COGAT as “evidence of digit heaping or crude approximation”. This is a peculiar choice, given the importance of the COGAT dataset, which is included in the June IPC report and in a working paper that the authors cite that analyzes the caloric content of food supplied to Gaza, including private sector shipments that are missing from the UN data (now published at https://ijhpr.biomedcentral.com/articles/10.1186/s13584-025-00668-6). An alternative choice might have been to simulate the weight and contents of the COGAT data as the authors did for incomplete WFP data, or to perform a sensitivity analysis comparing how caloric supply estimates might differ based on the data and assumptions used.
Instead, the study implies that the discrepancy has more to do with the weight of aid reported than with the number of trucks. However, significant gaps are also evident in the number of trucks reported. For example, in February, UNRWA reported 1,857 trucks carrying food while COGAT's figure is 15% higher (2,117). In January the gap is equally large, with COGAT's number of trucks 13% higher than UNRWA's (3,364 and 2,990 respectively). According to COGAT, between January and May 2024, "as a result of the UN’s partial counting… there are 3,406 trucks missing from their Kerem Shalom data and 2,198 trucks missing from their Nitzana/Rafah data" (https://govextra.gov.il/media/dtmhzmtn/discrepancies-in-un-aid-to-gaza-data-2.pdf). Furthermore, the period analyzed covers several unexplained changes in UNRWA's dashboard (https://honestreporting.com/how-unrwa-covers-up-its-faulty-gaza-food-data/), apparently following data-driven criticism about its methodology and lack of transparency on social media (https://x.com/AviBittMD/status/1780052840930578499). According to a FEWS NET report, "on September 8… UNRWA’s dashboard was updated with additional supply data for August, as well as for previous months, including commercial truck entries as reported to UNRWA." UNRWA has not disclosed where the new data on commercial trucks came from or how far back the data update went.
The subsequent calculation of caloric availability includes a mix of registered and simulated data, in which the simulation parameters extremely underestimate the caloric supply. The model derives the simulated distribution of estimated Kcal per truck as described in the methods and shown in supplementary figure A1: “We reconstructed the number of these trucks over time based on published information and data shared by WFP. As no data on content were available, we simulated their caloric equivalent by repeatedly sampling from the empirical distribution of calories per truck obtained from the UNRWA dataset.” There are several problems with this approach. First, it is unclear which specific truck data “shared by WFP” were used for this simulation, and whether they are publicly available. This should be clearly indicated in the uploaded GitHub data files. Moreover, the WFP records the contents of their shipments. Why were their contents omitted in this case? Presenting summary tables in the article would help orient the reader to the source data for the truck counts used, distinguishing between simulated or assumed contents and actual contents. An implicit assumption underlying the simulation of WFP contents according to the estimated distribution of calories in UNRWA trucks is that the contents of UNRWA and WFP shipments are the same. This needs to be documented, or the assumption should be made explicit. Given that the study appears to significantly underestimate the weight of the UNRWA pallets, the procedure used would be expected to propagate biased estimates lower than the actual weights to the WFP data as well.
The most critical problem in the model is with the ASSUMED weights that the authors assign to the consignments. They assume mean pallet weights to be 637.5 kg per pallet, with a minimum to maximum weight of 510-765 kg per pallet (gaza_food_data.xlsx, general tab), based on citations 23, 30 and 31. Citation 23 does not provide supporting data and refers to IPC reports in general. Citations 30 and 31 are standard operating procedures for the Egyptian Red Crescent (ERC) from October and November 2023, which REQUIRE an 18% higher palletization weight of 750 kg. However, even this value is considerably lower than UN aid REQUIREMENTS that specify pallet weights for wheat flour (1125-1200 kg/pallet), sugar (1200 kg/pallet), chickpeas (1200 kg/pallet), red lentils (1200 kg/pallet), rice (1200 kg/pallet), SF oil (910-1213 kg/pallet) or milk (655 kg/pallet) (UNRWA Special Shipping Instructions for Shipments by Sea, Air and Land – April 2024, page 6; https://unrwa.org/sites/default/files/emergency_gaza_2023-_rfq-pskh-42-24-the_provision_of_man_trucks_for_gfo-tender_doc.pdf). Examination of dataset “20240911_Commodities Received.xlsx” reveals that consignments attributed to ERC alone or with other agencies (including UNRWA) account for only 90,009 of the total of 531,175 food line items (17%) and 8,085 of the total of 22,833 mixed line items (35%). Therefore, even if the mean value of 637.5 kg/pallet were correct for the ERC-associated consignments, the weights assigned to the foods supplied are unreasonably low, giving an extreme underestimation of the calories supplied.
This unreasonably low distribution of the estimated Kcal per truck can be seen in the simulated truck weights. The histogram in Appendix figure A1 shows a distribution that is heavily skewed to the left, with the vast majority of trucks carrying less than 50 million Kcal and perhaps a third carrying less than 25 million Kcal. The simulated lower end of the distribution, which begins with 600 trucks carrying zero Kcal/truck, is highly unlikely to be accurate. Even if one takes the mean weight per truck assumed by the researchers, 14,500 kg, multiplying by the calorie content of wheat flour (3,640 Kcal/kg) would give a mean calorie content per truck of 52.8 million Kcal. Even if a lower caloric density of circa 3,200 Kcal/kg were used, based on visual inspection of Figure 3A (Kcal/kg food consignments between Oct 21 2023 – May 4 2024), the assumed mean caloric content of the food trucks should be 46.4 million Kcal. These values are hard to reconcile with the histogram, even if the assumed and simulated truck weights in the model are true. Thus, the validity of the model assumptions and their potential for propagating error and uncertainty in the results should be carefully revisited.
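The per-truck arithmetic above can be checked directly. The payload and caloric-density values are the assumptions stated in this comment, not figures verified against the preprint:

```python
# Back-of-envelope check of mean caloric content per truck under the
# assumptions discussed above (all inputs are the values quoted in the
# comment, not the preprint's own outputs).
mean_truck_kg = 14_500          # assumed mean payload per truck
kcal_per_kg_flour = 3_640       # wheat flour
kcal_per_kg_mixed = 3_200       # lower-density mix, read off Figure 3A

print(mean_truck_kg * kcal_per_kg_flour / 1e6)  # → 52.78 (million Kcal)
print(mean_truck_kg * kcal_per_kg_mixed / 1e6)  # → 46.4 (million Kcal)
```

Either figure sits well above the bulk of the simulated histogram, which is the inconsistency flagged above.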
Data on other food sources
Estimates of the available existing food supply before the war combine the household stocks of humanitarian food aid, data provided to the researchers by UNRWA giving the exact stocks in UNRWA warehouses and the range of minimum-maximum capacity of WFP warehouses before the war, estimates of existing private stores, and estimates of agriculture and livestock production, discounted for gradual depletion and destruction during the war's early months. The model does not account for potential Hamas stockpiles (https://www.nytimes.com/2023/10/27/world/middleeast/palestine-gazans-hamas-food.html).
The spreadsheet “gaza_food_data.xlsx” tab “warehouses” lists total UNRWA and WFP warehouse capacity before the war as a range with a minimum to maximum capacity of 7,900-21,479 MT or 28.7 – 78.1 billion Kcal, whereas presumably, the “exact” contents of the food in UNRWA warehouses are those data listing a total of 38.3 billion Kcal of food in tab “unrwa_stocks”. No further information is provided to ascertain that the data given to the researchers by UNRWA and WFP is complete and accurate.
Existing private stores/Caloric balance and consumption: The text describes the assumptions used in estimating the existing stores and their depletion during the war. The text defines model parameters (e.g., I0, I0,m, etc.) but does not spell out the full model equation. Doing so would help readers better understand the explicit logic of the simulation. <br /> The model discounts agriculture and livestock production using estimates of the rate and extent of damage to agricultural infrastructure, citing UNOSAT remote sensing data published by FAO (references 11, 40-42). The validity of estimates derived from image analysis depends heavily on the control conditions selected as a reference and on the quality of validation and calibration in the field. The percent damage arrived at by automated image analysis algorithms depends on the selected reference conditions, whose rationale and validity are not given. Field validation is impossible in a war zone, which is why the cited reports carry important disclaimers such as: “This assessment has been conducted based on available satellite imagery, ancillary data and remote sensing analysis for the period 7 October - 31 December 2023 without field validation. Land cover data from 2021 was used as baseline data due to limited availability for data collection in the area of interest and time constraints related to the nature of the report.” (https://openknowledge.fao.org/server/api/core/bitstreams/f2ad2f59-0c29-472e-978b-54cef347c642/content). The limitations of these estimates used in the model should be acknowledged.
Estimating Baseline and Recommended per-capita caloric intake
The per-capita caloric intake for emergency-affected populations is given by the WHO guide and is stratified by age and sex. Given the age and sex distribution of the population of Gaza (gaza_food_data.xlsx, tab prop_age_sex), the mean daily per capita calorie requirement for the population is 2,065 Kcal/person-day. This threshold, shown in yellow in Figure 4, is the appropriate criterion for evaluating the adequacy of the food supplied by the humanitarian food cluster. <br /> However, the researchers go beyond this consensus humanitarian requirement and derive a much higher Gaza-specific estimate “I0” for the population intake at baseline. The baseline value of I0 appears to be just under 2,800 Kcal per person-day according to Figure 4 (blue line value on October 7th, 2023). The paper does not give the baseline value “I0” explicitly. However, it is nearly identical to the weighted average caloric intake (2,837 Kcal/person-day) observed in a population of obese older Gazan adults (mean age 57, weighted mean BMI 31.4) with a high prevalence of noncommunicable diseases, in a survey conducted during the COVID pandemic between March and July 2020, which was used to impute the daily intake of the overall population. The weighted intake and BMI may be calculated from the data provided in the gaza_food_data.xlsx spreadsheet, tab prop_age_sex. The estimated pre-war intake is roughly 33% higher than the humanitarian requirement, or “recommended daily intake”. The model derives the weekly available per-person food supply by subtracting this pre-war intake estimate from the estimated weekly available daily per-capita food supply (from the sum of private stores and warehouses, agriculture and delivered food aid, discounted for reported consumption and damage). The model makes the questionable assumption that the emergency-affected population would continue to consume the same amount of food during the war as it did before the war.
Even before examining the validity of the method used to derive “I0”, this assumption forces the model to deplete the available food supply significantly more rapidly (about 33% sooner) than if the recommended humanitarian food requirement were used to simulate the adequacy of the available food supply.
The logic behind the method of imputation to the whole population is not clearly explained (“we sampled random values from each age-sex stratum distribution…” Appendix A, Figure A2). <br /> Supplementary figure A2, entitled “Baseline adult caloric intake”, shows simulated untransformed and log-transformed, age- and sex-specific distributions of energy intake from Abu Hamad et al., J Hum Hypertens 2023. That reference describes a health survey conducted in Gaza between March and July 2020 among adults aged 40 and older, using the semi-quantitative Food Frequency Questionnaire for Palestinian Populations, which was developed by Hamdan et al. in a population of Palestinian women in Hebron and published in Public Health Nutrition 17(11) in 2013. While such survey tools may be useful for epidemiological studies, they are intended to classify populations into categories of relative nutritional intake, rather than to derive valid absolute individual nutrient intakes. In the case of the specific instrument used, Hamdan et al. write that studies like theirs “can be considered a calibration and correlation rather than a validation procedure”. The correlation that they obtained in that study between three repeat 24-hour food recall questionnaires and the semi-quantitative FFQ was 0.601 and was not statistically significant (in other words, the FFQ gives a similar but poorly concordant result relative to the reference standard). Moreover, it is doubtful whether the high average food intake of an obese, older and unhealthy population (obtained during a health crisis that increased sedentary behavior due to social distancing and isolation) provides a sound basis for imputing routine intakes for a population that is predominantly younger (82% of Gaza's population are below age 40 – see gaza_food_data.xlsx, tab prop_age_sex), healthier, and not affected by a pandemic.
It would be helpful if the researchers clarified these limitations, presented the derived age- and sex-stratified per-person daily caloric intake, and compared it with the consensus humanitarian requirements.
On 2025-02-21 05:09:57, user Evan Stanbury wrote:
Re "Serological evidence of recent Epstein-Barr virus (EBV) reactivation was observed more frequently in PVS participants". EBV causes glandular fever, which often leaves sufferers with a Post-Viral syndrome similar to Long COVID (and the sick cohort). This is not directly attributable to the vaccine.
On 2025-02-23 14:32:17, user Shin jie Yong wrote:
"Recently, a subset of non-classical<br /> monocytes has been shown to harbor S protein in patients with PVS [18]" - reference #18 cites Patterson et al. (2020), which might be an error since that study examined long COVID participants only. My apologies if I'm mistaken, however.
On 2025-03-13 02:23:30, user Lengyel wrote:
This article has been published<br /> PMID: 39753552<br /> PMCID: PMC11698969 <br /> DOI: 10.1038/s41467-024-55440-2
On 2025-03-15 20:05:42, user Josef wrote:
Suggesting that the Iranian government’s COVID-19 data was “engineered” is an overreaching claim that is insufficiently supported by robust statistical diagnostics, leaving a gaping void between speculation and scientifically substantiated evidence.
On 2025-03-17 22:11:06, user Dr.PayamVaraee wrote:
Critical Review: "The Threat of Populism to Science and Global Public Health: Lessons from Iran"<br /> A. Critique of Content and Main Claims<br /> 1. Claim: Populist Science Increased Mortality in Iran<br /> The article asserts that populist policies delayed vaccination efforts in Iran, leading to excess mortality. However, data comparisons with countries such as the US, UK, and Germany reveal similar trends, challenging the uniqueness of Iran’s case.
Issues:<br /> Overlooking Key Variables: The analysis does not account for factors such as economic sanctions, healthcare infrastructure, and demographic differences.<br /> Post-Vaccination Mortality Decline: The significant drop in mortality following mass vaccination aligns with global patterns, suggesting that other factors played a role beyond populist decision-making.<br /> Flawed Comparisons: The article contrasts Iran with Bahrain and the UAE, despite major differences in population size, vaccine availability, and healthcare systems.<br /> 2. Claim: Iranian Data on COVID-19 Mortality is Unreliable<br /> The article utilizes the Prophet model to argue that Iranian mortality statistics were manipulated.
Issues:<br /> Limitations of the Prophet Model: Originally designed for economic and social trend forecasting, this model is not optimized for analyzing health crises.<br /> Weak Evidence for Data Manipulation: The assumption that discrepancies between projections and reported data indicate fraud is flawed. Factors such as improved treatment strategies and emerging herd immunity are not considered.<br /> Selective Application: The same predictive model is not used to assess data accuracy in other countries, raising concerns about bias.<br /> B. Critique of Data Analysis Methods<br /> 1. Misuse of ANOVA<br /> The article employs a one-way ANOVA to compare vaccination delays across countries. However, this method does not sufficiently account for assumptions of normality and homogeneity of variance, potentially leading to misleading conclusions.
Better Alternatives:<br /> Time-Series Models (ARIMA, VAR): These would provide a more accurate assessment of trends over time.<br /> Multivariate Regression: This method would allow for the inclusion of additional variables influencing vaccination delays and mortality rates.<br /> 2. Absence of Confounding Variable Control<br /> The article does not adjust for important factors such as:
The proportion of elderly populations.<br /> Hospitalization rates and healthcare capacity.<br /> Lockdown policies and mobility restrictions.<br /> Neglecting these variables weakens the argument that Iran’s excess mortality was driven primarily by populist policies.
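As a minimal illustration of the time-series alternative suggested above (all data here is synthetic, not taken from the article): even a simple AR(1) model captures the week-to-week persistence in mortality counts that a one-way ANOVA on group means ignores, and its residuals are where abrupt, unexplained jumps in reported figures would show up.

```python
import numpy as np

def fit_ar1(series):
    """Fit y[t] = c + phi * y[t-1] + e[t] by ordinary least squares.

    A minimal stand-in for the ARIMA-style models suggested above:
    instead of comparing group means, we model how each observation
    depends on the previous one.
    """
    y = np.asarray(series, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])  # intercept + lag-1 term
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    c, phi = coef
    resid = y[1:] - X @ coef  # anomalies in reported data would appear here
    return c, phi, resid

# Synthetic weekly mortality counts with strong persistence (true phi = 0.8).
rng = np.random.default_rng(0)
y = [100.0]
for _ in range(299):
    y.append(20.0 + 0.8 * y[-1] + rng.normal(0, 5))

c, phi, resid = fit_ar1(y)
```

With 300 points the estimated `phi` lands close to the true 0.8; a full analysis would of course use a dedicated library (e.g. statsmodels' ARIMA) and add covariates, which is the multivariate-regression point made above.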
C. Logical and Argumentative Flaws<br /> 1. Selective Data Use<br /> The article emphasizes evidence that supports its argument while disregarding counterexamples—such as similar mortality patterns in Western countries—leading to confirmation bias.
2. Correlation vs. Causation Fallacy<br /> It assumes a direct causal link between delayed vaccinations and excess mortality without considering other influencing factors, such as economic restrictions, healthcare efficiency, and prior infection rates.
3. Oversimplification of a Complex Issue<br /> By attributing Iran’s COVID-19 response largely to populism, the article overlooks the fact that mortality spikes occurred in Germany, the US, and other non-populist-led countries. A more nuanced analysis is needed.
D. Broader Issues with the Scope of the Article<br /> 1. Disproportionate Focus on Iran<br /> If populist science is a global issue, why is Iran the only case study? A comparative approach—including countries like the US, Brazil, and Poland—would strengthen the argument.
2. Lack of Practical Solutions<br /> The article critiques Iran’s handling of the pandemic but does not propose strategies to combat misinformation and improve public health responses globally.
3. Limited and Selective Data Sources<br /> The article relies heavily on The Economist and WHO while neglecting independent organizations such as the CDC and regional research institutions. A broader range of data sources would improve credibility.
E. Additional Criticism of the Core Argument<br /> 1. Populism Beyond Iran<br /> Research, including the PANCOPOP study, shows that right-wing populism influenced pandemic responses in the US, Brazil, Poland, and Serbia. The article’s exclusive focus on Iran suggests political bias rather than an objective analysis of populism in global public health.
2. Contradictions in the Populism Model<br /> The article argues that Iran exhibited both the denialist model (seen in the US and Brazil) and the authoritarian control model (similar to Poland and Serbia). These models, however, are distinct and mutually exclusive in the PANCOPOP framework, making this assertion contradictory.
3. Absence of Comparative Analysis<br /> The study lacks a global perspective on how different forms of populism shaped pandemic policies, weakening its claim that Iran’s case is uniquely alarming.
4. Misattribution of Vaccination Delays Solely to Populism<br /> The article ignores other major contributing factors, such as:
Economic Sanctions: Restrictions on vaccine imports.<br /> Vaccine Hesitancy: Public resistance to certain vaccines.<br /> Domestic Vaccine Development: Initial reliance on homegrown vaccines before shifting to imports.<br /> By overlooking these aspects, the article oversimplifies the reasons behind Iran’s vaccination timeline.
5. Failure to Address Global Media Influence<br /> Studies have demonstrated that misinformation on COVID-19 spread across multiple countries, yet the article singles out Iran without discussing similar issues in other regions.
6. Statistical Flaws<br /> The ANOVA and Prophet model are misapplied, limiting the validity of conclusions.<br /> A lack of multivariate regression fails to control for external factors influencing pandemic outcomes.
Conclusion<br /> The article presents a flawed and unbalanced analysis of how populism influenced Iran’s COVID-19 response.
Key Weaknesses:<br /> Selective use of data that aligns with the author's argument while ignoring broader trends.<br /> Lack of comparative analysis, failing to place Iran’s case within a global context.<br /> Misuse of statistical methods, leading to questionable conclusions.<br /> Recommendations for a Stronger Study:<br /> A multi-country analysis incorporating nations with varying political ideologies.<br /> Consideration of alternative explanations for mortality trends, such as healthcare infrastructure and economic factors.<br /> A transparent and methodologically sound approach to data interpretation.<br /> A truly robust and objective study would examine multiple countries, account for confounding variables, and avoid overgeneralizing populism’s impact on public health outcomes.
On 2025-03-26 17:07:23, user Paul wrote:
This article has been published in BMJ Global Health <br /> https://doi.org/10.1136/bmjgh-2024-016607
On 2025-03-27 14:39:57, user DrZafar IqbalPhD wrote:
The peer-reviewed version of this article is available at https://www.jptcp.com/index.php/jptcp/article/view/7959 .
On 2025-03-31 17:53:49, user Magalhaes Borges, Vini wrote:
The final version of this paper has been published on the JAHA webpage: https://www.ahajournals.org/doi/full/10.1161/JAHA.124.036193
On 2025-04-08 22:57:10, user Isada’s Biggest-Fan wrote:
It is very irresponsible that Cleveland Clinic ID made this available without peer review. There are multiple design flaws affecting the integrity of the study.
If the vaccine is mandated (all participants were CCF health workers) and we stop following workers who have been “terminated”, it is very possible that those counted as “unvaccinated” are less likely to have the event because they were terminated/censored.
We are given little to no detail as to why some workers are getting vaccinated later than others (do those who get vaccinated have more clinical roles and therefore higher exposure?).
The discussion starts with “This study found a significantly higher risk of influenza among the vaccinated compared to the unvaccinated state in northern Ohio during the 2024-2025 influenza season.”
On 2025-04-19 16:15:06, user Jan Stratil wrote:
I find it quite strange that neither the point estimate nor the confidence interval changes in any way after including two statistically significant variables (sex, occupation). Can it truly be that they are not associated at all with the decision to get the vaccine?
Typo?
On 2025-04-10 01:40:58, user Andrew Webb wrote:
Now published: https://link.springer.com/article/10.1007/s12028-025-02243-y
On 2025-04-16 20:15:51, user David Gorla wrote:
This is a very interesting article and we thank the authors for this pre-print. However, there is no discussion about the relevance of dengue, especially in Latin America. Living here, I would be delighted to see evidence of a method to prevent dengue outbreaks, which unfortunately have been hitting this region badly, without control, during the last 20 years. I have followed the publications of the WMP with much interest, and I have to say I have too many doubts to agree with the authors' interpretations, both in past articles and in the present one. When dengue incidence is included, the main weakness of all WMP articles (except the Yogyakarta one, and that would be another discussion) is that the study designs used in the publications do not allow solid interpretations of the results. All of them rely on comparisons of the temporal variation in dengue incidence, which we know is very difficult to explain either in time or in space, leaving room for too many interpretations for and against the presented results.
In the present case, Supplementary Figure S3, “Annual dengue incidence in Niterói between 2007 and 2024” (18 years of data), is quite remarkable. Leaving aside incidence between 2019 and 2023 (5 years), identified as “city-wide Wolbachia deployment”, leaves 13 years of data. Dengue incidence in 2024 (after Wolbachia release) is lower than incidence in 6 previous years (2007, 2008, 2011, 2013, 2014 and 2016) and higher than incidence in the other 6 previous years of the period. So, one could say that 2024 was an average year for dengue incidence relative to 2007-2018, a period without any Wolbachia influence. Additionally, it is at least an order of magnitude higher than the incidence during the Wolbachia deployment period.
Authors argue that this case is in line with the results of the Colombian case they published some time ago. In the Colombian case, data on Wolbachia infection show wide variability, never quite reaching the magical 60% infection level that should be reached to sustain the introgression (a discussion of this case is being prepared for publication elsewhere, although you can have a look at https://davidgorla.substack.com/p/is-dengue-really-controlled-using).
Anders et al. recognize the incomplete blocking of dengue transmission by Wolbachia-infected Aedes aegypti (page 9). It was shown that this is especially the case with DEN1 (one of the most frequent strains affecting Latin America). Adding to that is the evidence (shown in a number of published articles) of loss of Wolbachia infection because of high temperatures during summer months, etc.
Wrapping up, the released Wolbitos show loss of infection during summer months or complete loss of infection, incomplete virus blocking (especially of DEN1), and no convincing impact on dengue incidence between treated and untreated areas.
So, if I were to review this article I would reject it based on the lack of data supporting the claims the authors make. And I would also suggest that health authorities of any country considering this dengue control technique ask for much clearer evidence, not only regarding the WMP's claimed results, but also evidence that the released Wolbitos are not worsening dengue transmission.
On 2025-04-22 19:17:14, user chalchew wrote:
Over the past ten years, I have worked on arboviral diseases. The Ethiopian Public Health Institute regularly collects samples from sentinel sites and performs a triplex test (DENV, CHIK, & ZIKV). Most PCR results have been negative for CHIK and ZIKV. Therefore, it would be prudent to verify the kit performance before disseminating findings to the larger group, especially since my follow-up in the region indicates no clinical manifestation of these viruses despite a reported prevalence of 38% for ZIKV. Consequently, the government should consider implementing corrective measures, including health education.
On 2025-04-28 08:39:14, user Tom Hähnel wrote:
Published version available under:<br /> https://www.nature.com/articles/s41531-025-00923-2 <br /> DOI: 10.1038/s41531-025-00923-2
On 2025-04-28 10:51:23, user Hazel A Smith wrote:
This immunisation programme is also delivered in Children’s Health Ireland (CHI) Temple Street and Crumlin. I appreciate that many of these neonates will be high risk.
With hospitalization, these would be ward, NICU (CHI at Crumlin) and PICU admissions combined? Where (i.e., in which setting within the hospital) was the effect seen? The overall lower rate may, or may not, mask continued high RSV admission rates in CHI at Crumlin's NICU and PICUs. Also, the NICU in Crumlin is not that long established, so previously all ICU admissions were to the PICU.
I am struggling to read the figures for age-specific admissions (but this is my limitation with stats, and I can't expand the images, which is just how the preview is displayed), so I can't tell which age group benefited the most. We know that the younger the infant, the more likely it is to be not only a ward admission but a PICU admission.
Can you look at duration of hospitalization? As it could be that even if admitted you stayed for a shorter duration. Also, could be that if admitted it was to the ward and not PICU as would have previously happened (especially for those two months old or younger)?
With the data the HSE has to hand, are there any concerns about how neonates/infants of mothers vaccinated in pregnancy in Northern Ireland but cared for in an RoI hospital will be managed? It could be that the numbers are so small that there is no effect (and the time it would take to clean is not of value).
If 532 RSV hospitalisations were averted, then how many operations etc. were not cancelled (compared to previous years) due to reduced pressure on beds, or did the flu figures for last winter replace the 532 hospitalisations?
On 2025-04-29 13:25:49, user Hanna wrote:
Link to the published chapter: https://doi.org/10.1016/bs.pbr.2024.12.001
On 2025-04-30 18:19:45, user James Pirruccello wrote:
This was published in 2024 in the EHJ at https://academic.oup.com/eurheartj/article-abstract/45/40/4318/7731683
On 2025-05-15 16:07:41, user Autoimmune neurology wrote:
This article was published in Frontiers Immunology https://www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2025.1543781/full
On 2025-05-19 13:07:08, user Jspr Saris wrote:
I am pleasantly surprised by the work described, its methods and results. <br /> At base I don't have strong ideas on these. However, I do wonder whether age and/or severity should be included in the clinical follow-up.
Regarding the discussion: I am missing the point above in the discussion (i.e., lines 431 & 440), although line 444 hints at it slightly.<br /> In line 453 I am missing a reference or the like, e.g. the Dutch VKGL community variant-sharing database.
Regarding line 642 and onwards, work done by the twin study center in Amsterdam could be of help, specifically the work of J. van Dongen on the epigenetic mark found in monozygotic twins of known age. This mark diminishes with age, but is present and detectable up to 55 years of age.
A suggestion for further work would be to compare the synthetic dataset to an age-versus-severity ratio or weighting.
On 2025-05-28 15:43:41, user Peter Fino wrote:
This paper has now been published in the Journal of Neurotrauma with a slightly revised title. The citation for the peer-reviewed publication is below.
Fino PC, Antonellis P, Parrington L, Weightman MM, Dibble LE, Lester ME, Hoppes CW, King LA. Objective Turning Measures Improve Diagnostic Accuracy and Relate to Simulated Real-World Mobility/Combat Readiness in Chronic Mild Traumatic Brain Injury. J Neurotrauma. 2025 Mar 26. doi: 10.1089/neu.2024.0127. Epub ahead of print. PMID: 40135290.
On 2025-06-04 07:18:29, user Proud PhD supervisor wrote:
Huge congratulations to Julien Paris for making both the analytical and genotype datasets openly available to the community. It’s a fantastic step toward promoting transparency and open science, all while meeting the highest standards of data protection under the CEPD and EDPB G29 guidelines. Initiatives like this really help drive collaborative research forward. https://github.com/jp3142/OFSEP_HD_public_avatar_dataset
On 2025-06-09 09:56:48, user Chris Kirk wrote:
Were these analyses completed using the absolute data or the relative data from each study? Using the relative data would mean that the larger body mass and the taller stature of the transwomen (male) athletes would be a confound that would invalidate the results. Please discuss.
On 2025-06-11 02:25:36, user Stephen Jones wrote:
Please be advised that this preprint is now accepted as of June 10, 2025 for publication in Imaging Neuroscience as article number IMAG-25-0074R1 and has been “put into production for copyediting and ‘ahead-of-publication’ posting” and will be appearing online in 2 or 3 weeks. <br /> BR, Stephen C. Jones
On 2025-06-17 22:16:28, user Mamadu Baldeh wrote:
This article has been peer-reviewed and published: https://doi.org/10.1371/journal.pone.0324064
On 2025-07-02 03:29:32, user David wrote:
There appears to be a considerably greater gain in both body weight and total body fat from months 2-6 in the feijoa group vs the control group. This seems important given the participants are no longer on controlled diets. Is it possible to include p values for these differences, to see if they are significant or bordering on significant? Both parameters are tending in the same direction and, given these were the primary outcome, would it be worth including in the discussion possible reasons why these increased compared to the placebo? Perhaps the feijoa contained more calories than the placebo?
On 2025-07-08 07:44:33, user peiyuan zhao wrote:
Really impressive work — I learned a lot from it!<br /> I'm especially excited about how dense, multimodal digital trace data can open up new perspectives, theories, and methodological innovations in understanding human behavior and habits.<br /> All the best as the project develops and expands.
On 2025-07-09 03:42:51, user Alejandro Arbona wrote:
This preprint has been published https://doi.org/10.1016/j.jad.2024.12.030