On 2021-12-26 08:33:43, user Jack Bean wrote:
"asymptomatic transmission" Really. No evidence for asymptomatic transmission. https://www.nature.com/arti...
On 2021-12-29 01:23:50, user lowell2 wrote:
"The negative estimates in the final period arguably suggest different behaviour and/or exposure patterns in the vaccinated and unvaccinated cohorts causing underestimation of the VE." -- uh, is there evidence that suddenly at 91 days people started behaving differently than they did in the previous 90 days? This conclusion in the discussion has no substantiation whatsoever. Maybe the vaccine just didn't work after that, regardless of what people did or didn't do.
On 2021-12-31 02:47:03, user Robert Crombie wrote:
Is the same true for AstraZeneca?
If AstraZeneca is less protective, why aren't the authorities warning those who have been given it?
On 2021-12-31 07:01:37, user Georg Neeby wrote:
Hi, I was surprised to see your paper; it was mentioned in a conspiracy blog, https://childrenshealthdefe...
It's being used as a reason not to get vaccinated, and for children specifically not to get vaccinated. I took a look at your data, and there is nothing to your paper. It's all just noise; nothing would be reproducible. Your claims massively overstate your data, which basically show no difference, and you haven't controlled for batch effects. Given how this information is being used by nefarious players, you should repeat the entire study with adequate power; I bet none of your new data will show the same trends as your old data.
On 2022-01-02 21:25:37, user madmathemagician wrote:
In its 4th revision, part of the title changes from an unsubstantiated, suggestive question ("Are COVID-19 data reliable?") which the article fails to answer, to "Applying Benford's law to COVID-19 data: ...". The article mentions the conditions of applicability of Benford's law but does not test these assumptions on the source data, and "compensates" by making weak but suggestive claims and recommending further research.
I'd recommend the author not do any further research.
On 2022-01-02 22:34:18, user madmathemagician wrote:
The article does not discuss substantial differences in reporting by country in the data set used: some countries report only weekly, or not on weekends.
I guess the author could add another "disclaimer" about this issue and create a new revision, perhaps even discussing how it would affect the fit to a Benford distribution.
However, even if this issue were properly addressed, the article would still be applying statistics whose conditions do not hold, concluding nothing, and making weak but suggestive claims.
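The applicability point raised above can be checked before fitting, rather than asserted. A minimal Python sketch on synthetic data (my own illustration, not the article's method): counts spanning several orders of magnitude conform to Benford's law, while narrow-range data do not, and a simple goodness-of-fit statistic separates the two cases.

```python
import math
import random
from collections import Counter

# Expected Benford first-digit probabilities: P(d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    return int(str(abs(int(x)))[0])

def benford_chi2(values):
    """Chi-square statistic of observed first digits vs. Benford's law."""
    digits = [first_digit(v) for v in values if int(v) != 0]
    counts = Counter(digits)
    n = len(digits)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in benford.items())

random.seed(0)
# Data spanning several orders of magnitude (a condition for Benford
# applicability) conform closely...
wide = [10 ** random.uniform(0, 5) for _ in range(5000)]
# ...while narrow-range data (e.g. uniform on [100, 999]) do not.
narrow = [random.uniform(100, 999) for _ in range(5000)]
assert benford_chi2(wide) < benford_chi2(narrow)
```

Running the same statistic on each country's reported series, before drawing any conclusion from Benford deviations, is exactly the assumption check the comment asks for.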
On 2022-01-04 00:38:33, user Leicesterboy wrote:
This study looks at 29,000 Omicron patients, but the authors seem unable to find the courage to look at the statistics for "risk of death" independently, preferring to conflate it with other outcomes, like ICU admission or hospitalization. Why? Isn't that the outcome most of us are interested in and fear? By the way, I'd also be keen to see people be more definitive about the word "hospitalizations" with COVID. We now know that hospitalizations arising from COVID are conflated with hospitalizations with COVID, meaning those admitted for an ankle sprain and discharged quickly are included in the "hospitalizations" numbers. Clearly this is meaningless if the purpose of the analysis is to look at risk from COVID. The fact that someone with a sprained ankle had asymptomatic COVID has absolutely no interest for me. It carries zero information about health risk and potential outcomes for me.
On 2022-01-05 09:52:18, user Zacharias Fögen wrote:
Dear Authors,
Thank you for this in-depth analysis, yet you draw conclusions that lack physiological plausibility.
First, you draw conclusions from serum neutralisation to transmissibility. As any SARS-CoV-2 variant replicates in nasopharyngeal epithelial tissue, and protection from these kinds of viruses relies solely on sIgA or IgM but not on monomeric IgG, you cannot draw this conclusion.
This has been well researched in animal coronavirus vaccinations and is also the reason why polio vaccines are given p.o. instead of s.c. in endemic regions.
Furthermore, you describe the variant as hyper-transmissible, yet again there is no physiological explanation for this. Entry into epithelial cells is only a micrometre of the distance needed to move from one person to another. Changes in the spike protein have no effect on human-to-human transmission.
Furthermore, you are relying on studies which do not control for differences in contacts between the group now infected with the Omicron variant and those previously infected with the Delta variant. There are fewer restrictions now in England and Denmark than when the Delta variant arrived. Statistics also show that the 20-30 age group is primarily infected with Omicron, and as those have the most variable contacts, there is a clear indication of bias.
Best,
Zacharias Fögen
On 2022-01-05 16:22:39, user Ryan Anderson wrote:
How long until the booster protection equals zero?
On 2022-01-08 01:52:21, user Danuta Skowronski wrote:
When reporting paradoxical findings (e.g. negative vaccine effectiveness (VE)) based upon an observational study design, the first explanation to be considered is methodological bias (i.e. study weakness). Only after due diligence investigation of that most likely hypothesis can a possible biological effect (e.g. increased vaccine-associated risk) then be considered. The pre-print posted here does not provide that due diligence check.
An underlying requirement for valid VE estimation by any observational study (including the test-negative design) is comparable exposure risk between vaccinated and unvaccinated participants. However, vaccine passports have permitted broader social mixing by vaccinated compared to unvaccinated people. There is thus good reason to suspect that the vaccinated and unvaccinated are no longer at comparable likelihood of exposure. Higher exposure risk and therefore spuriously increased likelihood of vaccinated individuals contributing to the case series would negatively affect VE estimates due to behavioural rather than biological differences.
Another underlying requirement for valid VE estimation is comparable case ascertainment between vaccinated and unvaccinated participants. The test-negative design standardizes for the likelihood of being tested as an advantage over other observational (e.g. cohort) study designs but the likelihood of being found a case is not the same across multiple different reasons for being tested (i.e. testing indication), which may differ between vaccinated and unvaccinated people. Testing indications with different pre-test likelihoods of being positive include symptomatic illness vs. asymptomatic exposure vs. being part of an outbreak vs. routine pre-travel, workplace or pre-hospital admission screening etc. The recent deployment of rapid antigen testing, followed by confirmatory PCR testing, also affects VE estimates in uncertain ways. The pre-print posted here provides overall VE estimates against any infection in any age group, pooling these multiple testing indications. As such, selection bias remains one of the foremost explanations for their paradoxical findings.
We urge extreme caution before accepting paradoxical negative VE estimates at face value based on any observational study that has not addressed the above methodological issues.
Danuta M Skowronski MD, FRCPC
BC Centre for Disease Control
Vancouver, British Columbia
Canada
and
Gaston De Serres MD, PhD
Institut national de santé publique du Québec
Quebec City, Quebec
Canada
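The exposure-bias mechanism described in the comment above can be illustrated with a toy calculation (all numbers are assumed for illustration; nothing here comes from the pre-print or its authors):

```python
# Assumed numbers: a vaccine that truly prevents 30% of infections, while
# vaccinated people mix twice as much (e.g. because of vaccine passports).
true_ve = 0.30        # true biological vaccine effectiveness
exposure_ratio = 2.0  # exposure of vaccinated vs. unvaccinated people

# What the data would show: biology and behaviour multiply together.
observed_rr = exposure_ratio * (1 - true_ve)  # observed relative risk of infection
estimated_ve = 1 - observed_rr                # naive VE estimate

# A truly protective vaccine appears to have "negative effectiveness"
# purely because of differential exposure.
assert true_ve > 0 and estimated_ve < 0
```

With these toy inputs the naive estimate comes out at -40% despite a genuinely protective vaccine, which is exactly the kind of paradoxical result the comment warns against taking at face value.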
On 2022-01-05 19:42:22, user Christopher Hickie wrote:
Sorry, your paper is not valid. Omicron was not dominant in the US until the week of Christmas, so your two-week sampling window ending Christmas week is far too premature. Please retract your preprint and wait several months before doing this analysis.
On 2022-01-08 00:34:01, user AC wrote:
If ~40% of prevalent cases during the “omicron emergent” window are delta, why would severe outcomes be less than 40% what they were in the previous time window? Even if the severe effects were concentrated entirely among delta patients and none from omicron, these rates are still lower than expected. This suggests to me there could be an unidentified confounding factor.
On 2022-01-07 16:35:19, user Toby Koch wrote:
No adjustments for age, vaccination status, and variant for risk of hospitalization and death?
On 2022-01-08 03:26:25, user Neven Karlovac wrote:
Interesting article, but the author's explanation of the negative infectivity seems like arbitrary speculation, and the article would be better without it.
On 2022-01-24 21:23:22, user KBNJ wrote:
Am I wrong in thinking it's entirely possible that the negative effectiveness of the vaccine for recovered people is real (biological), and not behavioral? Our bodies don't have unlimited immune resources. If vaccines induce an immune response that is less effective than recovered immunity at fighting an evolved variant (which various studies have shown at least for Delta), I would think this would be expected, and not a surprise.
On 2022-01-08 08:27:05, user Menno Schaap wrote:
Significant contribution! Self-tests are promoted for testing oneself before an event or before visiting relatives. But if the subject has no symptoms, reliability is only 22.6%. The perceived feeling of safety is therefore questionable, and people should realize that.
On 2022-01-08 18:02:39, user Eithan Galun wrote:
We are waiting for comments and critiques.
Eithan Galun, on behalf of all authors
On 2022-01-10 09:16:10, user Roger Helgesen wrote:
How accurate are self-reported measures in determining actual illness?
Those who know that they have had a positive Covid test might report symptoms more often than those who know they have not, due to confirmation bias.
On 2022-01-10 09:51:12, user RBNZ wrote:
typo : "Berlina and Bill Gates Foundation"
On 2022-01-13 17:45:28, user T S wrote:
There is a calculation error in Table S8 for the Hispanic noSGTF. The percentage listed in parenthesis is incorrect and appears to be a transposition of the line above it for Black non-hispanic.
Also, I would hate for this article to appear biased upon publication. Listing the actual value for the increased risk of Omicron infection as compared to Delta for people with a prior Covid diagnosis ("4.45 (3.24-6.12) fold higher"), yet giving only the general statement "higher" when presenting the same comparison for prior vaccination, leads an astute reader to assume the authors are using framing to present the data in favor of vaccines. With our top health officials playing politician by misrepresenting studies and dodging questions, it is ever more important that actual scientists and peer-reviewed studies are above reproach, or we risk further deterioration of the rapidly declining public trust in scientists and studies.
"Among cases first ascertained in outpatient settings, adjusted odds of documented prior SARS-CoV-2 infection >=90 days before individuals’ first positive test during the study period were 4.45 (3.24-6.12) fold higher among cases with Omicron variant infections than among cases with Delta variant infections (Figure 1; Table 2).
Similarly, adjusted odds of prior receipt of each vaccine series (1, 2, or 3 doses of BNT162b2/mRNA-1973, or Ad.26.COV2.S with or without a booster dose of any vaccine) were higher among cases with Omicron as compared to Delta variant infections."
On 2022-01-16 04:06:03, user FreedomForEvar wrote:
I would like to know how many of the people in these cohorts had previous infections, i.e. natural immunity (the hospitals can tell), and which variant those people had, Delta or Omicron?
https://www.medrxiv.org/con...
Death rate, Los Angeles County, Omicron: 1 death out of 57,000, that is 0.0018%.
I recommend you take a look at this study, which actually includes the naturally immune. Start acknowledging what has been around since the very beginning of human life.
Also, death rates per the WHO:
7 days ending Jan 11: worldwide death rate 0.24%; USA, same period, 0.24%; California, same period, 0.09%.
1st week of December: USA death rate 0.90%, per the WHO.
Data from England and other countries show that the vaccinated are catching the Omicron/Delta virus 80% of the time.
It's time to figure out why the vaccine for COVID-19 is no longer working. What in Delta and what in Omicron is causing this? What in the vaccine is causing this?
On 2022-01-19 19:56:15, user Victor Yman wrote:
This is a preprint of an article published in Nature Communications. The final authenticated version is available online at: https://doi.org/10.1038/s41...
On 2022-01-23 20:43:42, user ELSA GRENMYR wrote:
Did you verify that all patients infected with VOC-Delta and VOC-Omicron were SARS-CoV-2 naive, or could there be a mix of convalescent and naive individuals? How would the infectious viral titres look in a re-infected cohort?
On 2022-02-01 12:17:22, user Daniel Anthony wrote:
While the primary endpoint may have been missed, the shorter illness course and the attenuation of loss of smell and taste are very interesting and warrant further investigation. It also suggests that the mode of action is unlikely to be related to the inhibition of TMPRSS2 to block Spike activation.
On 2022-02-01 17:13:38, user Ilya Gordeychuk wrote:
A disclaimer. I'm employed at the Chumakov Center, the developer and manufacturer of CoviVac.
First of all, thank you for your work. Clinical description and assessment of cases of symptomatic SARS-CoV-2 infection in vaccinated people during the circulation of emerging virus variants are essential both for the general public and for the healthcare system.
Still, I think some interpretations require further clarification. The first two questions that come to mind after reading the paper are:
Did you do any genome sequencing of the virus isolates during your work? It appears that you assume those cases were all caused by the Delta variant based only on general epidemiological data saying that Delta was predominant in St. Petersburg during the period of observation. If so, the title of the study may be a bit misleading, as it states that those cases were all Delta.
You state throughout the manuscript that you see significant differences between the effectiveness of the three vaccines, but these interpretations are not supported by the data.
Namely, you state in the abstract that "In contrast to other Russian vaccines, Gam-COVID-Vac is effective against symptomatic SARS-CoV-2 infection caused by Delta VOC", on page 8 that "CoviVac usefulness is also doubtful", etc. At the same time, there are no data in the paper supporting this statement. There is no statistical comparison between the CoviVac group and the other vaccine groups. Moreover, I performed a statistical assessment of the data presented in Table A1, and there is no statistically significant difference between the CoviVac group and the Gam-COVID-Vac group, so the data presented in this table directly contradict your interpretation of the data.
Best regards,
Ilya Gordeychuk
On 2022-02-06 23:31:27, user Danilo Vieira wrote:
I consider that the lack of education in PGx among clinicians makes implementation difficult, but I don't think this barrier is 'extremely' relevant.
On 2022-02-21 11:05:51, user diveoceanos wrote:
Studies 4 through 6 are matched-cohort analyses of Ct values, comparing group 2 (unvaccinated and reinfected) against unvaccinated and infected individuals, individuals with breakthrough infections after the BNT162b2 vaccine, and individuals with breakthrough infections after the mRNA-1273 vaccine, respectively.
Based on the data, the mean Ct value is higher for the unvaccinated and reinfected individuals than for the matched cohort in all studies, with studies 4 and 5 reaching statistical significance, while in study 6 the P-value is 0.104, indicating no statistically significant difference.
In the text the authors are ranking the infectiousness in order of decreased magnitude in line with their findings i.e.
“The different comparisons suggest an overall hierarchy, present for both asymptomatic and symptomatic infections, where primary infections in unvaccinated persons are most infectious, followed by BNT162b2 breakthrough infections, mRNA-1273 breakthrough infections, and finally reinfections in unvaccinated persons.”
Figure 2 is clearly showing that reinfections are associated with higher Ct compared to all other studied groups.
However, there is misleading information in tables 4 and 5. Specifically, the last two rows of tables 4 and 5 state that the infectiousness of breakthrough infections is lower than the infectiousness of reinfections in unvaccinated individuals:
• Infectiousness of BNT162b2-vaccine breakthrough infections relative to reinfections in unvaccinated individuals
• Infectiousness of mRNA-1273-vaccine breakthrough infections relative to reinfections in unvaccinated individuals
Either the line descriptions should change to reflect the correct ratio (i.e. infectiousness of reinfections in unvaccinated individuals over that of breakthrough infections), or the relative infectiousness should be recalculated to match the line descriptions.
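A minimal sketch of the labelling issue, with purely hypothetical infectiousness rates (not the paper's values), just to show how a flipped ratio reverses the stated conclusion:

```python
import math

# Hypothetical rates, chosen only for illustration.
rate_reinfection = 0.5   # reinfections in unvaccinated individuals
rate_breakthrough = 0.8  # vaccine-breakthrough infections

# The row label "infectiousness of breakthrough infections relative to
# reinfections" implies the ratio breakthrough / reinfection:
rr_labelled = rate_breakthrough / rate_reinfection  # > 1: breakthrough MORE infectious
# Reporting the reciprocal under that label inverts the conclusion:
rr_flipped = rate_reinfection / rate_breakthrough   # < 1: breakthrough LESS infectious

# The two are reciprocals, so the label and the number must be kept aligned.
assert math.isclose(rr_labelled * rr_flipped, 1.0)
```

Whichever direction the authors intend, the fix is mechanical: either rename the rows or take the reciprocal of the reported ratios.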
On 2022-03-07 04:32:04, user Bidossessi Wilfried Hounkpe wrote:
Preprint status: This study is under review at Experimental Biology and Medicine.
On 2022-04-25 15:00:55, user Nevo Itzhak wrote:
This manuscript has been published in the Journal of Urban Health (DOI: https://doi.org/10.1007/s11524-021-00601-7).
Please cite only the published version!
On 2020-03-29 18:14:17, user Sinai Immunol Review Project wrote:
Main findings:
This study examined the incidence of diarrhea in patients infected with SARS-CoV-2 across three recently published cohorts and found statistically significant differences by Fisher's exact test. They report that this could be due to subjective diagnostic criteria for diarrhea or to patients first seeking medical care from a gastroenterologist. In order to minimize nosocomial infections arising from unsuspected patients with diarrhea and to gain a comprehensive understanding of the transmission routes of this viral pathogen, they compared the transcriptional levels of ACE2 across various human tissues from the NCBI public database as well as in small intestine tissue from C57BL/6 mice using single-cell sequencing. They show that ACE2 expression is not only increased in the human small intestine, but demonstrate a particular increase in mouse enterocytes positioned on the surface of the intestinal lining exposed to viral pathogens. Given that ACE2 is the viral receptor for SARS-CoV-2 and is also reported to regulate diarrhea, their data suggest the small intestine as a potential transmission route and diarrhea as a potentially underestimated symptom in COVID-19 patients that must be carefully monitored. Interestingly, however, they show that ACE2 expression is not elevated in human lung tissue.
Limitations of the Study:
Although this study demonstrates a statistical difference in the incidence of diarrhea across three separate COVID-19 patient cohorts, the conclusions are limited by small sample sizes. Specifically, the p-value computed by Fisher's exact test rests on a single patient cohort of only six cases, of which 33% are reported to have diarrhea, while the remaining two larger cohorts, with 41 and 99 cases, report 3% and 2% diarrhea incidence, respectively. Despite showing significance, they would need larger sample sizes and cohorts to minimize random variability and draw meaningful conclusions. Furthermore, they do not address why ACE2 expression is not elevated in human lung tissue despite it being a major established route of transmission for SARS-CoV-2. It could be helpful to validate this result by looking at ACE2 expression in mouse lung tissue. Finally, although this study is descriptive and shows elevated ACE2 expression in small intestinal epithelial cells, it does not establish a mechanistic link to SARS-CoV-2 infection of the host. Overall, the claim that infected patients exhibiting diarrhea pose an increased risk to hospital staff needs to be further substantiated.
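The fragility of that p-value can be checked directly. A minimal Python sketch, with counts reconstructed from the percentages quoted above (my assumption: 2/6 in the small cohort, 1/41 and 2/99 in the larger cohorts, pooled to 3/140):

```python
from math import comb

# Counts reconstructed from the quoted percentages (an assumption, not the
# paper's raw data).
a, b = 2, 4    # small cohort: diarrhea / no diarrhea
c, d = 3, 137  # pooled larger cohorts: diarrhea / no diarrhea

N = a + b + c + d  # 146 patients overall
K = a + c          # 5 diarrhea cases overall
n = a + b          # 6 patients in the small cohort

def hypergeom_pmf(k):
    """P(exactly k diarrhea cases land in the small cohort, margins fixed)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# One-sided Fisher exact p-value: 2 or more diarrhea cases in the small cohort.
p_one_sided = sum(hypergeom_pmf(k) for k in range(a, min(K, n) + 1))
# p is about 0.013: nominally significant, but it rests on two patients;
# remove a single case and the "significance" disappears.
```

This makes the review's point concrete: the entire statistical signal hinges on two diarrhea cases in a six-patient cohort.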
Relevance:
This study provides a possible transmission route and a potentially underappreciated clinical symptom of SARS-CoV-2, for better clinical management and control of COVID-19.
On 2020-04-01 13:40:40, user Sinai Immunol Review Project wrote:
Title: Correlation between universal BCG vaccination policy and reduced morbidity and mortality for COVID-19: an epidemiological study
Keywords: BCG vaccine – epidemiology – vaccination policy
Main findings: The authors compared middle- and high-income countries that never had a universal BCG vaccination policy (Italy, Lebanon, the Netherlands, Belgium) with countries with a current policy (low-income countries were excluded from the analysis as their numbers of cases and deaths might be underreported for the moment). Countries that never implemented BCG vaccination have a higher mortality rate than countries with a BCG vaccination policy (16.38 deaths per million people vs 0.78). Next, the authors show that an earlier start of vaccination correlates with a lower number of deaths per million inhabitants. They interpret this as the vaccine protecting a larger fraction of elderly people, who are usually more affected by COVID-19. Moreover, higher numbers of COVID-19 cases were reported in countries that never implemented a universal BCG vaccination policy.
Limitations: While this study aims to test an intriguing hypothesis, unfortunately the data are not sufficient at this time to accurately make any determinations. Several caveats must be noted: not all countries are at the same stage of the pandemic, and the number of cases/deaths is still changing very rapidly in many countries, so the association may only reflect exposure to the virus. This analysis would need to be re-evaluated once all countries are past the pandemic and more accurate numbers are available. Additionally, very few middle- and high-income countries ever implemented universal BCG vaccination, which can be a source of bias (5 countries, vs 55 that have a BCG vaccine policy). The effectiveness of screening and social isolation policies also varied considerably across the countries tested and may be another important confounder. The authors could consider analyzing the case fatality rate (CFR, the % of patients with COVID-19 who die) to better correct for exposure, although testing availability will still bias this result. Comparing mortality within countries or cities with variable vaccination and similar exposure could also be appropriate, although confounders will still be present.
Relevance: The BCG vaccine is a live attenuated strain derived from Mycobacterium bovis, used as a vaccine against tuberculosis (TB). This vaccine has proven effective in preventing childhood TB meningitis, but does not prevent adult TB as efficiently. For this reason, several countries now recommend this vaccine only for at-risk populations.
This study shows that there is a correlation between BCG vaccination policy and reduced mortality from COVID-19. Indeed, the BCG vaccine has been shown to protect against several viruses and enhance innate immunity [1], which could explain why it could protect against SARS-CoV-2 infection, but the exact mechanism is still unknown. Moreover, the efficacy of vaccinating adults/older people and the resulting protection against COVID-19 still need to be assessed. Regarding this, Australian researchers are starting a clinical trial of the BCG vaccine for healthcare workers [2], to assess whether it can protect them against COVID-19.
Review by Emma Risson, part of a project of students, postdocs and faculty at the Immunology Institute of the Icahn School of Medicine, Mount Sinai.
On 2020-04-08 06:20:46, user Vincent Hare wrote:
Logarithmic differences in death rates are also partially explained by the fact that upper-middle- and high-income countries had the virus first, and there is a clear lag of several weeks between infections and deaths. On top of this, lower-income countries with mandatory BCG programs have also pursued more aggressive lockdowns. Both biases, lag and lockdown, need to be factored into this analysis BEFORE the effect is tested for statistical significance. Otherwise, the comparison is more or less meaningless.
On 2020-04-08 07:30:20, user Lara wrote:
It does not appear that the authors adjusted for the number of tests conducted. There is a significant difference between the number of tests conducted in high-income countries like the US (>2 million) and in LMICs like South Africa (1,700 tests). Right now it can't be assumed that BCG is protective when the full scope of the problem is unknown, or in other words, when the true case load is not known.
On 2020-06-30 16:50:34, user Rhyothemis wrote:
Was lysophosphatidylinositol (LPI) measured in this study? I don't see it in the text or figures. A study of SARS patients found elevated LPI:
https://pubmed.ncbi.nlm.nih...
If anyone has information on LPI in covid, please post a reply.
On 2020-07-05 14:07:29, user mike marchywka wrote:
I noted my comments on Twitter re this paper were picked up here. You can see them referenced in context here, fwiw:
https://www.linkedin.com/po...
On 2020-07-09 18:37:18, user K dawg wrote:
Nobody cares about the CFR because it is arbitrarily based on testing availability.
What is the Covid IFR? Looks to be around 0.1% from what I've seen... about like influenza.
On 2019-07-09 23:24:44, user Guyguy wrote:
EVOLUTION OF THE EBOLA EPIDEMIC IN THE PROVINCES OF NORTH KIVU AND ITURI
Tuesday, July 9, 2019
The epidemiological situation of the Ebola Virus Disease dated July 8, 2019:
Since the beginning of the epidemic, the cumulative number of cases is 2,428, of which 2,334 are confirmed and 94 are probable. In total, there were 1,641 deaths (1,547 confirmed and 94 probable) and 683 people healed.
322 suspected cases under investigation;
10 new confirmed cases, including 8 in Beni, 1 in Vuhovi and 1 in Oicha;
11 new confirmed case deaths:
5 community deaths, including 3 in Beni, 1 in Vuhovi and 1 in Oicha;
6 deaths in Ebola Treatment Centers, including 3 in Beni, 1 in Mabalako, 1 in Butembo and 1 in Katwa.
The cumulative number of confirmed / probable cases among health workers is 128 (5% of all confirmed / probable cases) including 40 deaths.
NEWS
Ebola Virus Disease in Uganda
The Ministry of Health of the Republic of Uganda announced that all index case contacts have completed their mandatory 21-day follow-up period without developing signs of the disease. As a result, Ebola transmission in Kasese District was interrupted. As a reminder, the index case was a 5-year-old boy who had traveled with his mother to the burial of his grandfather, who died of Ebola in Aloya, in the health zone of Mabalako.
Uganda has strengthened its border surveillance system. Thus, all travelers coming from the DRC or having traveled to the DRC during the last 21 days must go through sanitary control at Entebbe airport and at the various road and sea entry points of the country.
Source: Ministry of Health press team on the state of the response to the Ebola epidemic in the Democratic Republic of Congo.
On 2019-07-23 17:47:15, user GuyguyKabundi Tshima wrote:
EVOLUTION OF THE EBOLA EPIDEMIC IN THE PROVINCES OF NORTH KIVU AND ITURI
Monday, July 22, 2019
The epidemiological situation of the Ebola Virus Disease dated 21 July 2019:
Since the beginning of the epidemic, the cumulative number of cases is 2,592, of which 2,498 are confirmed and 94 are probable. In total, there were 1,743 deaths (1,649 confirmed and 94 probable) and 729 people healed.
272 suspected cases under investigation;
14 new confirmed cases, including 10 in Beni, 2 in Mandima, 1 in Oicha and 1 in Mutwanga;
6 new confirmed case deaths:
3 community deaths, including 1 in Beni, 1 in Mandima and 1 in Mutwanga;
3 deaths at the Ebola Treatment Center of Beni.
Change in coordination of the response to Ebola Virus Disease
A new arrangement of the Presidency of the Republic announced this Saturday, July 20, 2019, the establishment of a technical secretariat under the direct supervision of the Head of State to coordinate the response against Ebola Virus Disease in North Kivu and Ituri. This technical secretariat is headed by Professor Jean-Jacques Muyembe, who has also been chairman of the laboratory committee in the coordination of the current response since August 2018.
As a result, all communications related to the response will now be managed directly by the Presidency.
Source: The press team of the Ministry of Health.
On 2020-02-10 08:17:02, user zjuliu wrote:
Firstly, I think this should be a milestone for 2019-nCoV research because of the work from Academician Zhong and his colleagues. But one thing I need to point out: this paper included patients from Wuhan, from Hubei (excluding Wuhan), and from other cities outside Hubei, but as we know, the epidemiological trends, mortality, etc. differ considerably across these cohorts. Therefore, I think it would be worth sharing this breakdown in a public database so that scholars can better use these data for further study.
On 2020-02-10 20:08:48, user Marc Bevand wrote:
93.6% of cases (1,029 out of 1,099) are still in the hospital. Their outcome (death or recovery) is not known yet. This is why the case fatality rate observed so far (1.4%) is low.
For comparison, the two other studies, with 41 and 99 cases, had only 17% and 58% of cases still in the hospital at the time of writing. More cases had resolved, which is why their case fatality rates were higher (15% and 11%).
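A back-of-the-envelope sketch of this censoring effect, using the counts quoted above (note the death count of 15 is inferred here from the 1.4% crude CFR, not taken from the paper):

```python
# Counts quoted in the comment above; deaths inferred from the crude CFR.
total_cases = 1099
still_in_hospital = 1029
crude_cfr = 0.014

deaths = round(crude_cfr * total_cases)     # about 15 deaths so far
resolved = total_cases - still_in_hospital  # 70 cases with a known outcome

naive_cfr = deaths / total_cases            # ~1.4%: treats open cases as survivors
resolved_cfr = deaths / resolved            # ~21%: among resolved cases only

# Early in an outbreak these two estimates bracket the eventual CFR;
# neither should be read as the final fatality rate.
assert naive_cfr < resolved_cfr
```

The gap between ~1.4% and ~21% is exactly why CFRs computed mid-outbreak, before most cases have resolved, are not comparable across studies with different follow-up.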
On 2020-02-11 06:09:14, user ) Bill Ford wrote:
They do not know how long it remains potent; they have only observed 24 days so far. What is the total? No one yet knows.
On 2020-02-12 07:04:04, user Marc Bevand wrote:
Table S3 in supplemental data says there are 3665 confirmed patients with 2019-nCoV infections. All the numbers in the table add up to 3665. However the rest of the preprint claims 4021 confirmed patients. What explains this discrepancy?
On 2020-02-13 16:20:21, user Xiaolin Zhu wrote:
We are the authors. We have retrained our model with cases confirmed by Feb. 11 and updated our prediction results. The total number of infections in mainland China would be 72,172 by March 12, 2020 under the current trend. It would be 149,774 in the worst case.
On 2020-02-17 09:19:59, user Ellie_K wrote:
Keep this assessment of R0 values in mind when making comparisons to other contagious diseases, via the CDC in Atlanta: https://wwwnc.cdc.gov/eid/a...
On 2020-02-20 09:21:38, user Linh Ngoc Dinh wrote:
Thanks for sharing your research.
Just a small comment: in an introductory graph, you said "In jurisdictions outside China (and excluding Hong Kong, Macao and Taiwan) the CFR as detailed in the 13 February WHO Report [3] was 1/447 = 0.22% (95% confidence interval (CI) = 0.40% to 1.26%)."
This is quite a misleading statement: the WHO has never given an estimate of the CFR outside China. I think what you cite here is the information that there was 1 death and 447 confirmed cases outside China. You should make this point clear, because as a reader I feel as if the number 0.22% (95% CI: 0.40%-1.26%) is what the WHO said.
Also, I wonder how you arrived at that 95% CI, as we have only one point estimate.
Thanks much!
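A quick way to check the interval this comment questions is an exact (Clopper-Pearson) binomial confidence interval for 1 death in 447 cases. A minimal stdlib sketch, with bisection used in place of a stats library:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) binomial interval, found by bisection."""
    def solve(f):
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower bound solves P(X >= k | p) = alpha/2; upper solves P(X <= k | p) = alpha/2.
    lower = 0.0 if k == 0 else solve(lambda p: binom_cdf(k - 1, n, p) > 1 - alpha / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(1, 447)
print(f"1/447 = {1/447:.2%}, exact 95% CI: ({lo:.4%}, {hi:.3%})")
```

This gives roughly (0.006%, 1.24%): the upper end is close to the quoted 1.26%, but the lower end is far below the quoted 0.40%, which supports the commenter's doubt about how the interval was derived.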
On 2020-02-21 00:19:53, user Leon Yuan wrote:
The link for Supplementary material is not available, please update it, thanks!
On 2020-02-25 12:11:41, user Igor Nesteruk wrote:
Dear colleagues,
Unfortunately, the coronavirus epidemic in Italy is developing very much like we have seen in mainland China (details in my preprint).
http://dx.doi.org/10.13140/...
Only very strict quarantine and safeguards can stop the spread of the infection throughout Europe.
Maybe this information could be useful for your investigations.
Today I found the accumulated number of cases in Italy (229) on the official site of the Italian Health Ministry.
http://www.salute.gov.it/po...
This point was already in the figure from
http://dx.doi.org/10.13140/...
We need the correct and reliable information about the accumulated number of cases. Do you have any links?
Be careful and healthy!
Sincerely yours,
Igor Nesteruk,
On 2020-02-26 19:26:13, user ricwerme wrote:
Did I miss the exported case counts the paper used to determine the internal # of cases, or is it just "three" with assumed missed cases?
The abstract says "suggesting a underlying burden of disease in that country than is indicated by reported cases." Should that be "a greater underlying..."?
On 2020-02-28 02:55:16, user Art Enquirer wrote:
Will it be possible for an AI startup (actually a hackathon team) to secure access to your data? Will you share for the sake of the world? We are also running servers with AI algorithms and wish to pre-test your conclusion.
On 2020-02-28 19:16:00, user Antoine Jomier wrote:
Hello, I am running an AI algorithm start-up company. Would it be possible to share your model or data set so that we can distribute it in France? We would not make any commercial use of it, but would make it available widely to the community. My contact: antoine.jomier@incepto-medical.com<br /> Thanks
On 2020-02-29 18:15:11, user Per Carlbring wrote:
For reference #21 a correction was published in 2016: https://journals.plos.org/p...
On 2020-03-05 09:13:52, user Jørgen K. Kanters wrote:
Important paper, but it needs improvement to be a high flier. First, around 50% of patients had a history of hypertension. In an American population that would mean hypertension is protective. You need an age- and gender-matched control population from the same area to compare with. Furthermore, you miss a very important point: which medications were the patients prescribed before admission? ACE inhibitors and A2 antagonists are the most interesting, again compared to a control population.
On 2020-03-05 16:05:56, user Erik Kulstad wrote:
Thank you for this data. You mention that you excluded patients with mild symptoms who had been transferred to mobile cabin hospitals (as well as patients who had been transferred to other hospitals for advanced life support), but were any of the patients with mild symptoms then re-admitted, to then become patients that were included in your 109 total (or are you able to track)?
On 2020-03-06 15:54:48, user Steven Ge wrote:
Please let me know if you have any questions or suggestions. Twitter @StevenXGe
On 2020-03-13 11:54:03, user Murat Von Marit wrote:
At what temperature were the times you report measured?<br /> 4C? 20C? 40C?<br /> And what about textile, fabric, clothes?
On 2020-03-14 08:04:28, user Stefano Gaburro wrote:
Minimal viral titer for infection: thanks for this great piece of work, which allows governmental bodies to give guidance. One question: the viral titer decreases over time, meaning the virus could be detected but no longer be infectious. Have you determined the minimal viral titer required to establish an infection?
On 2020-03-14 19:27:30, user Halmartin Brown wrote:
Covid-19 Question: What is the risk of the virus being transmitted on paper and mail in general and packages? Are mail and package carriers being tested and are they using gloves? I read it can survive up to a day on cardboard and cash doesn't allow viruses to survive as long.
On 2020-03-15 18:34:58, user tusitw wrote:
Are you also going to study as a function of humidity?<br /> Below LOD, do we know it is still capable of infecting? a question in the same theme as Stefano Gaburro...
On 2020-03-25 17:42:51, user Rudolf Brüggemann wrote:
It is a bit irritating that version 1 and version 2 give different values for the half-lives. Is there an error based on a factor of ln 10 somewhere? The half-lives in Table 1 of the supplement of the published version are much smaller than those in Table 1 of the preprint version. E.g., the median half-life for steel is 13.1 hours in the preprint but 5.63 hours in Table 1 of the Supplement of the published version.
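For what it's worth, the ratio between the two reported medians, 13.1/5.63 ≈ 2.33, is close to ln 10 ≈ 2.30, which is exactly the factor you get by converting a decay slope fitted on log10 titers into a half-life with ln 2 instead of log10 2. A small sketch (the slope value below is hypothetical, purely for illustration):

```python
import math

# Hypothetical decay slope from a linear fit of log10(titer) vs time, in log10 units/hour.
k10 = 0.0535

half_life_log10 = math.log10(2) / k10  # correct conversion for a log10 slope
half_life_ln    = math.log(2) / k10    # conversion that assumes a natural-log slope

# The two conversions differ by exactly ln(10) ~= 2.303 ...
ratio = half_life_ln / half_life_log10

# ... which is close to the discrepancy between the preprint and published tables:
observed = 13.1 / 5.63
print(ratio, observed)
```

This does not prove that a log-base mix-up is the cause, but it is consistent with the factor the comment suspects.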
On 2020-03-17 23:37:34, user RunningThrough wrote:
Given that the study cohort consists of patients admitted to hospital in Wuhan, who presumably all fall in the 'severe' and 'critical' categories of COVID-19 patients per the admission policies we have read about, does the present data suggest that the SARS-CoV-2 virus has higher infectivity among blood group A patients, or that blood group A patients are more likely to develop more severe disease?
On 2020-03-20 17:02:20, user Kevin Hamill wrote:
An editing suggestion:<br /> This manuscript will be read by new media/journalists therefore I would encourage more careful use of the term "significant".
For example where it says "blood group A had a significantly higher risk for COVID-19 " if that was quoted then people would hear this as "blood group A had a much higher risk for COVID-19."
In the same sentence, I would write:<br /> blood group A had approximately 20% higher risk for COVID-19 (odds ratio-OR, 1.20; 95% confidence interval-CI 1.02~1.43, P = 0.02). <br /> [and equivalent changes with the other phrases].
Note that, as you have already stated the p values, using the word significant has no added value; it only provides a source of ambiguity.
On 2020-03-26 07:52:06, user M.E.Valentijn wrote:
Has anyone been able to verify their source claiming 33% prevalence of Type O in the general population? I can't find the journal that's cited for that, and a newer article says 30.2% for Han Chinese, not the nearly 34% claimed here. Though that's Han Chinese in general, not just in Wuhan. Can't find their other sources for normal blood types in the area either.
On 2020-04-15 18:32:13, user Jaime Navarro wrote:
There is a significant flaw in this paper's claim that type A blood types are more susceptible to COVID-19 and type O less: the paper does not address the susceptibility of those with type B or AB blood. If, as the paper suggests, type O blood sees the virus as a type A antigen and so attacks it, shouldn't the same happen in patients with type B or AB? After all, they would have antibodies to type A just as type O people would.
On 2020-03-20 20:57:29, user Sylvie Vullioud wrote:
Could the authors provide information to address these high risks of bias?
-> The pre-print on medRxiv is not being used as a real pre-print to collect feedback for manuscript improvement, as originally intended. Moreover, medRxiv states: 'All preprints posted to medRxiv are accompanied by a prominent statement that the content has not been certified by peer review'.
-> There is an obvious potential conflict of interest, because last author Raoult is editor of the article collection COVID-19 Therapeutic and Prevention in International Journal of Antimicrobial Agents.
-> The International Journal of Antimicrobial Agents is run by Elsevier, which suggests: 'If accepted for publication, we encourage authors to link from the preprint to their formal publication via its Digital Object Identifier (DOI)'.
Could the authors discuss the controversy around the main cited Chinese paper, ref. 8?
According to the paper, allocation of patients to groups was random, yet the treated group averages 51.2 years and the control group 37.3 years?
The article describes three patient conditions: asymptomatic, low and high symptoms. Why?
Patient care, biological and physiological sampling and analyses, and statistical analyses were not blinded. Why?
It appears that no placebo was used. Why?
6 patients out of a total of 42 were excluded from the study: three patients were transferred to the intensive care unit, 1 stopped because of nausea, 1 died, and one left the hospital. <br /> It is written: 'study results presented here are therefore those of 36 patients (20 hydroxychloroquine-treated patients and 16 control patients)'. Why were the patients who died, went to intensive care, or had nausea not included in the statistical treatment? <br /> -> Could this be a selection bias? <br /> -> What about unwanted, very worrying effects of the treatment?
'The protocol, appendices and any other relevant documentation were submitted to the French National Agency for Drug Safety (ANSM) (2020-000890-25) and to the French Ethic Committee (CPP Ile de France) (20.02.28.99113) for reviewing and approved on 5th and 6th March, 2020, respectively'. The pre-print was posted on 20.03.2020, with time points at day 14 for patients.<br /> -> So recruitment and the study started before approval by ANSM and the French Ethics Committee? How is that possible?
How is it plausible that 18 authors all contributed equally to the work? Is it possible to add their respective contributions?
Thank you in advance for considering my questions. <br /> Regards, <br /> Sylvie Vullioud
On 2020-03-22 04:52:08, user Juan B. Gutierrez wrote:
In summary, provided that our Ro is correct, and we are certain it is, as we reused results from our recent peer-reviewed work, https://doi.org/10.1007/s11... in the Bulletin of Mathematical Biology (the premier venue for the discipline), then with the information that we have today, Ro cannot be close to 3.
At the suggestion of Dr. Jeremy Faust, MD, Brigham and Women's Hospital, @jeremyfaust, I modified the most uncertain parameters to produce an Ro of 3. These parameters are the mean infectious periods for symptomatic (lambda_yr) and asymptomatic (lambda_ar) subjects. If we consider the medians of the other parameters to be correct (there is more data), then the mean infectious period of a symptomatic patient would have to be 4.9 days, and that of an asymptomatic patient 4.1 days. These numbers do not match what is happening on the ground. If we reduce alpha, the probability of becoming asymptomatic upon infection, to something less than 0.86, e.g. alpha = 0.5, then the mean infectious period of a symptomatic patient would have to be 3.7 days, and that of an asymptomatic patient 3.1 days.
The reality is that patients are infectious before the onset of symptoms, and the disease lasts more than 3 days in symptomatic patients. The necessary conclusion is that via a computational reductio ad absurdum, and with the information we have today, Ro cannot be close to 3.
On 2020-03-23 21:39:01, user Shayan wrote:
I am wondering what the 4000+ test results refer to, given that there are only 28 patients. Looking at the distribution plots, there seem to be more than 28 data points per biomarker.
On 2020-03-24 14:55:05, user syed aleem wrote:
Excellent paper! Any particular reason to express RBD as a secreted protein?
On 2020-03-25 22:43:03, user Sinai Immunol Review Project wrote:
Title: A serological assay to detect SARS-Cov-2 seroconversion in humans
Immunology keywords: specific serological assay - ELISA - seroconversion - antibody titers
Note: the authors of this review work in the same institution as the authors of the study<br /> Main findings: <br /> Production of recombinant whole Spike (S) protein and the smaller Receptor Binding Domain (RBD) based on the sequence of Wuhan-Hu-1 SARS-CoV-2 isolate. The S protein was modified to allow trimerization and increase stability. The authors compared the antibody reactivity of 59 banked human serum samples (non-exposed) and 3 serum samples from confirmed SARS-CoV-2 infected patients. All Covid-19 patient sera reacted to the S protein and RBD domain compared to the control sera.<br /> The authors also characterized the antibody isotypes from the Covid-19 patients, and observed stronger IgG3 response than IgG1. IgM and IgA responses were also prevalent.
Limitations of the study: The authors analyzed a total of 59 control human serum samples, and samples from only three different patients, to test for reactivity against the RBD domain and full-length spike protein. It will be important to follow up with a larger number of patient samples to confirm the data obtained. Future studies will be required to assess how long after infection this assay can detect anti-CoV-2 antibodies. Finally, while likely, the association of seroconversion with protective immunity against SARS-CoV-2 infection still needs to be fully established.
Relevance: <br /> This study has strong implications for research against SARS-CoV-2. First, it is now possible to perform serosurveys and determine who has been infected, allowing more accurate estimates of infection prevalence and death rate. Second, if it is confirmed that re-infection does not happen (or is rare), this assay can be used as a tool to screen healthcare workers and prioritize immune ones to work with infected patients. Third, potential convalescent plasma donors can now be screened to help treat currently infected patients. Finally, the recombinant proteins described in this study represent new tools that can be used for further applications, including vaccine development.
Review part of a project by students, postdocs and faculty at the Immunology Institute of the Icahn School of Medicine at Mount Sinai.
On 2020-03-24 18:03:39, user Sinai Immunol Review Project wrote:
This study is a cross-sectional analysis of 100 patients with COVID-19 pneumonia, divided into mild (n = 34), severe (n = 34), and critical (n = 32) disease status based on clinical definitions. The criteria used to define disease severity are as follows:
Severe – any of the following: respiratory distress or respiratory rate >= 30 respirations/minute; oxygen saturation <= 93% at rest; oxygen partial pressure (PaO2)/oxygen concentration (FiO2) in arterial blood <= 300mmHg, progression of disease on imaging to >50% lung involvement in the short term.
Critical – any of the following: respiratory failure that requires mechanical ventilation; shock; other organ failure that requires treatment in the ICU.
Patients with pneumonia who test positive for COVID-19 who do not have the symptoms delineated above are considered mild.
Peripheral blood inflammatory markers were correlated with disease status. Disease severity was significantly associated with levels of IL-2R, IL-6, IL-8, IL-10, TNF-α, CRP, ferroprotein, and procalcitonin. Total WBC count, lymphocyte count, neutrophil count, and eosinophil count were also significantly correlated with disease status. Since this is a retrospective, cross-sectional study of clinical laboratory values, these data may be extrapolated for clinical decision making, but without studies of the underlying cellular causes of these changes, this study does not contribute to a deeper understanding of SARS-CoV-2 interactions with the immune system.
It is also notable that the mean age of patients in the mild group was significantly different from the mean ages of patients designated as severe or critical (p < 0.001). The mean patient age was not significantly different between the severe and critical groups. However, IL-6, IL-8, procalcitonin (Table 2), CRP, ferroprotein (Figure 3A, 3B), WBC count, and neutrophil count (Figure 4A, 4B) were all significantly elevated in the critical group compared to the severe group. These data suggest underlying differences in COVID-19 progression that are unrelated to age.
Given the inflammatory profile outlined in this study, patients who have mild or severe COVID-19 pneumonia, who also have any elevations in the inflammatory biomarkers listed above, should be closely monitored for potential progression to critical status.
On 2020-03-26 16:57:03, user S Weeth wrote:
How about soap? Any testing on with soap?
On 2020-03-27 15:01:22, user Sinai Immunol Review Project wrote:
These authors looked at 17 hospitalized patients with COVID-19 confirmed by RT-PCR in Dazhou, Sichuan. Patients were admitted between January 22 and February 10 and the final data were collected on February 11. Of the 17 patients, 12 remained hospitalized while 5 were discharged after meeting national standards. The authors observed no differences based on the sex of the patients but found that the discharged patients were younger in age (p = 0.026) and had higher lymphocyte counts (p = 0.005) and monocyte counts (p = 0.019) upon admission.
This study is limited by its small sample size, and the last data collection point was only one day after some of the patients were admitted.
These findings have been somewhat supported by subsequent studies that show that older age and an immunocompromised state are more likely to result in a more severe clinical course with COVID-19. However, other studies have been published that report on larger numbers of cases.
On 2020-03-27 20:04:02, user Sinai Immunol Review Project wrote:
The authors present a digital PCR (dPCR) diagnostic test for SARS-CoV-2 infection. In 103 individuals that were confirmed in a follow-up to be infected, the standard qPCR test had a positivity rate of 28.2% while the dPCR test detected 87.4% of the infections by detecting an additional 61 positive cases. The authors also tested samples from close contacts (early in infection stage) and convalescing individuals (late in infection stage) and were able to detect SARS-CoV-2 nucleic acid in many more samples using dPCR compared to qPCR.
The authors make a strong case for the need for a highly sensitive and accurate confirmatory method for diagnosing COVID-19 during this outbreak and present a potential addition to the diagnostic arsenal. They propose a dPCR test that they present has a dramatically lower false negative rate than the standard RT-qPCR tests and can be especially beneficial in people with low viral load, whether they are in the earlier or later stages of infection.
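The reported figures are at least internally consistent; a quick arithmetic check (the 29-case count is inferred here from the 28.2% rate, not stated explicitly in the summary above):

```python
n = 103                      # individuals later confirmed infected
qpcr_pos = round(0.282 * n)  # 28.2% positivity for standard qPCR -> 29 cases
dpcr_pos = qpcr_pos + 61     # dPCR detected an additional 61 positives
print(qpcr_pos, dpcr_pos / n)  # 90/103 matches the reported 87.4% dPCR rate
```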
On 2020-03-28 18:07:46, user Ian Timaeus wrote:
I may be being very stupid, but isn't the ACFR formula given in the preprint wrong? Aren't you simply averaging the age-specific CFRs? So don't you want to multiply their sum by n/100, i.e. divide by the number of age intervals, not multiply by the width of those intervals? As an alternative, you could standardise the age-specific CFRs on the age-sex distribution of Italy, rather than on a uniform age distribution, so that the adjusted CFR equated to the CFR if incidence were constant by age and sex.
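The distinction the comment draws can be made concrete: with n age bands, a uniform average of age-specific CFRs divides their sum by n (not by the band width), while a direct age-standardised CFR weights each band by its population share. A sketch with made-up CFRs and weights (none of these numbers are from the preprint):

```python
# Hypothetical age-specific CFRs, one per 10-year age band (illustrative only).
cfrs = [0.001, 0.001, 0.004, 0.013, 0.036, 0.128, 0.205]

# Hypothetical population shares for the same bands (sum to 1).
weights = [0.18, 0.20, 0.16, 0.15, 0.13, 0.10, 0.08]

# Uniform average: divide the sum by the NUMBER of bands, n.
acfr_uniform = sum(cfrs) / len(cfrs)

# Age-standardised CFR: weight each band's CFR by its population share.
acfr_standardised = sum(c * w for c, w in zip(cfrs, weights))

print(acfr_uniform, acfr_standardised)
```

Because younger bands usually carry more population weight, the standardised figure here comes out below the uniform average; either way, multiplying the sum by the band width rather than dividing by n would inflate the result by a constant factor, as the comment suggests.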
On 2020-03-30 15:27:50, user Sinai Immunol Review Project wrote:
Summary and key findings: Summary of clinical trials registered as of March 7, 2020 in U.S., Chinese, Korean, Iranian and European registries. Out of the 353 studies identified, 115 were selected for data extraction. 80% of the trials were randomized with parallel assignment, and the median number of planned inclusions was 63 (IQR, 36-120). The most frequent therapies in the trials included: 1) antiviral drugs [lopinavir/ritonavir (n=15); umifenovir (n=9); favipiravir (n=7); remdesivir (n=5)]; 2) anti-malaria drugs [chloroquine (n=11); hydroxychloroquine (n=7)]; immunosuppressant drugs [methylprednisolone (n=5)]; and stem cell therapies (n=23). Medians of the total number of planned inclusions per trial for these therapies were also included. Stem cells and lopinavir/ritonavir were the most frequently evaluated candidate therapies (23 and 15 trials respectively), whereas remdesivir was only tested in 5 trials, but these trials had the highest median number of planned inclusions per trial (400, IQR 394-453). Most of the agents used in the different trials were chosen based on preclinical assessments of antiviral activity against the SARS-CoV and MERS-CoV coronaviruses.
The primary outcomes of the studies were clinical (66%); virological (23%); radiological (8%); or immunological (3%). The trials were classified as those that included patients with severe disease only; trials that included patients with moderate disease; and trials that included patients with severe or moderate disease.
Limitations: The trials evaluated provided incomplete information: 23% of these were phase IV trials but the bulk of the trials (54%) did not describe the phase of the study. Only 52% of the trials (n=60) reported treatment dose and only 34% (n=39) reported the duration. A lot of the trials included a small number of patients and the trials are still ongoing, therefore no insight was provided on the outcome of the trials.
Significance: Nonetheless, this review serves as a framework for identifying COVID-19-related trials, which can be expanded upon as new trials begin at an accelerated rate as the disease spreads around the world.
On 2020-03-31 19:02:12, user earonesty wrote:
It's an immunomodulator; it prevents some of the inflammation issues associated with COVID-19. Not a surprising result. There may be better ones, but since this is used to treat asthma and other pulmonary inflammation issues, it's a good choice.
On 2020-04-07 23:44:19, user Ronaldo Wieselberg wrote:
I see a lot of problems with this study.
First of all, it does not mention whether risk factors, such as hypertension or diabetes, were taken into account. The table provided for randomization doesn't show them either. A difference in risk factors could play a huge part in the outcome: if the control group had more people with pre-existing conditions, for instance, their risk of evolving to severe disease would be increased independently of HCQ. Moreover, there is no mention of whether the four individuals who progressed to severe disease had any pre-existing conditions, needed mechanical ventilation, or any other details of the "severity"; could it be an SpO2 of only 92%, per the inclusion criteria?
The paper does not clearly describe the evaluation criteria. How was cough measured? Was it dichotomous (has cough / no cough)? How could you determine whether the pneumonia "improved"? Was it according to the presence/absence of infiltrates on X-rays, or the percentage of lung compromised on CT scan? Calculating a meaningful p value for subjective criteria like these is really, really difficult.
Nor does it state the duration of symptoms prior to admission and the start of the intervention. People who had, for instance, 10 days of symptoms before seeing a physician are not comparable to individuals who sought medical assistance after two days of symptoms, and there is no mention of this.
On 2020-03-31 22:51:51, user Ruth Etzioni wrote:
When will the actual description of the model be available?
On 2020-03-31 22:59:25, user Whiskers wrote:
Even more worrying if it is air spread, we have been led to believe that it is only really contact spread unless someone coughs directly over you.<br /> Perhaps this accounts for the prolific spread of this disease.
On 2020-04-01 14:34:11, user Sinai Immunol Review Project wrote:
Summary: Retrospective study on 97 COVID-19 hospitalized patients (25 severe and 72 non-severe) analyzing clinical and laboratory parameters to predict the transition from mild to severe disease based on indicators (such as fasting blood glucose, serum protein or blood lipids) that are more accessible than inflammatory indicators. In accordance with other studies, age and hypertension were risk factors for disease severity, and lymphopenia and increased IL-6 were observed in severe patients. The authors show that fasting blood glucose (FBG) was altered and that patients with severe disease were often hyperglycemic. The data presented support that hypoproteinaemia, hypoalbuminemia, and reductions in high-density lipoprotein (HDL-C) and ApoA1 were associated with disease severity.
Limitations: In this study non-severe patients were divided into two groups based on average disease course: mild group 1 (14 days, n=28) and mild group 2 (30 days, n=44). However, mild patients with a longer disease course did not show an intermediate phenotype (between mild patients with a shorter disease course and severe patients), hence it is unclear whether this division was useful and how it impacted the analysis. Furthermore, the non-exclusion of co-morbidity factors from the analysis may bias the results (e.g. diabetic patients and glucose tests). It is not clear at what point in time the laboratory parameters were sampled. In Table 3, it would have been interesting to explore a multivariate multiple regression. The correlation lacks a positive control to assess its specificity to this disease versus any inflammatory condition. The dynamic study assessing the predictive value of the laboratory parameters is limited to 2 patients. Hence there are several associations with disease severity, but larger studies are necessary to test the independent predictive value of these potential biomarkers.
Findings implications: As hospitals are getting overwhelmed, a set of easily accessible laboratory indicators (such as serum total protein) would potentially provide a triage methodology separating potentially severe cases from mild ones. This paper also opens questions regarding metabolic deregulation and COVID-19 severity.
On 2020-04-03 05:01:21, user Jacob G Scott wrote:
Please find our update, with HIGHER recommended exposure times for porous PPE, on our github repo: https://github.com/TheoryDi...
We expect another update in the coming days with filtration/fit testing results at these exposures, as well as biologic validation.
Please also see recent CDC guidelines: https://www.cdc.gov/coronav...
and a cooperative groups recommendation for N95 decontamination: https://www.n95decon.org/
Please stay safe and healthy.
On 2020-04-17 19:08:16, user Jacob G Scott wrote:
See github repo for live updating and opportunities to contribute data: https://github.com/TheoryDi...
On 2020-04-04 19:53:38, user VirusWar wrote:
It is an interesting study, but I'm surprised you don't talk about the level of potassium in the blood. Did you check it? Was any magnesium given to correct the potassium level? In particular, you noted heart trouble in people with renal disease, but it is well known that they have excess potassium, which creates exactly such heart trouble. Also, was the H+A treatment stopped when QTc >= 500 ms?
On 2020-04-07 19:27:46, user Archisman Mazumder wrote:
Indian study showing COVID-19 affects the 20-39 yrs age group most in India. This really has to be studied further. Even the Health Ministry of India corroborated the findings.
On 2020-04-15 18:26:55, user Archisman Mazumder wrote:
These predictions actually matched.
On 2020-04-12 05:17:40, user John Roberts wrote:
Cytodyn’s drug Leronlimab is proving effective in treating the cytokine storm and getting severe Covid patients off of ventilators.
On 2020-04-12 08:33:12, user tsuyomiyakawa wrote:
Thanks, everyone, for your precious comments.
We are examining the potential confounders, including the ones mentioned here.
As Rosemary mentioned, BCG is an attenuated version of TB, and indeed a big protective effect of TB prevalence against COVID-19 exists. We will incorporate the data in the next version.
We obtained the data from the website of the European Centre for Disease Prevention and Control, and are re-analyzing the growth of the spread in a more quantitative manner. Basically, there are significant effects of BCG/TB against COVID-19 growth, which will replace the data shown in Figure 3.
Regarding tourists from China: according to a survey, the top 10 destination countries for China’s outbound travel are Japan, Thailand, South Korea, Indonesia, Singapore, Malaysia, Australia, UK, New Zealand, and Maldives, and 9 out of 10 of them have extremely low COVID-19 cases and deaths (4 or fewer deaths per million) as of April 13th, which makes it unlikely that traveling activity from China matters. This will be added to the discussion. Also, we evaluated the number of international arrivals in each country, and it did not essentially affect the results (almost at all).
As for masks and green tea, they cannot explain 1) the differences between Eastern Europe and Western Europe and 2) the low COVID-19 indices in Africa, South America and South East Asia. We may consider their potential effect once we can get good statistics representing those factors, but so far we have set a low priority for these potential confounders.
Anyway, we will upload the next version sometime next week, and it would be appreciated if you could keep providing critical comments, which will greatly improve our manuscript. Thank you!
On 2020-04-12 13:23:51, user Joe Gitchell wrote:
Thank you to the authors for taking the time and effort to report these findings in the midst of confronting the challenges from managing the COVID19 pandemic. Please stay safe!
And it is with humility that I make a request to them to do two things with their data on tobacco use. The first I think should be pretty straightforward, the second will depend on the specificity available within the Epic records:
1) In Tables 1 and 2, can you please break out "Never" and "Unknown" into separate categories; and
2) In these tables and in other analyses, can you please break out "tobacco use" at least into combustible (cigarettes, cigars, cigarillos, hookah, etc.) and noncombustible (smokeless tobacco, snus, vaping) categories?
Thank you. I also found your use of CART/decision-tree analysis really helpful, btw.
Joe
Disclosures:<br /> My employer, PinneyAssociates, provides consulting services on tobacco harm minimization on an exclusive basis to JUUL Labs, Inc., a manufacturer of nicotine vaping products. I also own an interest in an improved nicotine gum that has neither been developed nor commercialized.
On 2020-04-13 00:38:02, user Craig wrote:
The safety of HCQ alone has already been proven. It's the efficacy, both as prophylaxis and as treatment, that needs to be studied.
Studying HCQ in combination with a drug that is already known to have adverse cardiac effects seems like a study designed to produce data with a negative bias.
On 2020-04-13 13:45:53, user Rosemary TATE wrote:
Hi, I don't see the STROBE (for observational studies) guidelines checklist uploaded, although you ticked yes to this:<br /> "I have followed all appropriate research reporting guidelines and uploaded the relevant EQUATOR Network research reporting checklist(s) and other pertinent material as supplementary files, if applicable. "<br /> A lot of people seem to ignore these, but they are important, and any good journal will require them.<br /> Can you please upload? Many thanks.
On 2020-04-14 08:27:08, user Lisa Kane wrote:
Can the authors comment on the role air conditioning and/or building/residence heating may have played in the cooler cities?
That is, are cooler, measured city temperatures actually proxies for warm, indoor temperatures?
On 2020-04-14 14:03:10, user Chris Pericone wrote:
The death data in table 3 are the reverse of what is described in the results section. I assume the high and low dose columns were mistakenly switched in that table?
On 2020-04-14 18:23:20, user Phyllis Bergiel wrote:
Here's a dissociation statement from Rockefeller: https://www.rockefeller.edu...
On 2020-04-15 14:34:21, user Bio wrote:
I have several issues with this study:
I find not including time as a factor in the model bewildering. After all, time is the single most important factor for the number of cases in most of the countries in the model, yet the model is only log(cases) ~ population + temperature. In half a month, population and temperature won't change much, while the number of cases could increase several fold for some countries. Time is a critical factor to model and is more important than temperature and population. Without it, the model in the paper cannot be stable: as time changes, your conclusions will likely change.
Many other important factors were not considered. For example, at what point of COVID-19 growth is each country? If one compares March 14 to March 27, China's numbers are not much different, while the USA and many other countries have quite different numbers on those two days. The model cannot be stable due to this as well (time + point during growth). Also, what about containment policy causing slower growth? Effects from such important confounding factors were not considered in the model.
There are many other smaller issues such as USA cases were mostly in Northeast, with latitude clearly higher than the one used to represent USA. In China, the cases happened in many provinces with vastly different latitude/temperature but all cases counted at one latitude/temperature. Moreover, the vast majority of the cases happened in China during Jan/Feb while you used late March temperature to represent them. Inaccuracies like these seriously impact modeling latitude/temperature as a continuous variable. Excluding countries with small population but high case rate as outliers is also questionable, given that you modeled population size already.
Fig. 7 could benefit from also being plotted for 2/29, 3/14, 3/21. An alternative interpretation of Fig. 7 is that it shows two groups of countries/latitudes reflecting the temporal sequence of events rather than temperature: COVID-19 started in China, spread to South Korea and Italy, then Europe and America, which trade/travel often and happen to all be cold countries; then in March COVID-19 picked up in the southern hemisphere and tropical areas and is still growing there. It's likely time and policies, rather than temperature, that helped Australia's case rate stay relatively low, because the spread of COVID-19 was still early there and they learned from other countries to control the spread from early on.
Therefore I do not believe the paper provided convincing evidence for temperature-dependency of COVID-19.
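The omitted-time problem described above can be illustrated with a minimal toy simulation (all data synthetic; the coefficients and ranges are arbitrary assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic countries: in this toy world, temperature has NO true effect on
# cases, but colder countries were seeded earlier, so elapsed time since
# local onset is correlated with temperature.
temp = rng.uniform(-5, 30, n)                 # mean temperature (deg C)
t = 25 - 0.5 * temp + rng.normal(0, 3, n)     # days since local onset
log_pop = rng.uniform(15, 21, n)              # log population
log_cases = 0.2 * t + 0.5 * log_pop + rng.normal(0, 0.3, n)

def ols(X, y):
    """Ordinary least squares with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Paper-style model: log(cases) ~ population + temperature, no time term.
b_no_time = ols(np.column_stack([log_pop, temp]), log_cases)
# Same model with a time term included.
b_time = ols(np.column_stack([log_pop, temp, t]), log_cases)

print("temperature coefficient, time omitted: ", round(b_no_time[2], 3))  # spuriously negative
print("temperature coefficient, time included:", round(b_time[2], 3))     # near zero
```

With time omitted, the regression attributes the growth to temperature even though the simulated temperature effect is zero, which is exactly the instability argued above.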
On 2020-04-15 18:51:04, user Greg Lambert wrote:
The study seems to incorrectly use a UVAB meter instead of a UVC meter to measure the exposure from their UVC lamp; consequently their UV exposure readings are wrong (too low).
From the study:
"Ultraviolet light. Plates with fabric and steel discs were placed under an LED high power UV germicidal lamp (effective UV wavelength 260-285nm) without the titanium mesh plate (LEDi2, Houston, Tx) 50 cm from the UV source. At 50 cm the UVAB power was measured at 5 µW/cm² using a General UVAB digital light meter (General Tools and Instruments New York, NY)."
Their lamp emits effective UV wavelength of 260-285nm but a General UVAB meter only measures from 280 to 400 nm with a calibration point of 365nm.
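A trivial check of the band mismatch, taking the limits quoted above (lamp 260-285 nm; UVAB meter 280-400 nm) at face value:

```python
# Band limits as quoted in the comment above; treat them as nominal figures,
# not manufacturer-verified specifications.
lamp_lo, lamp_hi = 260.0, 285.0      # nm, lamp's effective UVC emission
meter_lo, meter_hi = 280.0, 400.0    # nm, UVAB meter sensitivity range

overlap_nm = max(0.0, min(lamp_hi, meter_hi) - max(lamp_lo, meter_lo))
lamp_width = lamp_hi - lamp_lo
print(f"meter overlaps only {overlap_nm:.0f} of {lamp_width:.0f} nm "
      f"({100 * overlap_nm / lamp_width:.0f}%) of the lamp's band")
```

On these nominal figures the meter's sensitivity window covers only 5 nm (20%) of the lamp's emission band, consistent with a severe under-reading.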
On 2020-04-16 01:47:35, user 777Rampage wrote:
I have the following questions and issues on their testing of the UVC LED.
1. The article states that a General Tools UVAB digital light meter was used. Is that the UV513AB meter? If so, that monitoring probe is only able to measure UVA and UVB, not UVC.
2. If using the General Tools UV512C meter, that probe's spectral range is 220 to 275 nm and it is used to measure low-pressure mercury UVC lamps, not UVC LEDs.
On 2020-04-15 20:08:45, user davidmeiser wrote:
No Evidence of Rapid Antiviral Clearance or Clinical Benefit with the Combination of Hydroxychloroquine and Azithromycin in Patients with Severe COVID-19 Infection https://www.sciencedirect.c...
On 2020-04-15 23:59:04, user Mark .Minnery wrote:
The average time between symptom onset and randomisation was 16.6 days. Could the authors discuss the implications of this potential confounder? Was the long time before treatment due to a delay between symptom onset and presentation at hospital?
On 2020-04-16 09:05:11, user Stef Verlinden wrote:
Please study the paper thoroughly before jumping to conclusions. Standard of care means that (most of the) patients were also treated with lopinavir-ritonavir, arbidol, oseltamivir, virazole, entecavir, ganciclovir and/or interferon-alpha.
The authors also did a post-hoc subanalysis of patients who received no medication other than HCQ in the treatment group, or nothing in the SOC group. And, for what it is worth, here they found ‘a significant efficacy of HCQ on alleviating symptoms’.
What could also be of interest is that patients treated with HCQ showed a significantly greater reduction in CRP. One of the proposed MOAs for HCQ is anti-inflammatory.
All in all, a very weak study from which not much can be concluded.
On 2020-04-15 22:33:38, user suradip das wrote:
Very interesting work. I have some questions:
1. Page 15 (Table 1): C-reactive protein is an indicator of cardiovascular disease. It is interesting that the authors chose to conduct the study in a population where 86% of all the patients had high CRP.
2. Out of the 84 patients receiving HCQ (90.5% having CRP > 40 mg/l and 45.2% having cardiovascular disease), only 3 patients died (3.6%). In comparison, the group which did not receive HCQ but had similar proportions of high CRP and CVD saw 4.1% mortality.
To summarize, there appears to be no significant difference in mortality rates when patients with CVD and COVID-19 are treated with HCQ versus a placebo.
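A rough check of whether 3.6% vs 4.1% could be a significant difference. The HCQ arm (3 deaths in 84) is given above; the comparison arm's size is not, so 4 deaths in 97 patients (~4.1%) is an assumed illustration, and with such small counts the normal approximation is only a sanity check:

```python
import math

# Two-proportion z-test. d1/n1 is from the comment; d2/n2 (4 of 97, ~4.1%)
# is a guessed comparison-arm size, labeled as an assumption.
d1, n1 = 3, 84
d2, n2 = 4, 97
p_pool = (d1 + d2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (d1 / n1 - d2 / n2) / se
# Two-sided p-value from the normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_value:.2f}")  # p far above 0.05
```

Under these assumed counts the difference is nowhere near significance, in line with the summary above.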
On 2020-04-16 11:23:18, user Rajesh Ranjan wrote:
Please read the abstract in the original paper (PDF). There is a slight misprint in the web content.
On 2020-04-16 21:55:50, user Sinai Immunol Review Project wrote:
Title:
Immunopathological characteristics of coronavirus disease 2019 cases in Guangzhou, China
The main finding of the article:
This study analyzed immune cell populations and multiple cytokines in 31 patients with mild/moderate COVID-19 (ave. 44.5 years) and 25 with severe COVID-19 (ave. 66 years). Samples from patients with fever who tested negative for SARS-CoV-2 were used as controls. At inpatient admission, the total lymphocyte number was decreased in severe patients but not in mild patients, whereas neutrophils were increased in severe patients. CD4+ and CD8+ T cells were diminished in all COVID-19 patients. CD19+ B cells and NK cells were decreased in both mild and severe patients; however, severe patients showed a notable reduction. These data might suggest a profound deregulation of lymphocytes in COVID-19 patients. Further analysis showed significant increases of IL-2, IL-6, IL-10 and TNF-α in the blood of severe patients at admission. Sequential samples revealed that IL-2 and IL-6 peaked on days 15-20 and declined thereafter. A moderate increase of IL-4 was seen in mild/moderate patients. Thus, elevation of IL-2 and IL-6 can be an indicator of severe COVID-19.
Critical analysis of the study:
There is no information on when the patients were assessed as severe or mild/moderate, at inpatient admission or later. The authors could have analyzed the correlation between immune cell populations and cytokine levels to see, for example, if severe lymphopenia correlated with higher elevation of IL-2.
The importance and implications for the current epidemic:
While similar findings have already been shown, the data corroborate alterations in circulating adaptive and innate immune cell populations and cytokines, and their correlation with disease severity. The increase of IL-2 and IL-6 at admission might be an indicator to start intensive therapies (like convalescent serum) at an early time.
On 2020-04-16 23:52:26, user Geoff wrote:
Why were they bronching these patients?? Sounds incredibly dangerous.
On 2020-04-18 19:50:29, user Oliver Van Oekelen wrote:
"Raw data will be available in GEO."
When will the data be uploaded? This preprint was posted almost two months ago. It would be amazing to fuel collaborative efforts across the globe and increase the impact of this work!
Thanks
On 2020-04-17 18:50:31, user thecity2 wrote:
The sample could be biased by a self-selection effect: people on FB who wanted to be tested because they thought they had the virus at some point.
On 2020-04-17 20:13:28, user AM wrote:
Thanks for the information. You did not specify whether you found IgM or IgG antibodies, which would tell us where people are on the seroconversion timeline. It would be great to conduct the same study now, or even two weeks from now, to see the changes and the transition toward herd immunity. Ideally, you have kept track of all the people you tested; we could derive a wide range of conclusions from this one-time test. Nonetheless, thanks for taking the time to conduct these tests.
On 2020-04-17 22:27:44, user Julie Larsen Wyss wrote:
Have the participants been informed if they were positive yet?
On 2020-04-18 02:30:01, user Don Phan wrote:
This does not appear to be random sampling. If you are using volunteers, then the sample is not representative of Santa Clara County. What have you done to correct this? And I am not talking about demographics. The people who suspected that they had the virus would be the most likely to risk leaving shelter-at-home, drive to the location, and participate.
On 2020-04-18 14:03:13, user Richard Davis wrote:
Could you guys make the data and code available for other researchers to try and replicate, now that the paper is out there?
On 2020-04-18 15:48:14, user Jingeol Lee wrote:
I hope this method extends to general application on forecasting the spread!
On 2020-04-18 20:15:11, user Marya Lieberman wrote:
Table 1 lists one entry for NPV as 32/25 and gives a value of 91%. Looks like a transcription or math error.
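Purely as arithmetic (and a guess, not a correction the authors have confirmed): 32/25 cannot be a proportion, while a denominator of 35 would match the stated 91%:

```python
# 32/25 exceeds 1, so it cannot be an NPV.
print(32 / 25)               # 1.28
# One speculative reading that matches the reported 91% is 32/35;
# only the authors can say which figure in the table is wrong.
print(round(32 / 35 * 100))  # 91
```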
On 2020-04-18 20:25:12, user Scott Howell wrote:
Conceptually this study and its conclusions are patently false. Internal endogenous hormone levels do not equate to exogenously administered androgens. Perhaps a read of Morgentaler's saturation model would enlighten the authors. A statement that long-term elevated free testosterone levels cause prostate cancer does not align with either the saturation model or the use of bipolar androgen therapy at supraphysiologic doses to treat prostate cancer. There is a big jump from Mendelian randomization to nuances in physiology and what occurs in practice. Just as secondary data analysis of insurance claims to establish risk should be banned, or at least relegated to its limitations, these types of studies drawing conclusions outside the scope of the data should be relegated to their limitations, or even less.
On 2020-04-21 08:35:42, user Shrisha Rao wrote:
On 2020-11-19 20:34:46, user Hilja Gebest wrote:
Thank you for this study. The serum level aimed for, 30 ng/ml, is sufficient for bone health, but the immune system needs higher levels, at least 40 if not 60 ng/ml. More importantly, the dose was given late in the illness, in addition to being quite low. Single bolus doses of vitamin D3 are rarely effective as an intervention, particularly when administered without the cofactors (magnesium, zinc, boron, vitamin B complex, K2 and omega-3). Bolus doses of vitamin D3 start becoming effective up around 500,000 IU, and there must be follow-up with a maintenance dose of at least 10,000 IU/day vitamin D3. The D3 group was also disadvantaged across many baseline values and risk factors, the three main ones known to us (hypertension, type II diabetes and COPD) by a factor of more than 4:3 vs the placebo group.
On 2020-11-22 01:11:32, user Mahan Ghafari wrote:
Your phylogenetic analysis is flawed: you cannot estimate a unique TMRCA for two independent introductions like this. Your constructed phylogenetic tree (fig.4) is blatantly incorrect (what's going on with the branches on the red clade B1??). I suggest you retract the preprint immediately and correct the fatal flaws in your analysis. Also, you are not the first group to study the phylogenetics of Iran and you should appropriately acknowledge earlier contributions.
On 2020-11-23 07:29:44, user Andreas Haas wrote:
The peer-reviewed article has been published in JIAS: https://onlinelibrary.wiley...
On 2020-11-25 02:49:25, user Dr Bishnu Mohan Singh wrote:
This preprint has been published in peer-reviewed journal Hindawi: Advances in Preventive Medicine. DOI: https://doi.org/10.1155/202...
On 2020-12-07 14:47:02, user Mark S Perry wrote:
Although you looked for any correlation with self reported BMI I’m wondering if the possible F/M difference might be down to a weight related threshold for a nutrient/drug? As with low dose Aspirin, effective in 1ry prevention of IHD in women - but not (heavier) men
On 2020-12-08 21:22:40, user Michal Piják wrote:
DOUBTS ABOUT THE EFFECTIVENESS OF MASS TESTING OF ASYMPTOMATIC POPULATION FOR CORONAVIRUS (SARS-CoV-2) IN SLOVAKIA
Indeed, it might seem that the number of positive PCR tests per day, per million inhabitants, started to slowly decrease two weeks after the nationwide testing of the whole country in Slovakia. However, this declining trend may be skewed by significantly less testing. For example, data from Monday 9.11.20 show that if as many tests had been performed in Slovakia as on Thursday 29.10.20 (when the highest number of positives in the second wave was reached), we would expect about three times as many positive cases on Monday 9.11.20, i.e. about 3150 instead of 1050.
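The adjustment described above can be sketched as scaling Monday's positives by the ratio of tests performed, assuming a constant positivity rate. The daily test counts below are placeholders chosen only to match the ~3x ratio implied here; the actual numbers of tests are not given in this comment:

```python
# Test-volume adjustment under a constant-positivity assumption.
positives_monday = 1050   # reported positives on Monday 9.11.20 (from the comment)
tests_monday = 5000       # assumed placeholder
tests_thursday = 15000    # assumed placeholder (~3x Monday's volume)

adjusted = positives_monday * tests_thursday / tests_monday
print(f"test-volume-adjusted positives: {adjusted:.0f}")  # ~3150
```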
The cause of the lower number of tests is not known; one of the reasons could be a lack of RT-PCR tests or staff on other days. After extensive testing with antigen tests, we had a big problem in Slovakia: until then we had evaluated the situation according to the positivity of PCR tests, but antigen testing made the situation unclear because people who tested positive on antigen tests fell out of the statistics. It should also be borne in mind that lower numbers of positive cases could also be explained by the tightening of epidemiological measures, and because most of the persons with positive antigen tests were quarantined and did not undergo PCR testing.
There is evidence that strategies based on a large number of tests may not produce the expected results. A good example is a comparison of the strategies used by New Zealand and Iceland.1-2 In both of these island countries, the first cases were identified at the end of February 2020, but each country took a different path. New Zealand was one of the few countries that openly announced a COVID-19 elimination strategy right at the beginning of the epidemic. This included a gradually strengthened system for monitoring and isolating contacts with the timely and consistent use of lockdowns and border controls. It should also be recalled that some EU countries, such as Belgium, the Czech Republic, Switzerland, France, Slovenia and the Netherlands, have had a progressive decline in the number of positives, despite the fact that they did not have any comprehensive testing of the entire country.
Unlike New Zealand and many other countries, Iceland's strategy did not include any lockdown period, no official border closure for non-residents, and negligible use of quarantine facilities. The cornerstone of Iceland's strategy was easy access to testing and mass screening, along with quarantine and contact tracing. According to data from October 21, New Zealand had 6 times fewer deaths, despite 4.5 times fewer tests than Iceland. Similarly, Slovakia, despite a more than 8 times lower number of tests, had half as many deaths per million inhabitants as Iceland. It should be recalled that, despite the large number of tests in Iceland, this was not full-population testing, and PCR tests were used. Taken together, these findings are further evidence that nationwide antigen testing in a country with low prevalence is ineffective.
References
1. Jefferies S, French N, Gilkison C. COVID-19 in New Zealand and the impact of the national response: a descriptive epidemiological study. Lancet Public Health. 2020;5:e612-e623.
On 2020-12-16 21:08:32, user Dieudonné Balike wrote:
This is a preprint, not yet accepted for publication
On 2021-07-02 14:30:43, user Joe Psotka wrote:
I don't understand why they do not report vaccination rates. Did no one participate in a clinical trial?
On 2020-09-25 02:24:05, user Robert Stephens wrote:
Perhaps it is that young children rarely get lung disease with this virus, possibly on account of having fewer pulmonary ACE2 receptors.
As such, they cannot produce aerosolised virus, hence they struggle to spread virus to the lungs of others.
When young children do manage to infect, it is a "safer" transmission. "Non-aerosol" transmission results in virus deposited in upper respiratory/oral mucosa, not lungs.
Dr Robert Stephens MB BS FACD
On 2020-09-25 03:59:30, user Eitan wrote:
The results are interesting and promising. However, the fact that they are statistically significant does not mean that they are statistically "strong" as long as the R² of the linear regressions is not presented. The plots are very scattered and it seems that the R² value is much smaller than 0.5. If this is the case, readers should be very cautious when drawing conclusions. If, for example, the R² is 0.3, the statistical meaning is that only 30% of the variance can be explained by the vitamin D values, while 70% of the variance is driven by other factors. Can you present the R² values?
Thanks
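The significant-but-weak distinction raised here can be shown with a small synthetic example (data invented for illustration, not from the paper): a fitted slope can be clearly non-zero while R² stays well below 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic data with a real but weak signal buried in large scatter.
x = rng.uniform(10, 60, n)            # e.g. a vitamin D level (invented)
y = -0.5 * x + rng.normal(0, 12, n)   # true slope -0.5, heavy noise

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot              # coefficient of determination
print(f"slope = {slope:.2f}, R^2 = {r2:.2f}")
```

Here the slope is recovered reliably, yet R² lands well under 0.5: most of the variance is unexplained, which is exactly why reporting R² alongside significance matters.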
On 2020-09-26 00:39:42, user Robert Stephens wrote:
In hospitalised cases, the viral measurement in the pharynx ("pharyngeal load") perhaps reflects a transfer of virions from the lower respiratory tract /lungs (i.e. "ascended" virions).
In mild, non-hospitalised cases (including children), infection is perhaps localised to the upper respiratory tract. The "pharyngeal load" may be high, but disease is mild as there is no involvement of lungs. <br /> Without lung involvement, there is no aerosolisation of virus, hence infectivity will be low as well, despite the high "load".
Robert Stephens MB BS FACD
On 2020-09-28 06:11:14, user Johann Holzmann wrote:
The study certainly provides an interesting perspective on the dynamics of SARS-CoV-2 transmission in a heavily dense population and on the infection fatality rate. However, it will be crucial to reproduce the findings in a different cohort using a different test to ensure representativeness.
Takita et al., July 2020, provided seroprevalence data from 2 primary care clinics in Tokyo for which the most common cause of patient visits is respiratory infections. They found an approx. 5% seroprevalence in their cohort during the outbreak in March/April and concluded that the number of cases corresponded to the cumulative number of confirmed COVID-19 patients by PCR test reported by the Tokyo Metropolitan Government.
PCR testing capacity in Tokyo was significantly increased to 4000-5000 tests per day, resulting in about 300 cases per day on average (~7% positive rate). It would be highly interesting to read a discussion on how the determined seroprevalence rate of 46.8% agrees with the number of PCR-positive cases in the Tokyo metropolitan area.
On 2020-09-29 08:19:46, user Alan Tomalty wrote:
"Allowing for heterogeneity reduces the estimate of "counterfactual" deaths that would have occurred if there had been no interventions from 3.2 million to 262,000, implying that most of the slowing and reversal of COVID-19 mortality is explained by the build-up of herd immunity."
Since the number of counterfactual deaths (no lockdowns) is still over 2x the expected deaths with lockdowns under the heterogeneity model, I don't understand why you can claim that
" implying that most of the slowing and reversal of COVID-19 mortality is explained by the build-up of herd immunity" RATHER THAN BY LOCKDOWNS. My caps are what you actually meant but didn't say. Or am I misunderstanding what you said and what you meant to say?
On 2020-09-30 18:53:22, user James Rubin wrote:
Please note the authors have identified an issue with the underlying dataset that was analysed for this pre-print. Specifically, around 3% of respondents in the dataset reported having been contacted by NHS contact tracers and asked to quarantine. According to official data from NHS Test and Trace, this should be less than 1%. Given this difference, it is likely that we will revise our interpretation of the quarantine data in the final peer-reviewed paper. Until then, as noted in the manuscript, the data relating to quarantine should be treated with caution.
On 2020-10-02 20:40:57, user Juan Mejía Vilet wrote:
This paper is now available at Salud Publica de Mexico 2020; https://doi.org/10.21149/11684 Link: https://saludpublica.mx/ind...
On 2020-10-05 14:16:29, user Julii Brainard wrote:
Our article got kicked back for the flaw of reporting 'suspected' cases, not only confirmed cases, in the same period. I will be curious to see how Ian's team's work is received using the same case criteria.
On 2020-10-08 05:54:37, user Gennadi Glinsky wrote:
It will be interesting to see if these initial observations could be confirmed and expanded by the worldwide prospective studies precisely mapping the population-scale levels of pre-existing immune cross-reactivity against SARS-CoV-2 to the clinical course and outcomes of the pandemic.
On 2020-10-09 22:47:46, user BannedbyN4stickingup4Marjolein wrote:
OK so this is a theoretical, mathematical model of the spread of the SARS-Cov-2 virus.
The methodology is explained in the paper. There are several aspects of the spread which the model makes no attempt to capture.
-The infection is repeatedly seeded by exogenous inputs from outside the population (except perhaps in New Zealand).
-The mode of infection is not homogeneous, subject to susceptibility, but appears to be very random, predicated on super-spreading events from which c. 80% of infections derive, such events affecting the very susceptible and hardly susceptible alike.
These are all fairly important effects on the spread of infection, yet none of them are incorporated in the mathematical model.
Each would have the effect of increasing the effective "herd immunity".
Hence I would be very cautious of using the paper, albeit a valuable academic/theoretical study, to inform public policy. To do so would be premature, and perhaps the authors should recognise this in their discussion.
On 2020-10-14 05:33:25, user Gennadi Glinsky wrote:
A different interpretation of these analyses suggests that preexisting T cells cross-reactive against SARS-CoV-2 are more likely to affect disease severity, because high levels of pre-existing immunity in uninfected individuals appear associated with lower mortality (https://www.bmj.com/content... ). A significant direct impact on the innate herd immunity against COVID-19, and an effect on populations' susceptibility to the infection, seem less likely because no association was observed between levels of preexisting immunity and prevalence of the infection.
On 2020-10-16 12:48:05, user Kirk Schlesinger wrote:
I thank the WHO and all the national health agencies who participated in gathering data for the SOLIDARITY trial.
With a mortality rate above 11% and diabetes incidence of 25%, the test subjects, all hospitalised and over a third on ventilators, were collectively a group with underlying conditions and many were already in an advanced stage of COVID-19.
What I would hope the peer reviewers will help the SOLIDARITY trial authors explore in greater depth is the sub-cohort of patients in the SOLIDARITY trial with less advanced COVID-19: those not yet on oxygen or a ventilator, presenting mild symptoms when entering hospital.
The hypothesis I would like SOLIDARITY and other trials to explore is that treatment with antivirals like Interferon and Remdesivir earlier in the course of disease is more beneficial than when more severe symptoms of COVID-19, particularly inflammation related to immune response, present and respiratory supports are introduced.
There is evidence emerging from other studies that a Complete Blood Count (CBC) test, a conventional and reasonably low-cost test readily available and in use in the USA, can provide an advance indicator of probable severe inflammatory response using the neutrophil-lymphocyte ratio (NLR) produced by the CBC.
If an antiviral such as Remdesivir; Interferon alpha, beta or lambda; Interleukin-6 inhibitor; or possibly even hydroxychloroquine (HCQ) is administered to patients with high NLR prior to presenting severe symptoms, some or all of these treatments may prove significantly more effective than when administered after severe symptoms present.
I hope that the SOLIDARITY data and the data provided in other trials can be parsed and interpreted to test this hypothesis.
On 2020-10-17 19:49:43, user Clayton Bigsby wrote:
The study is 100% inconclusive and warrants further investigation. The clinical trial that it references, http://www.isrctn.com/ISRCT... (second paragraph, line 1, first sentence under "Who can participate?"), says: "Adults (aged over 18 years) hospitalized with definite COVID-19 and not already receiving any of the study drugs." However, it fails to report any additional demographic information about said "adults", which raises questions about how these results were achieved.
The other clinical trial, https://clinicaltrials.gov/..., was sponsored by the Institut National de la Santé Et de la Recherche Médicale, France, and it too fails to mention the demographic data of the participants. However, the Institute for Demographic Studies (Ined) in France has published its results:
https://www.rfi.fr/en/franc...
"For example, of the 3,523 deaths due to Covid-19 recorded in France on Tuesday evening, “84 percent of deaths are people over 70,” Robine says, adding 19 percent are over 90.
Although younger people come down with serious enough cases to be admitted to ICUs, data show they are far likelier to make a recovery. Less than 2 percent of deaths in France have been patients under age 50."
Mixing the deaths of 80+ year olds with pre-existing conditions with those of under-30s just to claim that a potential treatment doesn't work is pretty messed up and deceptive.
On 2020-10-26 09:27:03, user Leaf Expert wrote:
Great research! The FDA reported that it fully endorsed the use of remdesivir as a treatment for COVID-19 requiring hospitalization in all adult and some pediatric patients.
Remdesivir is only to be administered in a hospital or healthcare setting capable of providing acute care comparable to inpatient hospital care. The drug, also referred to by the FDA as Veklury, is the first treatment for COVID-19 to receive FDA approval, according to an FDA news release. It can be used for adult patients and for pediatric patients who are over 12 years old and weigh more than 40 kg (88 lb).
The drug was recently in the news after it was reported to be among the treatments given to President Donald Trump during his bout with COVID-19.
On 2020-10-17 11:52:14, user fvtomasch wrote:
I would add not only low sodium levels but also low magnesium/zinc/potassium/B12/vitamin D and many essential nutrients depleted by the medications people take for comorbidities of hypertension and diabetes, like metformin/HCTZ/PPIs, which deplete these levels over time. This basically opens Pandora's box to having a more severe case of Covid and other diseases, rather than a mild or asymptomatic case with proper or optimum nutrient levels. We are an over-medicated society, plain and simple. A pill for everything except good health.
On 2020-10-19 16:19:34, user Angry Cardiologist wrote:
This is an interesting set of observations among a self-selected series of patients who are suffering chronically following respiratory infection that may or may not have been from SARS-CoV2. As the paper explains, 73 of the 201 subjects did not have PCR or antibody confirmation of SARS-CoV2 infection. For a paper purporting to describe a syndrome that is specific to “long COVID” it behoves them to be more selective in their inclusion.
My next criticisms will focus on the heart, as this is my area of specialty.
1) The identification of “borderline” LVEF in MR was based on *echo* derived data in the Framingham Heart Study, per their citation (S6). The mean age of this cohort of 363 individuals was 57 years old (±SD 13y). This contrasts greatly with this population of mean age of 44 years, ± SD 11.0 years. Wrong measurement. Wrong population.
2) For their determination of myocarditis by MR, they say "T1 is a field-strength specific parameter in line with study-specific. Thresholds based on healthy controls in the same setting n=5." This is not based on any published data. Only 5 controls (not otherwise described) is quite a flimsy basis for normal values.
For this paper to be taken seriously, they need to address these very obvious weaknesses in subject selection and cardiac image analysis. I will leave further analyses of other areas to their respective experts.
On 2020-10-22 03:41:09, user koch wrote:
I read your observational study with interest but have some questions about the methods section. It seems that exposure/treatment with digoxin was determined by its presence/absence on the discharge medication list. There is also mention of two scripted phone interviews with patients and relatives. What was the role, content, scope, and timing of these interviews? Were the interviewers "masked" in terms of awareness of who was exposed to/treated with digoxin? If a patient was discharged on digoxin but stopped before the first interview, how were they categorized? Conversely, if a patient was not discharged on digoxin but was started on it before the first or second interview, how were they categorized?
On 2020-10-23 05:06:05, user Robert Clark wrote:
This is another paper where positive effects of HCQ are left out of the conclusions the paper reports. In Table 2, the line for mortality at 28 days shows mortality cut by a factor of 0.54 on HCQ. The difference is not at the standard 0.05 significance level, with a p-value of 0.22. However, this does not mean the result is false. It could just as well be that the sample size is not large enough for the significance to reach the 0.05 level.
Too often this is overlooked in medical studies. For instance a significance level of 0.05 means there is 5% chance that the difference is just by chance. Or said another way there is a 95% chance that the difference is not by chance alone, meaning the difference is a real effect.
But by the same token a statistical significance of 0.22, i.e., the p-value being 0.22, means there is a 78% chance that it is a real effect. In other words, in probability terms it's more likely than not to be a real effect.
{There are several online calculators of, for example, Fisher's exact test of statistical significance, such as here: https://www.graphpad.com/qu...}
Yet, often when a result does not reach the 0.05 significance level, it is common, and mistakenly, reported as the result being proven wrong.
In this regard it must be remembered that these calculated levels of statistical significance depend on the sample size. For instance, with the mortality rates for the HCQ and non-HCQ cases exactly the same as in this study, but at a large enough sample size, the statistical significance could reach the 0.05 level. This is especially important in a study such as this one, where the originally planned number of subjects had to be greatly reduced because of a reduced number of cases of the illness.
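The sample-size point can be sketched numerically. The death counts (6 on HCQ vs 11 without) match the Table 2 figures discussed below; the equal arm sizes of 200 are an assumption for illustration only, and the normal approximation is used in place of an exact test:

```python
import math

def two_prop_p(d1, n1, d2, n2):
    """Two-sided p-value for a difference in proportions (normal approximation)."""
    p1, p2 = d1 / n1, d2 / n2
    p_pool = (d1 + d2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Assumed equal arms of 200 patients each.
p_small = two_prop_p(6, 200, 11, 200)
# Identical mortality rates, but in a trial four times larger:
p_large = two_prop_p(24, 800, 44, 800)
print(f"p at original size: {p_small:.2f}; p at 4x size: {p_large:.3f}")
```

Under these assumptions the same mortality ratio gives p ≈ 0.2 at the small size but crosses the 0.05 threshold once the trial is four times larger, which is the dependence on sample size described above.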
Another aspect of this Table 2 becomes apparent from unwrapping the data. The study uses what is called a “composite endpoint”, or “composite outcome”. This means two subcases are combined into one. In this study, the cases of “invasively mechanically ventilated”, i.e., intubated, and “deaths” are combined, called the “Primary outcome” in the Table 2.
But the number of deaths specifically on invasive mechanical ventilation is an important number to find out. This is because the mortality rates for that category have been so high. So, the RECOVERY trial for example counted it as a breakthrough when dexamethasone cut deaths in that category by 30%.
In this study, the “Primary outcome” is the union of the two sets, “invasively mechanically ventilated” and “deaths”. What we want though is the number of those ventilated patients who died, the intersection of the two sets.
Use the formula |A ∪ B| = |A| + |B| − |A ∩ B|, which simply means the number in the union is found by adding the numbers in the two sets minus the number in the overlap.
We want the number in the intersection, though, so we'll turn it around to get:
|A ∩ B| = |A| + |B| − |A ∪ B|
For HCQ:<br /> |ventilated ∩ deaths| = |ventilated| + |deaths| − |ventilated ∪ deaths| = 3 + 6 − 9 = 0. So 0 deaths out of 3 patients on invasive ventilation on HCQ.
But for non-HCQ:<br /> |ventilated ∩ deaths| = 4 + 11 − 12 = 3, so 3 of the 4 patients on invasive ventilation not taking HCQ died.
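The unwrapping above is just inclusion-exclusion; as a quick sanity check, here is a minimal sketch using the counts quoted from Table 2:

```python
def deaths_on_ventilation(ventilated, deaths, primary_outcome):
    """Recover the overlap of a composite endpoint via inclusion-exclusion:
    |A ∩ B| = |A| + |B| - |A ∪ B|, where the composite "primary outcome"
    is the union of "invasively ventilated" and "deaths"."""
    return ventilated + deaths - primary_outcome

hcq = deaths_on_ventilation(3, 6, 9)        # 0 deaths among 3 ventilated on HCQ
non_hcq = deaths_on_ventilation(4, 11, 12)  # 3 deaths among 4 ventilated without HCQ
```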
The numbers are too small to draw firm conclusions though. It is unfortunate that the study could not be completed with the originally planned number of cases.
One last fact left out of the conclusions of the paper that supports benefits of HCQ:
Figure 2. Analysis of outcomes in predefined subgroups.<br /> For analysis of the primary outcome in the subgroup of patients receiving azithromycin at randomization, the relative risk could not be calculated because the primary endpoint occurred in 0 of 10 patients who received both azithromycin and hydroxychloroquine compared to 3 of 11 patients who received azithromycin and the placebo.
Robert Clark
On 2020-10-28 13:08:33, user juanpa wrote:
I completely agree with what you say about the (unintended?) "forgetting" of the huge difference in deaths at 4 weeks.
I also agree on the meaning of the "p".
To all this should be added another "forgetfulness": the percentage of intubations and mechanical ventilations in the active group is 40.9% lower than in the placebo group over the same time period (2.4% vs 3.3%).
The number of deaths specifically on invasive mechanical ventilation is new to me.
In my opinion there are three more criticisms to add:
1st.- If the study designers wanted to verify the effectiveness of the Raoult method scientifically, they only had to clone it. It is evident that this was never their intention (there is no AZ or Zn, neither the doses nor the timing are the same, and the treatment was not started early enough either, ...)
2nd.- The funding agencies should never have financed it until the designed treatment cloned that of the Marseillais.
3rd.- For me, the reasons for the premature suspension of the study were never very clear. Did they think there would not be a 2nd wave? Could they not have waited for it?<br /> Someone might suspect that the preliminary results were too flattering for HCQ, and that the results had to be prevented, at all costs, from becoming statistically more significant.
Sorry for my bad English.
On 2020-10-23 15:19:56, user Nicole M. Bouvier wrote:
This preprint is now published in revised form, including new data: <br /> Convalescent plasma treatment of severe COVID-19: a propensity score–matched control study<br /> Sean T. H. Liu, Hung-Mo Lin, Ian Baine, ..., Judith A. Aberg, Nicole M. Bouvier <br /> Nat Med 2020, DOI https://doi.org/10.1038/s41591-020-1088-9
On 2020-10-23 20:18:59, user María José wrote:
I believe this article is very interesting, as it combines the biological and clinical bases in one article. I just want to congratulate the authors. On the other hand, I have some questions about your article. First, why didn't you include annexin V? Second, why was the final part of the protocol not controlled, and why wasn't the sample size bigger?
On 2020-10-24 02:52:00, user CDSL wrote:
Dear Authors,
I enjoyed reading about this research, and I think you all do a great job of providing logical explanations for the data you collected. However, one major question that remains with me after reading this paper is: what is the novelty of this study? There are many references in both the introduction and discussion sections to previous studies that do or do not align with the results of this study, and it seems that the data collected here is just another study of the same correlation between these cytokines and MDD. I think a direct reference to the novelty of this information in the abstract, discussion, and conclusion would help solidify the data being collected. Additionally, how did you reach the conclusion that females exhibit greater serum cytokine levels than males at higher Ham-D scores? The visual data does not seem to support this conclusion conclusively, so I think in the future it would be beneficial to elaborate on the actual statistical analysis used to reach it, and to explain in the discussion why females would potentially have higher cytokine levels.
On 2020-10-24 23:37:11, user Nando wrote:
Out of 110 cases, 27 created secondary exposures, of which 23 occurred in closed environments.
Conversely, 71 cases were in closed environments and did not generate a secondary exposure.
As presented, the data are statistically insignificant; they do not prove that closed environments increase the risk of COVID exposure.
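For what it's worth, the remaining cell counts can be inferred from the figures quoted above (an assumption: that the other 110 − 94 = 16 cases were in open environments), and the secondary-exposure rates come out nearly identical:

```python
# Counts quoted in the comment
total_cases = 110
total_secondary = 27
closed_secondary = 23
closed_no_secondary = 71

# Inferred open-environment cells (an assumption, not stated in the paper)
open_secondary = total_secondary - closed_secondary  # 4
open_no_secondary = total_cases - closed_secondary - closed_no_secondary - open_secondary  # 12

rate_closed = closed_secondary / (closed_secondary + closed_no_secondary)  # 23/94, about 0.245
rate_open = open_secondary / (open_secondary + open_no_secondary)          # 4/16 = 0.25
```

With rates this close, no test would find a difference, which is the point being made.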
On 2020-10-27 02:05:30, user Critical Dissection wrote:
Dear author,
I enjoyed reading the article and I liked how the abstract was divided into introduction, methods, results and conclusion. That really helped me get an idea of what I would be reading. The methods section was detailed, which was good. However, I had some difficulty and confusion when reading the paper. I thought the figures could be explained better, because I had trouble dissecting them. Some issues with the methods were the reduced sample size of the study and the lack of long-term follow-up for atrial flutter relapse.
On 2020-10-28 17:44:49, user Andrea Camperio wrote:
Here you can finally find our research on the COVID pandemic, revealing that the first wave, as modeled early on by Giordano et al. (2020; reference in the text) using the SIDARTHE algorithm, which influenced the Italian government's decision toward lockdown, was dramatically worse in the model than what actually happened.
This suggests there is good hope that the pessimistic projections for this second wave will go unfulfilled as well. We are still actively developing new strategies to counteract the virus's effects. We are on the brink of implementing vaccination, new medicines are becoming available, and older ones have been rehabilitated, so there is good hope of winning new battles to defeat the virus.
My personal prediction is that when the whole country, 60 million people, has been infected, 2% of them will need special and intensive care (the present fraction of infected needing intensive care is about 2%), which means 1.2 million people. If intensive care is available and sufficient, only 2% or less will die (present survival rates in intensive care with SARS-CoV-2), which means around 12,000 people, once everyone is (and if) exposed to the infection. However, if intensive care is not sufficient, then at present rates 40% of the worst cases (of the 1.2 million) will risk their lives without intensive-care support, equal to about 400,000 people.
All depends on the evolution of the virus. The virus is evolving in two directions, as do all other airborne viruses that affect humans, such as flu. In the first direction, which we have already seen, the virus is evolving toward being less and less lethal, because harming the host means extinguishing itself as well. The second direction, however, is more dangerous: evolving toward faster and faster diffusion among new human hosts.
All human flus spread very fast: within 3-5 months they affect a very large portion of the population (30-40%), with very low mortality, usually 8,000 to 12,000 lives every year, mostly old and fragile individuals (about 0.0002% of infected individuals). SARS-CoV-2 at present is killing at 0.1% without the support of intensive care, and 0.002% with the supplement of intensive care, which means it is between 1,000 and 10 times more lethal than a normal flu. In other words, if the virus affects the whole population over ten years or more (improbably slow), the rate of people needing intensive care will stay below 10,000 per month, and affordable by our present health system. At the other extreme, if the virus spreads to the whole population in just one year (extremely fast given the present rate), there will be ten times more people needing intensive care than the places available, which will mean around 400,000 people at risk at present rates.
Hence my personal prediction is that this pandemic in Italy will take between fewer than 10,000 and 400,000 lives before transforming into a normal human flu, depending on the virus's evolution regarding speed of infection and decrease of lethality.
On 2020-10-28 19:15:20, user Dhurgham al-karawi wrote:
This paper has been published in World Academy of Science, Engineering and Technology,<br /> International Journal of Computer and Information Engineering,<br /> Vol. 14, No. 10, 2020, with the new title "Artificial Intelligence-Based Chest X-Ray Test of COVID-19 Patients".
On 2020-10-29 06:19:38, user Marm Kilpatrick wrote:
This is a very nice study. Unfortunately, two pieces of information are missing that make it very difficult to build on this study or compare it to the vast data on viral loads over time that are available from other studies:
1) the date of symptom onset for the 13 symptomatic patients. Can you indicate this date of symptom onset on the figure with the individual viral loads (Supp Fig 13)?
2) a conversion of viral loads from Ct values into copies per swab. This could be done either by re-running the samples with standards on the plate, or by simply running some standards with known copies. I am aware that this relationship (Ct-viral copies) can vary from machine to machine and even a little from run to run on the same machine, but without this conversion the Ct values in this study can't be compared to other studies that used different assays, machines, etc. Given that you were willing to use Ct scores from the Florida labs in your analysis (with the relationship in Figure S5) it seems like it would be possible to run a few standards and at least get an estimate of what viral loads you observed in copies/swab.
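For readers unfamiliar with the requested conversion, it is a linear fit on log10(copies); a minimal sketch, where the slope and intercept are illustrative placeholders that would have to be fit from standards of known copy number run on the same assay and machine:

```python
def ct_to_copies(ct, slope=-3.32, intercept=40.0):
    """Estimate copies per reaction from a qPCR Ct value via a standard curve.

    The standard curve is Ct = intercept + slope * log10(copies), so
    copies = 10 ** ((ct - intercept) / slope).  A slope near -3.32
    corresponds to ~100% amplification efficiency (doubling per cycle);
    both parameters here are illustrative, not from this study.
    """
    return 10 ** ((ct - intercept) / slope)

# With these placeholder parameters, each 3.32-cycle drop in Ct
# corresponds to a tenfold increase in copies.
```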
Adding these two aspects to your paper would greatly enhance its value for the broader scientific community.
A third component, which may be much more difficult for most samples but might be possible, would be to indicate the likely day of infection, if this can be inferred from case investigation. This would allow the data to be even more informative in mapping the relationship of viral load back to the day of infection.
Thank you,<br /> Marm Kilpatrick
On 2020-10-30 09:59:29, user RS wrote:
This is an interesting paper. I recognise the caveats relating to correlations which the authors acknowledge. I am however confused. Given the results found:<br /> 'There were similar incidence rates among SAH + MFM states (95% CI, 1.19% to 1.64%. n=34), SAH + no-MFM states (95% CI, 1.26% to 2.36%. n=9) and no-SAH + no-MFM (95% CI, 1.08% to 1.63%. n=7). However, SAH+MFM states (n=34), SAH+no-MFM states (n=9) had significantly higher averages in daily new cases and daily fatality, case-fatality-ratio (CFR) and mortality rate (per 100,000 residents) than no-SAH+no-MFM states during pandemic periods (about 171 days), respectively.' how can the authors conclude that<br /> 'This study provided direct evidence of a potential decreased in testing positivity rates, and a decreased fatality to save life when normalized by population density through strategies of SAH + MFM order'? I have looked at the paper and I can find no evidence for that conclusion. (Normalising with regard to population density found no difference.) Indeed the authors state that 'Furthermore, dismissing a low-cost intervention such as mass masking as ineffective because there is no evidence of effectiveness in clinical trials, is potentially harmful.' Surely non-pharmaceutical interventions such as masking should be evidence-based, as the tragedy of the advice to mothers to lay their newborns on their stomachs showed.<br /> I would appreciate clarification, thanks.
On 2020-10-30 14:16:43, user Michael Ford wrote:
Still have to wait for peer review to see how much weight this should carry. This is just a pre-print article.
On 2020-11-06 10:28:28, user kdrl nakle wrote:
This is like saying that eating in rooms with more than two tables is associated with obesity. Absurd.
On 2020-11-08 03:03:45, user perrottk wrote:
Comments on “A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children”<br /> I question the validity of attempting to determine a BMC for the effect of fluoride intake on IQ without first ascertaining whether there is a real effect. The problem with this document is that it assumes an effect without making a proper critical assessment of the evidence for a causal effect.<br /> The draft paper relies completely on two studies which reported very weak relationships from exploratory analyses. There is nothing wrong with doing exploratory analyses, provided their limitations are accepted. Such analyses can indicate possibilities for future studies testing possible causes, but in themselves they are not evidence of causation. These studies provide no evidence of a causal effect.<br /> The studies this draft relies on as evidence that fluoride lowers child IQ have the following problems.<br /> 1: Correlation is not evidence of causation, no matter how good the statistical relationship. And reliance on p-values is not a reliable indicator of the strength of a relationship anyway. The two studies relied on here do not report the full results of the statistical analyses, which would have revealed the weaknesses of the relationships.<br /> 2: These two studies were exploratory, using existing data. They were not experiments specifically designed to establish a cause.<br /> 3: Many other factors besides those investigated can obviously be important in exploratory studies where there is no control of population selection. While authors may claim confounders are considered, it is impossible to do this completely; there are so many possible factors to consider. Most are not included in the datasets used, and the researchers may make their own selection anyway.<br /> The study of Malin & Till (2015), referred to in this draft, illustrates the problems.
Malin & Till (2015) reported what they considered reasonably strong relationships (p-values below 0.05 and R-squared values of 0.21 to 0.34, indicating their relationships explained 21% to 34% of the variance in ADHD prevalence). However, their consideration of possible other risk-modifying factors was limited. They did not include state elevation, which Huber et al (2015) showed was correlated with fluoridation. The strength of Huber's relationship (R-squared 0.31, indicating elevation explained 31% of the variance in ADHD prevalence) was similar to that reported by Malin & Till for fluoridation.<br /> Perrott (2018) showed that when elevation is included in the statistical analysis, the relationship of ADHD prevalence with fluoridation is non-significant (p>0.05). This shows the danger of relying on statistical relationships from exploratory studies where consideration of other possible risk-modifying factors is limited.<br /> 4: This draft paper relies on the reported links between cognitive factors and fluoride intake without testing for a causal effect. But it also does not critically assess those correlations. The problems of confounders have already been mentioned, but these two studies report very weak relationships or, in most cases, no statistically significant relationships.<br /> For example, of the 10 relationships between measures of fluoride exposure and cognitive effects, Green et al (2019) reported that only 4 were statistically significant (Perrott 2020). That is not evidence of a strong relationship and underlines the danger of assuming correlations (especially selected correlations) are evidence of causation. Incidentally, this draft paper mentions the study of Till et al (2020), which also reported relationships between fluoride exposure in bottle-fed infants and later cognitive effects.
In this case only three of the 12 relationships reported were statistically significant (Perrott 2020).<br /> Even those relationships reported as significant were still very weak. For example, Green et al (2019) reported a relationship for boys which explained less than 5% of the variance in IQ measures.
The relationships reported by Bashash et al (2017) were also extremely weak, explaining only about 3.6% of the variance in IQ and 3.3% of the variance in GCI. This weakness is underlined by other reports of relationships found in the Mexican ELEMENT database. Thomas (2014) did not find a significant relationship of MDI with maternal urinary fluoride for children aged 1 to 3, although in a conference poster Thomas et al (2018) reported a statistically significant relationship for urinary fluoride adjusted using creatinine concentrations.<br /> 5: As well as ignoring the incidence of non-significant relationships in these studies, this draft paper also ignores the findings of positive relationships in other studies. For example, Santa-Marina et al (2019) reported a positive relationship between fluoride intake, indicated by maternal urinary fluoride, and child cognitive measures. Thomas (2014) also reported a positive relationship of child IQ (MDI for 6-15-year-old boys) with child urinary fluoride.<br /> 6: The draft paper describes the two studies it uses for its analysis as “robust” but ignores the fact that the findings in these and other relevant studies are contradictory. For example, the findings reported in the two papers differ in that Bashash et al (2017) did not report different effects for boys and girls, whereas Green et al (2019) did. Santa-Marina et al (2019) reported opposite effects to those of Bashash et al (2017) and Green et al (2019). These contradictory findings, together with the lack of statistical significance for most of the relationships investigated, are perhaps what we should expect from relationships as weak as these.<br /> Summary<br /> The paper relies on weak relationships from exploratory studies. Such relationships, even where strong, cannot be used as evidence of causation, and to assume so can be misleading. BMCs and similar functions derived without any evidence of real effects are not justified.
While the derived BMCs may be used by activists campaigning against community water fluoridation, they will be misleading for policy makers. This sort of determination of a BMC is at least premature and at worst meaningless.<br /> References:<br /> Bashash, M., Thomas, D., Hu, H., Martinez-mier, E. A., Sanchez, B. N., Basu, N., Peterson, K. E., Ettinger, A. S., Wright, R., Zhang, Z., Liu, Y., Schnaas, L., Mercado-garcía, A., Téllez-rojo, M. M., & Hernández-avila, M. (2017). Prenatal Fluoride Exposure and Cognitive Outcomes in Children at 4 and 6–12 Years of Age in Mexico. Environmental Health Perspectives, 125(9).<br /> Green, R., Lanphear, B., Hornung, R., Flora, D., Martinez-Mier, E. A., Neufeld, R., Ayotte, P., Muckle, G., & Till, C. (2019). Association Between Maternal Fluoride Exposure During Pregnancy and IQ Scores in Offspring in Canada. JAMA Pediatrics, 1–9.<br /> Huber, R. S., Kim, T.-S., Kim, N., Kuykendall, M. D., Sherwood, S. N., Renshaw, P. F., & Kondo, D. G. (2015). Association Between Altitude and Regional Variation of ADHD in Youth. Journal of Attention Disorders.<br /> Malin, A. J., & Till, C. (2015). Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association. Environmental Health, 14(1), 17.<br /> Perrott, K. W. (2018). Fluoridation and attention deficit hyperactivity disorder: a critique of Malin and Till (2015). British Dental Journal, 223(11), 819–822.<br /> Perrott, K. W. (2020). Health effects of fluoridation on IQ are unproven. New Zealand Medical Journal, 133(1522), 177–179.<br /> Santa-Marina, L., Jimenez-Zabala, A., Molinuevo, A., Lopez-Espinosa, M., Villanueva, C., Riano, I., Ballester, F., Sunyer, J., Tardon, A., & Ibarluzea, J. (2019). Fluorinated water consumption in pregnancy and neuropsychological development of children at 14 months and 4 years of age. Environmental Epidemiology, 3.<br /> Thomas, D. B. (2014).
Fluoride exposure during pregnancy and its effects on childhood neurobehavior: a study among mother-child pairs from Mexico City, Mexico [University of Michigan].<br /> Thomas, D., Sanchez, B., Peterson, K., Basu, N., Angeles Martinez-Mier, E., Mercado-Garcia, A., Hernandez-Avila, M., Till, C., Bashash, M., Hu, H., & Tellez-Rojo, M. M. (2018). OP V – 2 Prenatal fluoride exposure and neurobehavior among children 1–3 years of age in Mexico. Environmental Contaminants and Children’s Health, 75(Suppl 1), A10.1-A10.<br /> Till, C., Green, R., Flora, D., Hornung, R., Martinez-mier, E. A., Blazer, M., Farmus, L., Ayotte, P., Muckle, G., & Lanphear, B. (2020). Fluoride exposure from infant formula and child IQ in a Canadian birth cohort. Environment International, 134, 105315.
On 2020-11-09 17:52:45, user Elena Criscuolo wrote:
The manuscript has been published on 16 October in the Journal of Medical Virology as:<br /> "Weak correlation between antibody titers and neutralizing activity in sera from SARS-CoV-2 infected subjects"<br /> https://onlinelibrary.wiley...
On 2020-11-11 19:43:04, user Dr. Amy wrote:
"1081 patients with a diagnosis of COVID-19 were admitted between May 5 and July 31, 2020 in our hospital. 793 patients had mild disease. 545 patients received steroids, and 125 patients received TCZ along with steroids for treatment. We did not have any control group as TCZ was available in our hospital and was a part of the treatment protocol since we started treating COVID-19 patients." I'm a bit confused as to why you can't use some of the 956 patients who didn't get TCZ as controls? Since patients on room air did receive TCZ, surely there are patients at all levels of severity who could serve as a control group to demonstrate that early course TCZ matters?
On 2020-11-18 19:54:47, user Donald R. Forsdyke wrote:
RISK ALLELES FAVOUR POSITIVE SELECTION OF CELLS POISED FOR "NEAR-SELF" REACTIVITY
The hypervariable CDR3 regions of T cell receptors (TCRs) show specificity for peptides (p) that can associate with individual-specific sets of MHC (HLA) proteins. Different individuals inherit different sets of MHC genes (polymorphism). T cells defend against pathogens by recognizing pathogen-derived peptides complexed with MHC proteins (pMHC). However, T cells can also cause autoimmune disease by reacting with an individual's own peptides complexed with MHC proteins. Thus, there are "inter-individual differences in autoimmune disease risk," and "CDR3 patterns associated with autoimmune disease risks might indicate T cell reactivity to pathogenic antigens." Indeed, vulnerability to autoimmune disease is strongly correlated with inheritance of certain MHC sets ("risk alleles").
From a statistical study of pMHC-TCR sequence covariance in human populations, the authors conclude that "MHC risk polymorphisms modulate the process of thymic selection and give rise to TCR repertoires that may be poised for autoreactivity." However, they also state that “T cells that cannot generate substantial TCR signaling from any HLA-peptide complex die by neglect (positive selection).” This implies that death by neglect equates with positive selection.
In the 1970s it was proposed that, anticipating a pathogen strategy of exploiting "holes" in the T cell repertoire that had been created by negative selection of freshly arising anti-self T cells, future hosts would, through positive selection, naturally establish repertoires poised for autoreactivity. Thus, following positive selection, peripheral T cells recognize, and are maintained through tonic stimulation by, "near-self" antigens. Individuals inheriting MHC risk alleles equilibrate nearer to the perilous anti-self "brink" than individuals inheriting non-risk alleles.
The wealth of fresh evidence on this, as provided by the authors, is interpreted as favouring the “central [thymic] hypothesis.” However, they agree that the “central hypothesis” and the “peripheral hypothesis” are non-exclusive. Indeed, their results provide important evidence supporting a combined central-peripheral hypothesis. This has recently been summarized (Forsdyke DR. Scand J Immunol. 2019; e12746).
On 2021-09-11 13:44:08, user Irl Smith wrote:
Arola et al. show that the incidence of myocarditis is in the vicinity of 140 per year per million boys aged 15 (in girls, and other boys, the incidence is roughly an order of magnitude smaller). By neglecting the prior probability of myocarditis in all persons, not just those being vaccinated, the authors render their conclusions completely untenable. In other words, while the risk of hospitalization from COVID in boys is arguably smaller than the risk from myocarditis, there is no evidence that vaccination status affects the myocarditis risk.
On 2021-09-12 01:57:32, user Swapnil Hiremath wrote:
The authors have undertaken an ambitious project: briefly, taking numerators from the VAERS database and denominators from vaccine numbers from elsewhere. They then perform a 'harm-benefit' analysis with COVID hospitalization as the only harm. The whole analysis is restricted to the 12-17 age group, in whom the concern of myocarditis is admittedly higher.<br /> They report a risk anywhere from 1.5 to 6.1 times higher for vaccine-associated myocarditis vs COVID-caused hospitalization. Vaccines must be bad, surely.
However, several problems quickly become apparent.<br /> 1. The rate of myocarditis is much higher than the one reported in Ontario: 160/million for 12-15 males compared to 72.5/million from Ontario (which includes Moderna as well, which has higher rates of myocarditis than Pfizer/BioNTech). Why would this be so? There are many possible reasons, with overestimation from VAERS being a probable cause. On a perusal of the supplement, there are many cases involving other viral illnesses which could be the actual cause; additionally, many descriptions are quite vague ('the doctor told us troponin was elevated'). It is very easy to submit cases to VAERS, so the numbers reported by the authors seem higher than the true value. The case ascertainment performed in Ontario seems more reliable and trustworthy than user-entered data in VAERS.
It was not clear why the authors chose Jan 1, when the vaccine EUA for 16-17-year-olds started in March, and for 12-15-year-olds in May. In their database there seems to be one case in March, with most of the VAERS reports from May or later.
Secondly, the authors make many assumptions about which children had comorbidities and which did not, and multiply numbers to come up with some crude estimates. It would be useful for a pediatric diseases researcher to assess these assumptions. The assumption that 40% of children were hospitalized 'with COVID' and not due to COVID is a very crude untruth that the authors and others have needlessly perpetuated on social media with little foundation.
Most importantly, the authors assume that hospitalization is the only bad outcome for children who develop COVID. 12-17-year-olds have died of COVID. Some developed MIS-C. Some developed longer-term sequelae. Grouping them all under 'hospitalization' seems overly simplistic. Similarly, from perusing some of the vaccine-myocarditis reports, many seem to have recovered with symptomatic care. The authors seem to be minimizing COVID and maximizing vaccine-associated adverse events.
It should be noted that the involvement of children in the first two waves seems to be different from what we have seen in the last 2 months with delta (for whatever reason, perhaps the lower immunization numbers in this age group).
Lastly, the pandemic is not yet done. Many more children are going to get COVID in the next few months and years. We are going to have many more hospitalizations, more morbidity and, sadly, many more deaths. There will be long-term morbidity and sequelae. We do need better data to assess the risks and benefits. This study is not it.
On 2021-09-13 10:33:24, user Max Sargeson wrote:
This is a useful study in terms of demonstrating the risks but tells us little about the causal etiology of post-vaccinal myocarditis.
Until recently I'd assumed it was coagulopathy-related, i.e., due to tiny clots or fibrin deposits in the myocardium. Others have suggested that these intramuscular mRNA injections result in the lipid nanoparticles used for delivery being pinocytosed by skeletal muscle cells (which would only be infected in the most advanced and unmanaged COVID cases, with significant viremia), and subsequently in the unusual presentation of spike protein antigens on muscle cells (rather than epithelial pneumocytes), thus promoting T-cell-mediated autoimmunity against cardiac muscle.
Are the markedly elevated troponin levels of affected boys compared to girls in the 12-15 age bracket (5.2 vs 0.8 ng/ml median) after the first dose evidence for one scenario over the other? I would appreciate if someone knowledgeable in immunology could offer comment, in the unlikely case that they see this.
On 2021-09-15 06:06:42, user Jakob Heitz wrote:
Why is the time frame for Covid hospitalizations of 120 days chosen and then compared with CAE events due to vaccination? Is it assumed that an individual will get vaccinated every 120 days?
On 2021-09-15 06:28:09, user Jakob Heitz wrote:
You equate a hospitalization due to Covid with hospitalization due to CAE from vaccination. Is it possible that the hospitalization due to CAE from vaccination was only for observation and only one day in length? Is it possible that a hospitalization due to Covid was due to serious illness?
On 2021-09-10 09:53:32, user Christopher McMaster wrote:
This paper uses an interpretation of VAERS data that is not supported by the literature or HHS. I suggest the authors familiarise themselves with the VAERS guidance (https://vaers.hhs.gov/data/dataguide.html) and avail themselves of the many publicly available educational materials on pharmacovigilance research (e.g. the Uppsala Monitoring Centre educational materials).
On 2021-09-11 19:31:00, user Rikk wrote:
CDC has done a good job in pinpointing COVID risk factors. Age and high BMI stood out. Other studies have confirmed that vitamin D deficiency increases the severity of disease. We can obviously ignore age in a study of children. But why is the correlation with BMI not included? It is easy to obtain. Vitamin D status should be included where available.<br /> Such a high-level view becomes crude, as individual variation in risk factors typically has a major impact. Only with refinement of the data can good conclusions be drawn. I think the work should strive to use the CDC-defined risk factors as much as possible, as an overlay to analyse the risk of myocarditis for each CDC-defined comorbidity, especially if the study is intended to guide any sort of intervention.
On 2021-09-15 14:55:56, user Geoff Bridges wrote:
There are many, many different types of PCR tests, all of which are very accurate at detecting SARS-CoV-2; even the original Drosten et al. test was quite accurate and has since been improved.<br /> The problem is that governments haven't requested the amplification cycle threshold (Ct) value of all positive tests, so we don't know whether the person tested is infectious or not. A low Ct value up to 20 is probably infectious, a mid Ct value of around 25 is possibly infectious, and a high Ct value of 25 to 45 is probably not infectious.<br /> A study in the US suggested that 85% to 90% of positive cases are not infectious.<br /> https://www.nytimes.com/2020/08/29/health/coronavirus-testing.html<br /> A further PCR or LF test should be done a day later, after self-isolating, on the 25-to-45 group to ascertain the trajectory of the virus in the person: to see if they are coming OUT of an infection and therefore not infectious, or going IN to an infection and therefore infectious.<br /> It is the lack of Ct information which is causing the "casedemic", NOT a fault with the PCR tests per se.<br /> The ONS dataset, Coronavirus (COVID-19) Infection Survey: technical data, https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/datasets/covid19infectionsurveytechnicaldata shows the Ct values for a random sample of people around the UK. If the upper limit for an infection were set at 25, then approximately 60% of cases would potentially not be infectious, which could be confirmed with a lateral flow test 24 hours later.<br /> This would give an accurate view of how many people are actually "infectious", whilst allowing those with positive test results who are "not infectious" to carry on with their employment and lives, and would avoid the problems of a "casedemic".
On 2021-09-16 03:49:51, user Truther wrote:
They only used the PCR test to determine previous infection, and the re-infection rate appears to be lower than the PCR test's false-positive rate. So the question is: has that been addressed in the quoted error?
On 2021-09-04 19:09:42, user Ben Veal wrote:
As a qualified statistician who has been doing this stuff for over 20 years and has worked on several medical studies, I think I ought to add my voice to the crowd.<br /> There may be a few things that aren't fully accounted for, such as the false-positive rate of PCR tests, or unbalanced populations due to deaths of highly vulnerable members of the pre-infected group, but these should not alter the conclusions much. As mentioned by others, the false-positive rate of PCR tests would bias the risk ratio downwards, not upwards, so we should expect the effect to be even stronger than reported.
As for the potential drop-out issue due to deaths of highly vulnerable people in the pre-infected group: this would only be a problem if there are unaccounted-for cofactors causing that high vulnerability. If so, we can approximately correct for the imbalance by estimating the number of deaths in the pre-infected group from the known infected mortality rate.<br /> I have done that calculation (see link below), and get a lower-bound estimate for the 95% confidence interval of [4.3, 11.23], which is still significant.<br /> However, it could make a big difference to the risk of hospitalization (again assuming there are important unaccounted-for cofactors).<br /> https://www.facebook.com/ec...
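For readers who want to reproduce this kind of bound, the standard log-method (Katz) confidence interval for a risk ratio can be computed as below. The counts are invented placeholders for illustration, not the study's numbers or the calculation behind the [4.3, 11.23] interval above:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio and its 95% CI via the standard log (Katz) method.

    events_a/n_a and events_b/n_b are events and group sizes for the
    two cohorts. Counts here are illustrative only.
    """
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical example: 240 infections among 16,000 vaccinated-only
# vs 20 among 16,000 previously infected.
rr, lo, hi = risk_ratio_ci(240, 16000, 20, 16000)
print(f"RR = {rr:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

Adjusting for drop-out, as the comment describes, amounts to inflating the denominator of the pre-infected group before computing the interval.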
Another criticism I have read in these comments is that they should have used a conditional model (https://en.wikipedia.org/wiki/Conditional_logistic_regression) to account for the matching. Actually, a conditional model is used when there is an unequal distribution of the treatment groups (pre-infected & vaccinated) within each stratum (age, gender, socio-economic status & geographic region) and you are unable to use covariates to control for this. But the matching they did ensures that this isn't the case. Furthermore, they control for all but one of the strata (geographic region) with covariates.
So overall I trust the conclusion: natural immunity from pre-infection is better than vaccination, but not as good as natural immunity plus vaccination.
This does not mean governments should put a halt to their vaccination programs since that's obviously going to result in more deaths among the vulnerable, but perhaps it might be wise to reduce the vaccination rate among the less vulnerable people (i.e. young healthy people) so that they can build up natural immunity and be better prepared to fend off new variants from spreading through the population. In fact it ought to now be possible to estimate the optimal proportions of vaccinated & unvaccinated that would result in the lowest risk of contagion spread, given that we can expect to see this virus reappearing every year.
On 2021-10-01 23:15:34, user Matthew Elvey wrote:
Sent to c19early.com: Please consider adding [link to this page] (saline -NeilMed/Navage).
On 2021-10-04 14:53:53, user Mark Purtle wrote:
This article only addresses one part of induced immunity, i.e. antibodies. There is no mention of cell-mediated immunity, which may be as important or more so in the long term. Not enough research has been done on that aspect of immunity thus far.
On 2021-10-04 18:45:17, user MUltan wrote:
A maternal cognitive ability measure would have been a much better predictor of infant cognitive ability than educational level, which is a rather poor proxy for mental ability. Having such a measure for both parents would be even better. Virtually all those with education beyond high school should have standardized test scores that would give a fair indication of mental ability. Given the clinical setting, it shouldn't be at all hard to obtain at least a Wonderlic or other brief cognitive ability measure for nearly all the mothers, which would be vastly better data.
The maternal stress questionnaire is also a very indirect measure; asking about time spent interacting with infants and the character of those interactions would have been much more informative, though these questions were not asked of the cohort prior to the pandemic, so the data would be hard to interpret, I suppose.
The paper seems to be trying to find a social environmental cause and neglecting the possibility that the mental performance decline could be due to an environmental toxin. The CV spike protein is by far the most plausible candidate for such a toxin, nothing else is sufficiently new and widespread to have such an effect size. From summaries of research I gather that the spike protein can be: toxic to blood vessel linings, cause clotting disorders, strokes, and low blood oxygen; can cross the blood-brain barrier and the placenta; is expressed in breast milk; and can sometimes cause various pathological immune reactions, including neurological damage in some cases. The spike protein levels will have been by far highest among vaccinated mothers, so comparing the mental performance of a cohort of infants who were gestating or breastfeeding when their mothers received an mRNA CV vaccine to a contemporaneous cohort of infants whose mothers were not CV-vaccinated (and preferably uninfected as determined by antibody testing) should clearly resolve whether the CV spike protein itself is the culprit for lower infant mental performance, or rather other, primarily social factors.
On 2021-08-31 22:18:33, user Timmy Tester wrote:
Why would you use hypothetical modeling data to predict results? We have kids in school now, some with mask mandates and some without, in the US, Europe and beyond. Why wouldn't you look at real-world data on actual kids in school? If you create a model that shows more COVID spread with no masks, the result is inevitable and not very scientifically valid.
On 2021-09-03 18:55:26, user Sean Rolnick wrote:
The subjects tested should be separated by blood type and by ancestry: where did these people come from, and which ancestral populations do they derive from? This could have a lot of bearing on this study, since the subjects may have immune systems that work in different ways than has been discovered previously.
On 2021-09-10 01:38:54, user Tanner wrote:
A limitation to consider: A control and experimental cohort of "unvaccinated" and "Vaccinated" does not take into account a large population of previously infected individuals. This would likely have a large impact on the infection rate of both the vaccinated and unvaccinated cohorts and help guide current policies being passed.
On 2021-09-04 06:31:10, user Philological wrote:
In this version of the paper the “U”-shaped response curve between COVID vaccine hesitancy and education level is still mentioned. It is clear from the updated statements in this version that the data set was fatally compromised by respondents who falsely linked PhD education with vaccine hesitancy. This resulted in an avalanche of anti-vaccination invective in social media and online news media, in many cases justifying COVID vaccine rejection based on the relevant findings in the paper. All references to PhDs should be removed.
On 2021-09-04 23:26:57, user SomeGuyWithAWatch wrote:
+1 for recognizing Long Covid as neuro-covid.
On 2021-09-08 03:52:00, user Matt Lee wrote:
It would be informative to see the disease outcome comparison after removing patients from the study with acceptable exclusion criteria; an active immunocompromising condition or recent immunosuppressive therapy was used by Pfizer in their clinical trials. In addition, adjustment of the data for comorbidities would make the data more clinically meaningful.
Because the two treatment groups could not be controlled for comparable rates of comorbidities, it may make more sense to remove such patients from the comparison. It is unfortunate for the data analysis that 21.5% vs 7% of the unvaccinated and vaccinated, respectively, had diabetes, a notable comorbidity for COVID-19. Only a subset of the Charlson Comorbidity Index categories was evaluated in this study. Just as Pfizer showed the number of participants with any Charlson comorbidity for each treatment group in Table S2 of the 6-month outcome study, https://www.medrxiv.org/con..., such information added to Table 1 would be a valuable addition.
These data do not rule out the possibility that the differences in disease outcome between vaccinated and unvaccinated could be skewed by the higher percentage with comorbidities in the unvaccinated group.
The differences in pneumonia in 53% vs 22% and in suppl. O2 required in 21% vs 3% in unvaccinated vs. vaccinated, respectively, may or may not still be statistically significant in the subset of patients from this study without any Charlson comorbidity.
On 2021-09-08 14:41:41, user Sherri Christian wrote:
Can you please provide details on the HD population (I assume HD stands for healthy donor)? It doesn't appear that CD24Fc treated patients were compared directly with HD. This is an important comparison, in my opinion.
On 2021-09-10 09:16:10, user Wolfgang Birkfellner wrote:
I posted this comment with a few questions on my side under the wrong paper initially ... so here it is again:
I am afraid that the statistical model of using a linear regression on exponential data is not fully adequate here.
First, using the logarithm of the antibody level introduces a bias. Think of the specimens that have zero antibodies: after taking the logarithm, the value for these is -infinity, which renders every effort to determine a regression line totally useless. It is therefore not surprising that, for instance, the model's predicted antibody level at t0 is quite off: 6366 for the vaccinated specimens, whereas the mean is found to be 12153 and the median 9913 according to Table 2a. I know that linear regression on logarithmic data is a common method, but it has its pitfalls.
Second, the data do not follow a Gaussian distribution (look at the mean and the median in Tables 2a and 2b), and apparently at least the median for the convalescent specimens does not even follow a simple exponential decay model; in Table 2b, we see a rise of the median antibody titer from 490 (t0) to 586 (t1).
Third, it is somewhat disturbing that in Table 2b, the IQR for the median titer of the convalescent patients at t6 is given as [140-8301]: the third quartile is ten times higher than the values at the other timepoints.
What I do see from the data is that even six months after vaccination, the median antibody level of the vaccinated patients (447) is higher than that of the convalescent patients (314). There is an indication that the titer might fall off more rapidly for the vaccinated cohort, but given the data as presented in the paper, I consider this conclusion a bold one.
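The zero-titer problem described in the first point above is easy to demonstrate. This is a minimal sketch with invented titers, not the paper's data; the detection limit of 50 is an assumed assay floor, and treating zeros as left-censored at that floor is only one of several possible fixes:

```python
import numpy as np

# Toy antibody titers; one specimen is seronegative (0).
# Invented numbers for illustration, not the paper's measurements.
titers = np.array([12000.0, 9900.0, 4500.0, 600.0, 0.0])

# Naive log transform: the zero becomes -inf, so any ordinary
# least-squares fit on these values is meaningless.
naive = np.log(titers)

# Workaround 1: treat zeros as left-censored at the assay's
# detection limit (assumed 50 here) before taking the log.
detection_limit = 50.0
censored = np.log(np.maximum(titers, detection_limit))

# Workaround 2: log1p offset. Simple, but it changes the model
# being fit and biases estimates near zero.
offset = np.log1p(titers)

print(naive)
print(censored)
print(offset)
```

Either workaround yields finite values a regression can use, but each fits a slightly different model, which is exactly the kind of pitfall the comment warns about.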
On 2024-01-19 09:50:31, user Dr. Hans-Joachim Kremer wrote:
I largely agree with William Bond. It is fair enough to show the mean (instead of the median) and SD in Table 1, but you definitely misplaced the estimates.
It is also a good idea to show subset analyses by age cohort. To retain sufficient power, I would recommend (A) confining this to the W12 data, and (B) using three age cohorts: <50 (assumed to be healthy), 50-59 (in between) and >60 (assumed to be less healthy).
You claimed to have performed multivariate logistic regression. OK. I would then expect a clear listing of the variables adjusted for, and the attribute "adjusted" wherever applicable in Table 3.
Then it would be nice to have, at least for W12 data, the unadjusted OR. The same for the 3 age cohorts suggested above.
On 2024-04-27 20:35:31, user TK wrote:
Wow! Long-overdue research on an important and debilitating condition. Huge thanks to all the researchers investigating!!
On 2024-04-27 20:39:23, user David Lockyer wrote:
It's very encouraging to read of some focused research being done into TSW. For those suffering It's really important that a separate diagnostic category is identified so that the condition can be treated appropriately. I look forward to seeing this peer reviewed and published.
On 2024-05-02 14:51:48, user Kathy Tullos wrote:
TSW research is imperative to validating (or invalidating) the patient experience. As it stands, the patient experience is invalidated by the medical community based on no research: "to date there have been no systematic clinical or mechanistic studies to distinguish TSW from other eczematous disorders." To confidently tell patients something is all in their heads while doing no research to prove or disprove that stance is truly unethical. Patients numbering in the thousands have been reporting these adverse effects for the past 10 years - including a cluster of new symptoms never experienced prior to topical steroid treatment - and it is roundly dismissed. "Steroid phobia" is thought to be the culprit. The finger is pointed at the patient for underusing, overusing, or going off treatment. The only reason patients go off this treatment is because the symptoms have escalated so extremely during treatment, there is nothing left to do but see if the treatment was the problem. Of course there is an acute phase after cessation of withdrawal. But after a protracted withdrawal phase, there is improvement. We see this pattern time and again in the patient community. All anecdotal of course. There has been no intellectual curiosity from the scientific medical community to drill down and see if there is validity or an explanation. That is why this study, and future studies that it will spark, is so very important. Thank you for listening to reports and for trying to understand and fix the problem. We need to connect patients with doctors and we can not do this if patients' reports are not believed. Doctors only really listen to other doctors. Another reason we need more research like this.
On 2024-05-19 04:01:18, user Natalie wrote:
Thank you so much for this research! I’m 20 months into TSW and so excited to see research like this finally being done. I would love to see this peer reviewed and officially published so that it is able to gain a wider audience and reach more of the patients and practitioners who desperately need to be made aware of this important information.
On 2024-05-01 23:32:45, user ppgardne wrote:
This is an excellent paper, showing a clear association between variation in RNU4-2 and NDD phenotypes. The enrichment of variation in the gene between undiagnosed NDD and population cohorts was remarkable.
I thought there were a few areas where the manuscript could be improved slightly.
* Figure 1: Clearly define the measures “genotype quality”, “allele balance” and “total coverage”. We can infer what these mean, but definitions of each in the method section would be helpful.
* Table 1: I spent some time gathering the population sizes for each of the count columns. Please add an extra row or two, giving the number of individuals in GEL NDD, Non-GEL NDD and the population cohort.
* The statement “Humans have multiple genes that encode the U4 snRNA, although only two of these, RNU4-2 and RNU4-1, are highly expressed in the human brain” is slightly inaccurate. The HGNC database and reference (https://doi.org/10.15252/embj.2019103777) list just those two functional copies of U4 in the human genome. There are ~100 annotated pseudogenes, however.
* You state that there is “97.2% homology” between RNU4-1 & RNU4-2 – this is a wrong (but common) use of the term homology. You should have stated “similarity” instead.
* Figure 3: I understand that the BrainVar RNAseq data are from samples of human dorsolateral prefrontal cortex. This should be stated in the caption.
* Figure 3: you state that “expression of RNU4-1 & 2 is tightly correlated”. Looking at the figure, it appears the tissues with higher expression are also the ones where more samples were taken. Was the potential confounding of sample depth and/or leverage accounted for in the analysis?
* Figure 4: it is unclear what this heatmap is showing. Is it really normalised on a per-gene basis, or is the null for SNV densities derived from the 1,000 random intergenic sequences mentioned in the methods? The latter would seem a more useful measure of variant enrichment or paucity. The ordering of the sequences is odd too: why are the paralogous genes U4/U4ATAC, U1/U11, U2/U12, U5 etc. not next to each other? Surely the paralogs are more comparable. What is the justification for an 18 bp window, other than that being the size of the variable region in RNU4-2?
* The recurrence of n.64_65insT is fascinating. And speculation on the mechanism is very worthwhile. You mention early in the manuscript the possibility of slippage in homopolymer regions, but this is not mentioned again in the appropriate section. You mention local secondary structure as a possible driver, but there seems to be very little evidence to support this based on free energy modelling.
On 2024-05-02 18:05:18, user Keith Robison wrote:
Using mutagenesis of DNA to break up repeats in sequencing has an intellectual history that is not captured in this version and really should be. For example, "Unlocking hidden genomic sequence" (NAR, 2004) and "Facilitated sequence counting and assembly by template mutagenesis" (PNAS, 2014) represent two different research groups; there may be others.
On 2024-05-25 16:10:01, user Mark wrote:
Writing that the "simulation demonstrates that repeated boosters, given every few months, are needed to maintain this misleading impression of efficacy" (in their abstract) the authors build upon the assumption that (fully) vaccinated persons are miscategorized as "unvaccinated" for some period of time after they've received a repeated ‘booster’ vaccination.
I wonder if there is any example, research study or country which actually proceeded this way ...
On 2024-06-21 21:19:36, user WILLY CESAR RAMOS MUÑOZ wrote:
Published in BMC Cancer: https://bmccancer.biomedcen...
On 2024-07-04 09:09:56, user Rohit Satyam wrote:
I was wondering if you can also provide the major/minor sublineage assignment the authors obtained for the case studies included in the paper as a supplementary file.
On 2024-07-31 12:40:06, user David Curtis wrote:
The paper presents these findings as if they were novel but in fact the main result, an association of ITSN1 ptvs with Parkinson's, was published on the AstraZeneca PheWAS portal years ago: https://azphewas.com/geneView/ba08a93f-501e-44e6-a332-98ce2f852279/ITSN1/glr/binary The current paper does cite the PheWAS publication but without making it clear that the central results have previously been reported. What the current paper seems to do is to confirm the association in a new sample and an animal model but most readers would be unaware that the main evidence for association represents one finding from the previously reported PheWAS. Failing to mention that the results were obtained as part of the PheWAS is misleading because there were over 18,000 phenotypes tested. Without knowing this, the association results appear to be more strongly statistically significant than they actually are. In fact, correcting for the number of phenotypes tested as well as the number of genes and models tested would mean that the primary results at least would not be regarded as statistically significant. All these issues should be properly discussed.
On 2024-08-16 10:09:21, user Unclaimed wrote:
Article published in Nature: doi 10.1038/s41586-024-07769-3
On 2024-10-04 13:44:45, user Pablo wrote:
Very interesting work! Our recent work ( https://www.thelancet.com/journals/landig/article/PIIS2589-7500(24)00099-2/fulltext ) also explores a pathogen-agnostic mobility network to address the problem of sentinel site allocation in Brazil. How do you think this sort of approach could be translated into implementation policies in countries such as ours?
On 2024-10-15 08:51:45, user Philippe Arvers wrote:
In France too, according to the BEH (No. 1, January 2021):<br /> https://beh.santepubliquefrance.fr/beh/2021/1/2021_1_1.html <br /> Smoking cessation without any help is the most common, followed by #vaping and then #TSN
On 2024-10-22 02:23:08, user Olivia Piraino wrote:
I really enjoyed reading your paper. This study shows that, when it comes to identifying duration-response correlations and determining the minimum effective duration (MED) in phase II trials, model-based techniques like MCP-Mod and FP1 consistently outperform traditional qualitative methods like the Dunnett test. Because these model-based techniques use flexible statistical models, they reduce bias and variation and are more accurate in estimating duration-response curves and the MED. But the study also points out drawbacks, such as the possibility of underestimating the MED with small sample sizes, which raises the risk of bias and variability. Although model-based methods are more precise, their practical application may be limited by their complexity and the need for careful control of confidence intervals.
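The core idea behind model-based MED estimation can be illustrated with a toy fit of a single Emax-shaped duration-response curve. This is only a sketch of the general approach, not the paper's MCP-Mod procedure: the data points, the clinically relevant delta of 0.3, and the choice of a single Emax candidate model are all invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy duration-response data: weeks of treatment vs. response rate.
# Invented numbers for illustration, not from the paper.
durations = np.array([0, 2, 4, 8, 12, 16], dtype=float)
response = np.array([0.05, 0.22, 0.30, 0.38, 0.43, 0.45])

def emax(d, e0, em, ed50):
    """Emax model, one of the candidate shapes MCP-Mod typically uses."""
    return e0 + em * d / (ed50 + d)

(e0, em, ed50), _ = curve_fit(emax, durations, response,
                              p0=[0.05, 0.5, 4.0])

# Minimum effective duration: shortest duration whose modelled effect
# over placebo exceeds a clinically relevant delta (assumed 0.3 here).
# Solving emax(d) - e0 = delta gives d = ed50 * delta / (em - delta).
delta = 0.3
med = ed50 * delta / (em - delta)
print(f"estimated MED ~ {med:.1f} weeks")
```

Unlike a Dunnett-style pairwise test, which can only pick from the durations actually randomised, the fitted curve lets the MED fall between study arms, which is where the precision gain the study reports comes from.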
After reading your paper, I wonder if this approach would work for other long-term treatments for diseases like HIV. Also, how would these model-based approaches perform using real-world medical patient data, which often includes complex medical conditions, comorbidities, and variations in patient adherence compared to the controlled clinical trial environment? Do you think this will enhance model flexibility or create more challenges?
Overall, I enjoyed your pre-print and look forward to seeing more of your work in the future.
On 2024-10-31 12:46:20, user None wrote:
This has now been published in HGG Advances: https://doi.org/10.1016/j.xhgg.2024.100375
On 2024-11-30 22:32:43, user xPeer wrote:
Summary<br /> The preprint investigates the remodeling effects of icosapent ethyl (IPE) supplementation on plasma lipoproteins and its subsequent impact on cardiovascular disease (CVD) risk markers in normolipidemic individuals. The study finds that IPE supplementation significantly enhances eicosapentaenoic acid (EPA) levels in the plasma, reducing major CVD risk markers such as triglycerides, remnant cholesterol, and apoB levels. There are consistent alterations across all lipoprotein classes, influencing their lipidomes, reducing proteoglycan binding properties, and potentially decreasing the atherosclerotic risk. However, the study's small sample size and short duration limit the generalizability of findings.
Example: Expand the cohort size and extend the duration to assess long-term impacts and variability of EPA incorporation among different CVD risk groups (Discussion, Page 14).
Detailed Mechanistic Insights:<br /> The precise mechanisms by which IPE alters lipoprotein characteristics and its direct influence on cardiovascular outcomes remain unclear.
Example: Detailed mechanistic studies on how IPE-induced lipid species changes relate to atherosclerosis progression are needed (Results, Page 11).
Individual Variability Analysis:<br /> The study underscores substantial interindividual variability in response to IPE supplementation, calling for personalized treatment approaches.
Example: Investigate genomic or lifestyle factors contributing to variability in response to IPE (Results, Page 13).
Proteoglycan Binding and Aggregation:<br /> The study notes a reduction in proteoglycan binding and differing responses in LDL aggregation among participants, but lacks detailed analysis.
Specific errors include inconsistent capitalization in headings and figure labels requiring standardization (Introduction, Page 2; Results, Page 8).
Overall, the preprint presents insightful preliminary findings on the cardioprotective impacts of IPE supplementation, recommending essential improvements and comprehensive validations for future extensive studies.
On 2024-12-09 20:43:18, user Louis El Khoury wrote:
Usually, methylation changes in response to an environmental factor are slow. How is it possible that there is enough methylation change during the course of a single match to reduce the epigenetic age, which then returns to baseline 24 hours later? This is not made clear in the discussion section.
On 2025-02-12 19:57:09, user Aron Troen wrote:
Review Part I: Overview
Careful, comprehensive, and accurate evaluation of the emergency food supply available to conflict affected populations is crucial for the design and implementation of an effective humanitarian response in any war.
This study claims to model the caloric content and diversity of the food delivered to the Gaza enclave from October 2023 through August 2024 of the current war, and asks whether it was sufficient to provide for the needs of Gaza’s population.
To do so, the researchers construct a “retrospective model” of the per-capita calorie supply over time incorporating:
Unfortunately, the study suffers from fundamental flaws which invalidate its findings and conclusions.
In any model, simulations depend heavily on the validity of the selected data and of each of the model’s assumptions. This study makes multiple assumptions and relies heavily on data from the UNRWA dashboard, which the authors and UNRWA acknowledge to be incomplete and whose reliability is controversial. Notably, the UN data do not fully cover private-sector food delivery, which comprises a large proportion (up to 40%) of the total available food supply. The study does not make a serious effort to analyze additional data from COGAT that includes more complete coverage of the food supplied to Gaza. Of the UNRWA data analyzed, the researchers assign food weights to pallets that underestimate the weight of food provided by as much as half (!) according to publicly available UN food supply requirements. These and other significant limitations, detailed below, are enough to raise serious concerns about the validity of the findings and to limit the conclusions that may be reliably drawn from them.
However, an even more basic question must be asked: Why simulate or model the calorie supply, with all the uncertainty that the model’s multiple assumptions introduce into the findings, if the available energy can be simply calculated from the reported weight and type of foods supplied to Gaza, which can then be compared to the humanitarian standards for the energy requirement of emergency-affected populations?
Some of the limitations of the data and the uncertainty of the results are listed by the authors. However, merely acknowledging limitations is not sufficient to justify overreach in the discussion of the results and their policy implications. In their conclusions, the authors suggest that their study provides valid and useful evidence for a “forensic analysis” of claims that Israel has deliberately starved Gaza’s population, concluding that “Israel, as the de facto occupying power, did not ensure that sufficient food was consistently available to the population of Gaza…”. They further state that their findings will be used to estimate the “resulting effect on nutritional outcomes among Gazan children”. These conclusions are not supported by the findings and appear to reflect political motivation and bias. Indeed, contrary to the portrayal of the results, it is remarkable that the model shows that the overall caloric supply to the emergency-affected population of Gaza was adequate during the majority of the period analyzed, despite a brief shortfall, even with intense combat between Israel and Hamas, and despite the limitations of the model’s questionable assumptions and data.
Presenting simulations with greater certainty than they merit can be harmful. Past simulations made by the authors about the war in Gaza have proven erroneous: for example, in February they projected that total deaths from the conflict would reach between 58,260 and 85,750 by August, whereas even the problematic Gaza MOH (Hamas) eventually reported a significantly lower number, 39,623, for the same period (see for example: https://gaza-projections.org/; https://www.washingtoninstitute.org/policy-analysis/gaza-fatality-data-has-become-completely-unreliable; https://henryjacksonsociety.org/publications/questionable-counting/ ). The gap between the authors’ past projections and the available information ought to have given them pause before publishing highly consequential political conclusions from tentative simulations. The gravity of the crisis is severe enough without magnifying the uncertainty surrounding the available data. For a discussion of the harms of conflating simulated projections with reality, see for example Beyar R, Skorecki K. Concerns regarding Gaza mortality estimates. Lancet. 2024 Nov 16;404(10466):1925-1927. doi: 10.1016/S0140-6736(24)01683-0. More recently, a US State Department statement reprimanded the irresponsible exaggeration of the food crisis by one of the key international humanitarian NGOs that provides data to the IPC: “At a time when inaccurate information is causing confusion and accusations, it is irresponsible to issue a report like this. We work day and night with the UN and our Israeli partners to meet humanitarian needs — which are great — and relying on inaccurate data is irresponsible.” ( https://il.usembassy.gov/statement-from-u-s-ambassador-jacob-lew-on-fews-net-report/ ).
Instead of providing clarity based on credible and verifiable research and analysis, this exercise is used for political advocacy, belittling the very serious challenges faced by Israel, humanitarian agencies and the private sector, who collectively have supplied massive quantities of food to the emergency-affected population of Gaza despite the intense and ongoing war. It is always difficult to obtain accurate information during a war.
Real-time projections that recognize the inevitably incomplete data (beyond lip-service), with carefully stipulated assumptions and caveats, can be useful to inform prospective decision-making and humanitarian efforts in the face of uncertainty. In contrast, “retrospective modelling” based on blatantly cherry-picked data, questionable assumptions, and presenting simulated outcomes as truth to reach politically charged conclusions does not advance scholarly discourse, and has pernicious real-world consequences.
Comments on the Introduction<br /> The objectives of the study are not explicitly stated in the introduction. While the authors’ justified dismay over the humanitarian crisis in Gaza and their aim of assessing the food availability of Gaza is clear, the framing of the introduction (as well as the discussion and conclusions of the paper) is selective and tendentious, leaving the impression that rather than evaluating the food supply to Gaza during an intense conflict in order to provide valid scientific insight for improving the humanitarian response, the study is an exercise in political and ideological advocacy under the facade of academic research and analysis.
The highly selective introduction obscures more than it illuminates. It begins by asserting that “the population of the Gaza Strip has experienced seven decades of protracted conflict”. These seventy years (!) conflate fundamental historical transformations: the period of Egyptian control until the 1967 war, in which Israel occupied Gaza and the West Bank; the October 1973 war; the first Palestinian intifada (1987-1991); the Oslo Accords (1993), under which the Palestinian Authority was created and assumed control over Gaza (1994-2006); the second intifada (2000-2004) and Israel’s full unilateral withdrawal from Gaza (2005); the violent Hamas takeover in 2007; thousands of rockets launched at Israel and the ensuing small wars; and Hamas’s construction of a vast underground military complex under Gaza. Reducing this long and complex history to a simple story of protracted conflict and implied victimization elides complex dimensions, including rapid population growth from ~250,000 Gazans in 1950 to ~2.2 million in 2023, major improvements in health and nutrition achieved through cooperation between Palestinian and Israeli health professionals, and significant economic, social and political developments (for example: “Health in the occupied Palestinian territories”; Tulchinsky, Ted H et al. (2009) The Lancet, Volume 373, Issue 9678, 1843).
Mention of “70 years of conflict” is followed in the same breath by “16 years of enforced restrictions on trade and the movement of people and goods, including food [1]”. The reference given for this statement, authored by the UN Conference on Trade and Development (UNCTAD) in January 2024, is a preliminary analysis of the impact of the current war on the destruction in Gaza. It does not mention restrictions on food. On the contrary, it refers to the massive provision of (food) aid to Gaza by the international community. Moreover, there is no mention that the 16 years of restrictions on Gaza were a response to the election of Hamas, a jihadist terror organization not only dedicated to the destruction of Israel, but also at odds with the PLO-led Palestinian Authority, which it violently overthrew in Gaza in 2006-2007. There is also no mention that during the 16 years since seizing power, Hamas instigated recurring wars against Israel in 2008-2009, 2012, 2014, 2021, and finally in October 2023. This glaring omission leaves the impression that the restrictions on Gaza were arbitrary.
Hamas is only mentioned in a passing reference to “the 7 October Hamas attacks”, which serves as a point of departure for describing the massive destruction and harm inflicted on Gaza by Israel. There is no mention anywhere in the article of the responsibility of the Hamas government in Gaza for the consequences of its failed governance for its own civilians’ welfare (https://www.nytimes.com/2024/09/13/us/politics/hamas-power-gaza-violence-israel.html). The absence of central details of the attack on Israel, which continued long after October 7th – over 1200 people brutally murdered and mutilated, 255 abducted, as well as the parallel bombardment of millions of Israelis with thousands of rockets and missiles – is a remarkable omission and reflects the biased political approach. Similarly, in framing the Israeli response as “large-scale aerial bombing and ground operations,” there is conspicuously no reference at all to the dilemmas posed by Hamas’s strategy of (ab)using the civilian population under its control as human shields, the hostages held by Hamas, rocket launchers, and an estimated 500 kilometres of underground military infrastructure constructed by Hamas under hospitals, schools, mosques, residences and agricultural areas in Gaza (https://mwi.westpoint.edu/gazas-underground-hamass-entire-politico-military-strategy-rests-on-its-tunnels/). In artificially removing this core information from the framing of the article, the rationale of Israel’s response and strategy in seeking to disarm Hamas is also erased, preventing a credible analysis of this complex tragedy, including its impact on food availability.
The introduction proceeds to provide fatality figures in politically salient terms: "Israel has conducted large-scale aerial bombing and ground operations in Gaza, resulting in at least 41,272 deaths". The citation of a UN source for this figure creates the misleading perception that these claims come from a neutral source and were verified by the UN. However, OCHA cites these numbers with the disclaimer: "according to figures of Gaza's Hamas-run Ministry of Health, which have not been independently verified and may include Palestinian combatants who were killed." Notably, the authors fail to mention the IDF estimates of 17-20,000 combatants killed during this period; with a natural death rate of ~5,500 people per year, the civilian death toll is lower than implied, although terrible enough without need for inflation.
In the second and third paragraphs, the introduction does provide background describing the baseline nutritional status of Gaza’s population and the reported impact of the war. However, many of the statistics cite UN reports which are not always verifiable or impartial, and the presentation is selective, uncritical, and at times inaccurate. For example, the introduction states on p2, lines 26-29, that "by December 2023, those who remained [in North Gaza and Gaza City governorates] appeared largely cut off from aid", because "the UN Relief and Works Agency for Palestine Refugees (UNRWA) last delivered food to the north on 23 January 2024, being then barred from further deliveries, while the UN World Food Programme (WFP) ceased its food convoy operations to the north on 20 January [21], only resuming these on a limited basis in March." This implies that between 23 January and sometime in March no food was supplied to the two northern governorates, when in fact COGAT reports private-sector delivery of at least 150 food trucks to the north in this period (https://gaza-aid-data.gov.il/media/qtvbs5u0/humanitarian-situation-in-gaza-cogat-assessment-mar-15.pdf).
The introduction places the onus for all food scarcity on Israel, asserting for example that “Israel has placed enhanced restrictions on aid flows and distributions, closing all but two southern crossing points into Gaza up to May 2024 and rejecting multiple consignments for ostensible security reasons [18]." This arguably misrepresents the complex and objectively challenging situation, including attacks, looting and hoarding of aid by Hamas, and omits the well-documented controversy and contrary evidence. Furthermore, the authors fail to mention that Erez crossing was destroyed by Hamas terrorists during the October 7th attack on Israeli borders and that this is the reason it was closed. Moreover, prior to the war, Erez was a pedestrian crossing, and extensive work by Israel in collaboration with the US, Jordan and international agencies, allowed its reconstruction and opening in April 2024 as a truck crossing.
On the specifics of food supply, the introduction cites IPC projections issued in December 2023 and March 2024, but ignores the FRC report published on June 4 (https://www.ipcinfo.org/fileadmin/user_upload/ipcinfo/docs/documents/IPC_Famine_Review_Committee_Report_FEWS_NET_Gaza_4June2024.pdf), acknowledging that the previous analyses were based on significant undercounting of the amount of aid. <br /> Furthermore, the authors fail to note that IPC reports are intended to sound the alarm and mobilize international action to prevent famine before it occurs, because once it occurs, it is often too late to save lives of those acutely affected. Despite the institutional processes designed to obtain political and technical consensus, such reports are often based on inevitably flawed and limited data from actors involved in the conflict. Given the contentious nature of the war in Gaza, projections made by the IPC and others have often been conflated with the actual situation, and abused to advance political agendas. [See for example: GM Steinberg and LD Klaff, “Politicization of Tragedy: The Case of the Gaza Conflict and Food Aid” in The American Journal of Clinical Nutrition 120 (2024) pp. 749-750; a critique of the reports by Caner, INSS special publication July 2024 (https://www.inss.org.il/wp-content/uploads/2024/07/special-publication-240724-1.pdf); and by the Israel Ministry of Foreign Affairs: https://www.gov.il/en/pages/transparency-and-methodology-issues-in-the-ipc-special-brief-of-18-march-2024 and https://www.gov.il/en/pages/the-third-ipc-report-on-gaza-june-2024-3-sep-2024 ]. Unfortunately, this study echoes the tendentious discourse. Examples of its selective and misleading use of the IPC reports include:
• "In December 2023 the Integrated Food Security Phase Classification (IPC)… classified 25% of the population in the northern governorates as experiencing catastrophic acute food insecurity, updating this projection to 55% in March 2024": Firstly, it is misleading to compare the "current" classification in Phase 5 in December (25%) with the projected classification in March (55%, although it was 50% in the actual report). The "current" classification in the March report was 30%. Secondly, and much more problematic, the article doesn't refer to the IPC reports which covered the period from March to September (published in June and October), which pointed to a steady decline in the population classified in Phase 5, to 15% in June and 6% in September-October.<br /> • "In March 2024 Oxfam claimed that the population in northern Gaza had only 245 Kcal per person-day available": apart from the reference to March, the press release cited here does not meet basic academic standards (https://www.oxfam.org/en/press-releases/people-northern-gaza-forced-survive-245-calories-day-less-can-beans-oxfam). Although it says that "Oxfam’s analysis is based on the latest available data used in the recent Integrated Food Security Phase Classification (IPC) analysis for the Gaza Strip.", it seems to refer to a graph on page 8 of the March 18th report presenting similar numbers for northern Gaza, yet no source is given for that graph, nor is it clear who conducted the analysis, based on which data, and using which methodology. The IPC report only describes the study in vague terms: "An in-depth analysis of the border crossing manifest allowed to generate approximate kilocalories values per truck and per unit of analysis then distributed per area, using information provided by OCHA and the Food Security Sector."
It should be noted that following criticism from Israel of this improper conduct, which violated the IPC's standards of transparency, the subsequent IPC reports on Gaza omit any caloric analyses of aid. The 245 Kcal per person-day is about a quarter of the lowest figure for northern Gaza in this article (1000 Kcal), which only highlights that the Oxfam analysis is detached from reality and not worthy of being cited. This value is contrasted with “Israeli academics, working with data from the Israeli Ministry of Defence’s Coordination of Government Activities in the Territories (COGAT) agency, put this figure at 3160 for all of Gaza during January-April 2024 [25] (p2. l41).” The citation is out of date: a revised study assessing the food supply for the period of January-July 2024 is in press. The nationality of the authors of the cited research ought to be irrelevant.
• "Since May 2024, the re-opening of crossings into northern Gaza and increased food deliveries appeared to mitigate food insecurity, though the IPC projected that 22% of Gaza would remain in catastrophic food insecurity conditions between June and September": However, the authors of this article downplay this acknowledgment of the improvement by citing a reference to a projection which proved drastically wrong. While the IPC report from June projected 22% in phase 5 in September, the IPC report published in October found that the actual share in September was 6%. However, the article does conclude that "a steep increase in food availability occurred from late April 2024, coinciding with the reopening of crossings into northern Gaza, and by June acute malnutrition prevalence appeared to be relatively low, despite very limited dietary diversity." Thus, based on the authors’ inclusion of this data, their reference to the 22% should be removed and replaced by the actual decline to 6% as reported in the October IPC report.
• "the consumer price index for food rising from 210 pre-war to 600 by March 2024", citing the WFP's unofficial calculations. While it is true that according to the official statistics from the Palestinian Central Bureau of Statistics the price index for food nearly tripled from September 2023 to March 2024 following the outbreak of the war, the index subsequently decreased by 28 percent, from 332.70 to 240.01, as the food supply improved during the analysis period (https://data.humdata.org/dataset/state-of-palestine-consumer-price-index).<br /> • An analysis of the June IPC report by the Israel Ministry of Foreign Affairs highlights several positive trends in the IPC's main outcome indicators between March and July (https://www.gov.il/en/pages/the-third-ipc-report-on-gaza-june-2024-3-sep-2024). These positive trends reflect the impact of the humanitarian efforts which are analyzed in this study and which should not be ignored.<br /> If the purpose of the paper is to contribute to an understanding of how to fix the problem rather than to assign blame, then the framing of the introduction and subsequent discussion ought to recognize that Hamas exercises agency and has made decisions that have contributed to the plight of the Gazan population whom it governs, including with regard to the nutritional aspect of the humanitarian crisis. A more balanced study could further understanding and foster cooperation instead of inflaming controversy. This would help address the present crisis and advance future rehabilitation.
In short, the introduction (and the rest of the paper) should present a balanced account of the knowns and unknowns regarding the present food security crisis, the challenges of obtaining valid and verifiable data, which also plagues the current analysis, and the need for clarity, specifically with regard to the adequacy of the international humanitarian effort in supplying food to the emergency-affected population.
On 2025-02-13 03:06:12, user Metin Çinaroglu wrote:
Update on Manuscript Status
This manuscript was initially preprinted as part of its submission to another journal. Following substantial revisions, including the removal of one author (with consent) and significant modifications to the manuscript, it was subsequently resubmitted and accepted for publication in BMC Public Health. It is now in the process of publication.
Since medRxiv does not allow withdrawals, we would like to note that this preprint does not fully reflect the final published version. Readers are encouraged to refer to the forthcoming article in BMC Public Health for the most updated and peer-reviewed version. Once available, we will provide the DOI for the published article.
For transparency, we acknowledge the differences between this preprint and the final published manuscript and appreciate the understanding of the research community.
On 2025-02-24 23:42:40, user Stephen Goldstein wrote:
Manuscript summary
The authors report a small study comparing patients with “post-vaccination syndrome” or “PVS” with vaccinated, healthy controls. They used a variety of immunological techniques and report they have identified potential immune signatures in PVS patients, which may reflect an underlying mechanism of this condition.
Personal disclaimer
This manuscript has received considerable attention and attracted much commentary, including critical commentary from myself on twitter (@stgoldst). I was immediately skeptical of these findings given the attention paid to them, the small study size, and the amplification by anti-vaccine activists. However, the potential for vaccine injury is a serious matter, so a rigorous review of this manuscript is a critical need. I attempt here to account for my biases, and as a check I used a Google AI model to conduct an orthogonal review. That is posted separately.
Review
Overview
The study described by this manuscript is methodologically flawed to a degree that undermines the authors’ stated goal of identifying biomarkers for post-vaccination syndrome (PVS). These flaws are systematic, ingrained in the study design, and compounded by analytic flaws throughout the manuscript. As it stands, this study provides weakly informative data at best towards understanding chronic illness following vaccination. The methodological flaws are listed below and subsequently expanded upon.
The study provides no evidence for a causal link
PVS and control cohorts are very small, and even smaller when stratified by infection status.
The PVS cohort comprised only 44 patients originally, and was reduced to 39 due to pharmacological inhibition in 2 patients. The authors acknowledge that, due to the small size of the study and its exploratory nature, they did not conduct a power analysis. They acknowledge the difficulty of producing robust results given the sample size. Despite acknowledging these problems, the authors repeatedly invoke the statistical significance of various analyses, and in some cases rely on extremely involved statistical testing to identify weak signals. This gives the impression that the authors understood, from the start, that the study could not be informative, yet pressed ahead anyway.
The authors stratify the cohorts by infection status, with the primary determination based on serological status of anti-nucleocapsid (N) antibodies. The study participants were recruited in December 2022 at the earliest, nearly 3 years after the first SARS-CoV-2 infections were identified in the United States. Given the expected decline in serum antibody titers over time, it’s likely that people infected in the first year of the pandemic (and possibly even later into the pandemic) would test seronegative. Therefore, the -I cohorts likely include individuals who were in fact infected with SARS-CoV-2 at some point. This is a critical issue. The number of individuals without infection history is likely even smaller than presented, reducing the utility of stratification. In addition, this may actually confound the ability to disentangle the effects of vaccination vs infection in the development of chronic illness. It would be difficult to methodologically correct for this without a prospective longitudinal study, though larger sample sizes might allow researchers to mitigate its impact. Given these sample sizes and the inability to reliably sort by prior infection status, the issue precludes making robust inferences from the data.
The authors describe the health of study participants based on GH VAS scores and note that PVS participants were in worse health than the control participants. In the Discussion, the authors expand on this, noting that PVS participants were also in worse health than the U.S. general population. Given the real potential for other disease processes to impact every one of the biomarkers tested, the lack of unvaccinated, chronically ill participants (reporting the same syndromic profile as PVS patients) confounds any correlation between these biomarkers and vaccination. As a result, the study analyses are uninterpretable with respect to the impact of vaccination on health.
PVS was previously described by some of the same authors based on self-reported chronic sequelae following vaccination. This definition is then relied upon in this study. However, many of these symptoms are non-specific and certainly there is no evidence, given the lack of complete overlap, that they represent a single syndrome. There does not appear to be any clinical assessment to verify any of them. This is a repeated issue with descriptive studies of long covid (PACS) and now PVS, and I acknowledge the inherent challenges in establishing other criteria. Nevertheless, it represents a major problem in trying to describe a unified syndrome downstream of vaccination.
Throughout the manuscript the authors describe differences between the PVS and control cohorts solely through the p-value returned by statistical testing. Looking at the figures themselves, the effect sizes turn out to be extremely small in virtually every case. Small effect sizes don’t mean there is no biological significance, but the authors expend no effort to offer context, or even a coherent hypothesis, for why these effect sizes matter. Expecting the reader to favorably interpret the data, or indeed interpret it at all, based purely on p-values is…disconcerting. It’s not clear from the writing that the authors even consider effect sizes to be relevant, or whether obtaining a sufficiently small p-value is deemed good enough to report and believe a major finding. I’m not confident that the authors really interpreted the data to any depth themselves.
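To make the p-value/effect-size distinction concrete, here is a hypothetical back-of-envelope sketch (my own illustration, not drawn from the paper's data): for a two-sample comparison of groups with per-group size n and standardized effect size d (Cohen's d), the t statistic scales roughly as d·sqrt(n/2), so the same small effect can be "non-significant" or "highly significant" depending only on sample size.

```python
import math

# Rough scaling of the two-sample t statistic: t ≈ d * sqrt(n / 2),
# where d is Cohen's d and n is the per-group sample size.
# (Illustrative approximation; equal group sizes and variances assumed.)
def approx_t(d: float, n_per_group: int) -> float:
    return d * math.sqrt(n_per_group / 2)

# A small effect (d = 0.2) at roughly this study's scale vs a large cohort:
small_n = approx_t(0.2, 40)    # well below the ~1.96 threshold
big_n = approx_t(0.2, 2000)    # far above it, for the SAME tiny effect
print(round(small_n, 2), round(big_n, 2))
```

The point being that a p-value answers "is there any effect detectable at this n?", while the effect size answers "does it matter?", and the manuscript only ever addresses the first question.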
There is simply no causality evident in the data, nor really presented by the authors. Given the generally poor health of the PVS participants, the elevated inflammatory biomarkers and the elevated EBV reactivity could all be due to various other disease processes, infectious or not. One clear example of this is Figure 4K, where the authors correlate EBVgp42 reactivity with the percentage of CD8+ T cells producing TNFα. The correlation R value is 0.47, indicating a weak-to-moderate link. Because EBV reactivation is tightly linked to general stress, the weakness of this correlation is highly suggestive of other disease processes making a significant contribution, or of the PVS link being artifactual. The authors make no effort to account for this.
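As a quick arithmetic aside (my own calculation, not from the manuscript): an R of 0.47 corresponds to R² of about 0.22, meaning roughly 78% of the variance in TNFα-producing CD8+ T cells is left unexplained by EBVgp42 reactivity.

```python
# Variance explained by a Pearson correlation of R = 0.47
# (the value the authors report for Figure 4K).
r = 0.47
r_squared = r ** 2            # fraction of variance explained
unexplained = 1 - r_squared   # fraction left to other processes
print(f"R^2 = {r_squared:.2f}; unexplained variance = {unexplained:.2f}")
```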
Specific Points
References 16 and 18 need to be corrected
“interaction with full-length S, its subunits (S1, S2), and/or peptide fragments with host molecules may result in prolonged symptoms in certain individuals16.”<br /> -Ref16 is a study describing circulating spike and S1 following vaccination, but does not mention anything about prolonged symptoms.
“Recently, a subset of non-classical monocytes has been shown to harbor S protein in patients with PVS18.”<br /> -Ref18 is a study on PACS (post-acute covid-19 sequelae) and does not mention vaccination or post-vaccination syndrome.<br /> -Ctrl+F for “vaccine”, “vaccination”, and “PVS” returns no results in that manuscript.
Figure 3 on the kinetics of serological findings is generally confusing<br /> -For the Control and PVS+I groups the authors report no decline in anti-spike antibodies over the course of months to a year.<br /> -This runs counter to basic immunological principles and to robust, repeatable findings with respect to anti-SARS-CoV-2 spike antibodies in particular.<br /> -One explanation would be subsequent mild infections that boost antibody levels, but there are no spikes evident, rather a steady maintenance.<br /> -The exception is the PVS-I antibodies, which decline at what is to the naked eye a normal rate.<br /> -This suggests an issue with the control or PVS+I cohorts, or a disturbing indication that they are not representative of the immunological state in their respective populations. Given the small sample size, this seems likely.<br /> -The authors should explain that because the PVS-I participants weren’t infected, their “days since post-exposure/vaccination” data are identical. Absent that, it’s confusing to notice that the PVS-I data in rows B and C are identical, which raises concern about duplication in figures.
The authors don’t describe the rationale for the EBV coinfection analysis displayed in Figure 4, so there’s no way for the reader to interpret what (if any) significance to ascribe to it.<br /> -Figure 4D shows a small but statistically significant increase in IgG against EBVgp42 for the PVS cohort relative to controls – however...<br /> -When the PVS cohort is stratified by prior infection status there is no statistically significant difference.<br /> -This makes it very difficult to interpret the difference when the PVS group is pooled.<br /> -It raises the question of whether the statistical significance is simply sensitive to the number of data points, which would make it not robust.<br /> -Again – as throughout the paper, no biological context is given.
Even the correlation between EBVgp42 in serum and EBVgp42 antibody reactivity is low<br /> -Again, very difficult data to interpret, and it is unclear what the biological significance would be.<br /> -Problems with the correlation analysis in Figure 4K were discussed above.<br /> Figure S4C is discussed in the text, but briefly, and important data is ignored<br /> -It appears true that PVS participants have elevated autoantibodies of the IgM and IgA isotypes, but their IgG autoantibodies are actually similar to controls.<br /> -It is not clear whether there might be a class-switching defect related to a pathogenic process, or some other explanation – the authors don’t address this.<br /> -The authors simply say PVS patients have autoantibodies, which obfuscates their own data showing the effect is isotype specific.<br /> The interpretation of Figure 5C is also strange – most PVS patients have no circulating anti-S1 antibodies, and the statistically significant difference is driven by a minority who do<br /> -The authors state there’s a difference without any effort to interpret it.<br /> -This suggests that PVS, which the authors are trying to characterize as one syndrome, is either not one thing, or that the presence or absence of anti-spike antibodies is ancillary.<br /> -Unfortunately the authors gloss over any nuance in the data.<br /> The data on specific biomarkers in Figure 5H is based on such small sample sizes that I question whether it was even appropriate to do this analysis at all<br /> -To be clear, the issue isn’t whether the question is worth asking; it is.
The issue is that one should not do an analysis that is so underpowered it will be definitionally uninterpretable.<br /> -The fact that the authors had to jump through statistical hoops to find a statistically significant effect is concerning.<br /> -The fact that this includes a sub-group of only three patients is just methodologically inappropriate.<br /> That the authors’ use of machine learning failed to reveal any coherent set of biomarkers further argues against the contention that PVS is a definable syndrome<br /> -Or, alternatively, that this study is so small it lacks value in defining the syndrome.
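To illustrate the three-patient point with a hypothetical worked example (my own, not an analysis from the paper): with 3 observations per group, an exact rank test such as Mann-Whitney U cannot produce a two-sided p-value below 2/C(6,3) = 0.1 even under perfect group separation, so conventional significance is arithmetically out of reach at that subgroup size.

```python
from math import comb

# Most extreme two-sided p-value of an exact Mann-Whitney U test,
# reached when the two groups are perfectly separated in rank:
# p_min = 2 / C(n1 + n2, n1).
def min_two_sided_p(n1: int, n2: int) -> float:
    return 2 / comb(n1 + n2, n1)

p_three = min_two_sided_p(3, 3)    # 2/20 = 0.1 -> can never fall below 0.05
p_ten = min_two_sided_p(10, 10)    # significance is at least reachable here
print(p_three, p_ten)
```

The same arithmetic is why parametric tests on an n=3 subgroup are fragile: any result rests entirely on distributional assumptions the data cannot check.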
Final summary
Ultimately this study adds little value, at best, towards understanding the post-vaccination sequelae experienced and reported by some individuals. At worst, it injects unfounded claims and interpretations into the field and the discourse, and will ultimately slow efforts to help patients. These results have already been used to advance anti-vaccine narratives in online discourse. If the data were robust, no one could complain; because they are not, it is tragic. Ultimately, there is no compelling evidence in this paper for an immunological signature associated with chronic illness following vaccination. Perhaps reflecting this, the authors provide almost no biological context for any of their findings, often reporting data merely as a p-value with no comment on the effect size (whether large or small). This leaves it unclear to a reader whether the authors are even aware of the flaws in their work. Given the methodological flaws of this study, it is a questionable investment for researchers to follow up on it in a targeted way. Rather, well-powered, controlled, and methodologically sound studies should be conducted at scale to enable actionable findings.
On 2025-02-28 22:17:07, user Brian wrote:
I’m a nobody, however I’m able to use the resources which are at my disposal to better understand this study. I have constructed the following logical explanation. I thoroughly invite anyone to dismantle this explanation. It is to the best of my knowledge and understanding that I’ve created this.
It found that CD4 T cells are reduced and TNFa-producing CD8 T cells are increased. It found that cDC2 cells were reduced while non-classical monocytes were elevated. Also of note, elevated cytokines and IgG subclass shifts did not occur; in a healthy immune system, elevated cytokines and IgG subclass shifts indicate a healthy immune response. Furthermore, a reduction of cDC2 cells means that without sufficient numbers of cDC2 cells, the body struggles to activate T cells effectively, which is key to a strong immune response. Next, elevated non-classical monocytes mean that the body is in a state of immune activation, but instead of responding efficiently to the threat (due to a lack of other immune cells like cDC2 cells), the system is stuck in a more passive or inflammatory state. And let’s not forget AIDS is characterized by a reduction of CD4 T cells and elevated TNFa-producing CD8 T cells. I rest my case.
On 2025-03-01 16:56:25, user andreaclovephd wrote:
Does this actually identify persistent immune dysfunction after COVID-19 vaccination? No.
The big takeaways:
The study did not accurately correct for past infection. The methods used to “exclude” past infection are not accurate: the data presented suggest everyone has a similar history of past infection, which means the PVS symptoms reported by participants cannot be attributed to vaccination.
The study didn’t actually assess T cell exhaustion. This would have needed to show markers of T cell exhaustion (TIM-3, CTLA-4, PD-1, etc) combined with impaired function: cytokine levels, proliferation, metabolic defects, & gene expression changes. They don’t do any of this. IFN-γ and TNF-α are comparable between groups and suggest activated T cells, not exhausted ones.
They did not use a method to assess EBV reactivation. They assess serology, not EBV replication, which is required to show reactivation.
CD4 T cell populations aren’t meaningfully different between groups & are within normal ranges for healthy individuals.
On 2025-03-05 11:17:40, user Eva van Heese wrote:
The peer-reviewed version of this manuscript is now published at: https://onlinelibrary.wiley.com/doi/full/10.1111/jsr.14479
On 2025-03-14 10:52:07, user Sasan Hekmat wrote:
The discussion of the “Mostaan 110” device is particularly problematic; the paper relies on this debunked technology as a symbol of science-related populism despite clear evidence that the Iranian Ministry of Health has rejected it, thereby misrepresenting the facts.
One of the major weaknesses of the paper is its failure to clearly define central concepts like “science-related populism,” leaving readers with ambiguous terms that dilute the precision and impact of the argument.
The manuscript’s reliance on media reports and non-peer-reviewed sources to substantiate key claims undermines its scientific rigor, as these types of sources are inherently more prone to bias than rigorously vetted academic literature.