37 Matching Annotations
  1. Dec 2019
    1. Figure 2

      This is a bit hard to read because the diagram mixes vertical and horizontal flow. Also, for the final "no" conclusion, I'd recommend being more critical and stating something such as "avoid making inferential claims with these results," which I think is perfectly justified given all of the qualifiers that lead to that endpoint in the flow diagram.

    2. Note also that a directional hypothesis allows for more high-powered one-sided significance tests.

      Furthermore, a preregistration is the perfect justification for using one-tailed tests, which are frowned upon in some circles because of their misuse via p-hacking.
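
      To make the power gain concrete, here is a minimal sketch (Python, using statsmodels) comparing the per-group sample size required by a two-sided versus a one-sided two-sample t-test. The effect size, alpha, and power values are illustrative assumptions, not figures from the paper.

      ```python
      # Sample size needed for a two-sample t-test, two-sided vs. one-sided.
      # Effect size (d = 0.5), alpha (.05), and power (.80) are illustrative.
      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      n_two_sided = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                         power=0.80, alternative='two-sided')
      n_one_sided = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                         power=0.80, alternative='larger')
      print(f"two-sided: {n_two_sided:.1f} per group")  # roughly 64
      print(f"one-sided: {n_one_sided:.1f} per group")  # roughly 51
      ```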

    3. preregister

      Same as the comment above: I'd recommend avoiding the term "register" unless a registry is involved.

    4. internal preregistration that is performed by an individual researcher or research group, and that is not published on a public platform

      I'd encourage you not to describe this as a preregistration, as the "registration" part implies/requires that the plan be posted on a registry that eventually becomes public and searchable. In this use case, I recommend using the term "pre-analysis plan" as that is essentially a preregistration document that does not imply that it was posted to a registry.

    5. for instance by allowing researchers to plan using conditional stopping rules that can reduce the required sample size
  2. Oct 2019
    1. The vertical axis on this figure is misleading (it suggests that fatalities have doubled rather than increased by 50%).

  3. May 2019
    1. Therefore, Open Science practices that have been developed for confirmatory experimental research – particularly preregistration – are clearly and readily applicable to the Model Application category, and existing preregistration templates could be adapted for Model Application with only minor amendments.

      I think the Model Application, Comparison, Evaluation, and Development framework would lend itself nicely to a table that includes basic definitions and recommendations for the degree to which preregistration or process transparency would be relevant.

    1. I really recommend adding your local library to the list. For example, my local library has a "what should I read next?" form: let the librarian know what you've enjoyed or are in the mood for and they'll respond within a few hours. Of course you can do it in person, but the form gives them time to think and search. The recommendations a good librarian gives you beat Goodreads or Amazon. https://jmrl.org/wdirn.htm

    1. efficient

      I think this should be "sufficient," since it is only more efficient in one respect (quicker than reviewing adherence).

    2. Given the observed low level of adherence, and the observed low disclosure rate, a preregistered study does not contain more interpretable results than a non-preregistered one, at least not currently.

      I disagree with this conclusion. Your work demonstrates that having the preregistration allows a reader to make this determination. Obviously it is better to disclose deviations and for authors to walk a reader through how those deviations affect the interpretability of the results, but even when that doesn't happen you were able to make that assessment (although with a lot more work on your part). Without the preregistration, this investigation is impossible to do.

    3. Discussion

      I think you should compare these findings to those being seen in clinical research, where preregistration has been required by law for roughly two decades. See http://compare-trials.org/ and https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-019-3173-2

    4. Disclose or die

      We're working on ways to template these disclosures and will use your findings in that process, given that this is some good data on the frequency of various undisclosed elements. If you have recommendations for how that should be structured, please add a recommended template here. Based on your categories, that could look something like: "Please disclose any deviations from your preregistered plan in the following 8 areas..."

    5. Undisclosed deviation(s)

      Can you break this category down by number of deviations? 1, 2, 3, 4+ would give a better overview of the scope of the problem than simply 1+ or none.

    6. 10 (37%)

      Can you break down how many of the undisclosed sample-size deviations were above versus below the preregistered sample size, and the magnitudes of those deviations? In the example provided ("we expect 600" but then report 616), the reader is not likely to see much room for undisclosed stopping and hunting for significance, whereas deviations of higher magnitude are more worrying. I suppose it's outside the purpose of this (excellent!) study to start weighing the seriousness of any given deviation, but I suspect that many deviations are perceived to be too small to mention (which can obviously lead to a slippery slope...)

    1. The problem with narrow pre-specification and extensive exploratory analysis is that, in practice, there are not enough resources to conduct repeated streams of separate trials simply to solve the pre-specification issue. Budgets for social science research are several orders of magnitude smaller than for medical research—and even in medicine, some journals would acknowledge that for many less-common areas, the exploratory results may be the only results that the scientific community will have. The difference in magnitudes here is enormous: the registry for medical trials, http://clinicaltrials.gov, currently lists over 176,000 studies registered since the site was launched; by comparison, a reasonable estimate for the number of randomized controlled field experiments conducted in social science over a similar period is on the order of 1,000

      The solution to this problem is not to lower standards for the work that is able to be conducted, but to conduct them with as much rigor and transparency as possible.

    2. For example, interim looks at the data can be used in a medical trial to see if a drug is causing an adverse reaction, to know if the trial should be stopped. One can similarly imagine that a business or government that is partnering with a social science researcher in a trial may require ongoing analysis of the trial to ensure that the experiment is not actively harming their business or program. Often follow-up trials need to be planned before a trial is complete, so interim looks at the data can be useful for that purpose as well.

      These interim checks are encouraged as part of a pre-analysis plan, particularly in drug trials where the cost of harm is so great, but also, as you mention, in any intervention study it's good to monitor for particular positive or negative consequences. These checks can and should be pre-specified in order to control the Type I error rate; the small simulation below sketches how. Here's a good explainer: https://onlinelibrary.wiley.com/doi/full/10.1002/ejsp.2023
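
      As a hedged illustration (not taken from the linked paper), here is a minimal Python simulation of a pre-specified interim analysis with a Pocock-style correction: two looks at the data, each tested at a nominal two-sided alpha of about 0.0294 (the Pocock boundary for two looks), which keeps the overall Type I error rate near the nominal 5% under the null. The sample size and simulation count are arbitrary choices for the sketch.

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      n_sims, n_max, alpha_pocock = 10_000, 100, 0.0294  # Pocock bound, K=2 looks

      false_positives = 0
      for _ in range(n_sims):
          x = rng.normal(0, 1, n_max)          # data generated under the null
          for n_look in (n_max // 2, n_max):   # only the pre-specified looks
              if stats.ttest_1samp(x[:n_look], 0).pvalue < alpha_pocock:
                  false_positives += 1
                  break                        # stop early, as planned

      print(f"overall Type I error: {false_positives / n_sims:.3f}")  # ~0.05
      ```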

  4. Jan 2019
    1. I think this is a great point of view that demonstrates the importance of using the right tool for the job. Particularly in this type of model development process, the goal is to transparently log dozens (hundreds? thousands??) of decisions, which is served well by a software-commit approach.

      Putting a specific model to a specific test is a different process whereby preregistration can be much more useful (as said above).

    1. Might a simple solution be to include more spots for robustness checks in templates? This reminds good scientists to ask, "If I get a positive result, how should I respond? How would I 'normally' respond here by looking for different ways to doubt my results?" Also, it will introduce 'bad scientists' to the concept of self-doubt via robustness checks.

    1. Thanks for posting this series! Your hypothetical did lead me to ponder quite a bit. I think it misses an important point: the context that a group of analysts would face under real-world conditions.

      In the red card study, the analyst teams were not facing the prospect of publication bias; they were merely tasked with answering a specific research question given a dataset. Had the intervention been two groups (15 teams tasked with getting published as quickly as possible using any analytical technique that seems best, and 14 teams preregistering the best analytical technique to answer the question given the variables available, with data provided after preregistration), then I would certainly trust the preregistered results more than those produced under pressure to find the most exciting results possible. Knowing that one arm of the study is likely to come up with only the most extreme answer certainly changes my opinion about its credibility.

      The act of preregistration is not magical. It does not, by itself, change the future rigor of the results*. It merely creates a clear distinction between planned and unplanned analyses. Is the reviewer-recommended, unplanned analysis better than the preregistered one? Maybe**. But readers are only able to make that determination if they see "I planned X (results are Y), I ended up doing Z (results are Q)" instead of the status quo of only showing the final analytical decision.

      Finally, I think adding a multiverse approach of presenting the results of all possible analyses is a great recommendation. Open data is likely to enable such a future.

      *Well, perhaps it does insomuch as thorough planning improves any process. Also not mentioned in the above post is the benefit of opening the file drawer. **I'm much more likely to think so if the reviewer makes that recommendation before knowing the main trends in the dataset.

  5. Nov 2018
  6. Aug 2018
    1. Interview data may contain indirect identifiers such as the general location of a person’s home or the type of workplace where they work.

      It might be good to give an example of this. Knowing where I live and where I work might make me identifiable if the organization is small, and adding a third piece of knowledge (strong political views, or the type of car I drive...) could re-identify many more people; the toy sketch below shows how each added attribute narrows the candidate set.
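
      Purely as an illustration (the records and attributes below are made up, not drawn from the chapter), a few lines of Python show how combining quasi-identifiers shrinks the set of matching people until a single record is unique:

      ```python
      # Toy re-identification example: each added quasi-identifier
      # narrows the set of records consistent with an attacker's knowledge.
      records = [
          {"town": "Springfield", "employer": "small", "car": "red hatchback"},
          {"town": "Springfield", "employer": "small", "car": "grey sedan"},
          {"town": "Springfield", "employer": "large", "car": "red hatchback"},
          {"town": "Shelbyville", "employer": "small", "car": "red hatchback"},
      ]

      def matches(known):
          """Count records consistent with everything the attacker knows."""
          return sum(all(r[k] == v for k, v in known.items()) for r in records)

      print(matches({"town": "Springfield"}))                        # 3 people
      print(matches({"town": "Springfield", "employer": "small"}))   # 2 people
      print(matches({"town": "Springfield", "employer": "small",
                     "car": "red hatchback"}))                       # 1 -> unique
      ```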

    2. Stepped

      It might be worth adding some resources for how to accomplish this. These protected access repositories provide this type of service as a 3rd party protection to identifiable information: https://osf.io/tvyxz/wiki/8.%20Approved%20Protected%20Access%20Repositories/

    1. Note, however, that preregistration can be done in different levels of detail, and several preregistration formats and templates have been suggested (e.g., 32). Strategic use of flexibility in analyses can only truly be avoided if the preregistration contains a high level of detail, and this is currently often not the case

      I'd also add that even detailed preregistrations are not sufficient: they must be followed, deviations must be justifiable, and reviewers must check both. Finally, authors should not be penalized for deviations in a system where preregistration isn't common, because that would further incentivize unreported flexibility/p-hacking by those who didn't preregister.

    1. If there is existing content in Wikis, Components, or Links, then those fields might need to be expanded, because some projects use those a TON whereas others rely only on the registration forms for content. If there is no content there, then it would be OK not to have any preview of that content.

    2. LAST EDITED

      This should probably just be removed until (and if) we have the ability to edit registrations (which I know is a bit down the road).

    3. Provide the working title of your study. It is helpful if this is the same title that you submit for publication of your final manuscript, but it is not a requirement.

      I think it is OK to NOT include these notes and instructions to users. They are really intended as instructions to the authors and do not need to appear in the registration forms once they are registered. I know they are currently showing up in the registration forms, but now seems an OK time to remove them. "Title," "Authors," and "Research Questions" are OK without the extra explanatory text.

    4. PUBLICATION DOI

      Presumably this will be an updatable field in these forms, correct? As an FYI to devs: publication is expected to occur many months or years after the registration is created.

    1. publicly humiliated him on Twitter

      His colleagues asked him for the data underlying reported results and he said no. I don't think that is humiliating. I grant that it was public and that the request served the requesters' interests, but I don't see how the colleagues could have gone about it in a less embarrassing way.

    2. 3 (release of data and materials is prerequisite to publication)

      Level 2 is actually the required release of data underlying reported findings; Level 3 requires independent verification that the shared data and code reproduce the reported findings. See here for six journals that do that step: https://osf.io/kgnva/wiki/home/

    3. 0 (no Open Science policy

      Level 0 also includes "encouragements" to share data, which have been repeatedly shown to be ineffective.

    4. They created a scheme wherein journals would be graded on their commitment to Open Science.

      The TOP Guidelines provide a modular, tiered set of policies that journals or funders can adopt to fit their norms and resources (higher tiers, of course, require more resources). See https://cos.io/top

    1. Foundation

      "Framework" instead of "Foundation." Better would be "...as provided by the Center for Open Science on the OSF..."

    2. collected and analyzed.

      A recommended citation (and some great additional rationale) for standard operating procedures is here: Lin, W., & Green, D. P. (2016). Standard Operating Procedures: A Safety Net for Pre-Analysis Plans. PS: Political Science & Politics, 49(3), 495–500. https://doi.org/10.1017/S1049096516000810

  7. May 2018
    1. we continuously increased the number of animals until statistical significance was reached to support our conclusions.

      This inflates the false discovery rate; the small simulation below illustrates the problem. If there are planned, pre-specified stopping points for significance checks, this is permitted (see Lakens, 2014, 10.1002/ejsp.2023). Otherwise, the sample size should be determined in advance and results reported regardless of outcome.
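
      To show the scale of the inflation, here is a minimal Python simulation (the batch size, maximum n, and simulation count are illustrative assumptions): with no true effect, testing after every batch of 10 observations up to n = 100 and stopping at the first p < .05 yields a false positive rate far above the nominal 5%.

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_sims, n_max, batch = 10_000, 100, 10

      hits = 0
      for _ in range(n_sims):
          x = rng.normal(0, 1, n_max)               # no true effect exists
          for n in range(batch, n_max + 1, batch):  # uncorrected peeking
              if stats.ttest_1samp(x[:n], 0).pvalue < 0.05:
                  hits += 1                         # "significant" -> stop
                  break

      print(f"false positive rate: {hits / n_sims:.3f}")  # ~0.19, not 0.05
      ```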