3 Matching Annotations
  1. Jul 2018
    1. On 2016 Nov 01, Melissa Rethlefsen commented:

      I thank Dr. Thombs for his responses, particularly for pointing out the location of the search strategy in the appendix of Thombs BD, 2014. I am still uncertain whether the search strategies in question were the ones used to validate whether the primary studies would be retrieved ("In addition, for all studies listed in MEDLINE, we checked whether the study would be retrieved using a previously published peer-reviewed search [9]."), for two reasons: 1) the cited study (Sampson M, 2011), which concerns the method of validation, does not include the search strategy Dr. Thombs notes below, though that strategy is cited earlier, where the search used to identify meta-analyses meeting the inclusion criteria is discussed; and 2) the search strategy in Thombs BD, 2014 is very specific to the "Patient Health Questionnaire." Was this search strategy modified to include other instruments, or was the Patient Health Questionnaire the only depression screening tool in this project? It appeared as though other scales were included, such as the Geriatric Depression Scale and the Hospital Anxiety and Depression Scale, hence my confusion.

      I greatly appreciate the information about the reduction in the number of citations to examine; this is indeed highly useful. I am curious whether the high number of citations came primarily from the inclusion of one or more Web of Science databases. Looking at the Thombs BD, 2014 appendix, multiple databases (SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH) were searched on the Web of Science platform. Were one or more of those major contributors to the extra citations retrieved?

      Though Dr. Thombs and his colleagues make excellent points about the need to make the best use of limited resources, even at some expense of completeness, which I fully agree with, my concern is that studies performing post-hoc analyses of database contributions to systematic reviews may lead those without information retrieval expertise to believe that searching one database is comprehensive, when in fact the skill of the searcher is the primary factor in recall and precision. Most systematic review teams do not have librarians or information specialists, much less ones with the skill and experience of Dr. Kloda. I appreciate that Dr. Thombs acknowledges the importance of including information specialists or librarians on systematic review teams, and I agree with him that the use of previously published, validated searches is a highly promising method for reducing resource consumption in systematic reviews.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Oct 26, Brett D Thombs commented:

      We thank Ms. Rethlefsen for her comments on our study. We agree with her about the importance of working with a skilled information specialist or librarian on all systematic reviews and that, as she notes, the quality of searches in systematic reviews is often very poor. She has correctly noted some of the limitations of our study, as we did in the study itself. We do not share Ms. Rethlefsen’s concern with our use of what she refers to as an “uncited search strategy in an unspecified version of MEDLINE on the Ovid SP platform.” The full peer-reviewed search strategy that we used is provided in the appendix of the systematic review protocol that we cited (1). Ms. Rethlefsen seems to criticize this approach because it “can only find 92% of the included articles, versus the 94% available in the database.” Systematic reviews are done for different purposes, and there is always a balance between resource consumption and completeness. In many cases, identifying 94% of all test accuracy evidence will be sufficient, and, in those cases, identifying 92% is not substantively different. Ms. Rethlefsen questioned whether searching only MEDLINE would indeed reduce the number of citations and the burden of evaluating them. She is correct that we did not assess that. However, based on our initial search (not including updates) for studies of the diagnostic test accuracy of the Patient Health Questionnaire (1), using MEDLINE only would have cut the total number of citations to process from 6449 to 1389 (22% of the original) compared to searching MEDLINE, PsycINFO, and Web of Science. Thus, it does seem evident that, in this area of research, using such a strategy would have a significant impact on resource use. Whether or not it is the right choice depends on the specific purposes of the systematic review and would be conditional on using a well-designed, peer-reviewed search.

      (1) Thombs BD, Benedetti A, Kloda LA, et al. The diagnostic accuracy of the Patient Health Questionnaire-2 (PHQ-2), Patient Health Questionnaire-8 (PHQ-8), and Patient Health Questionnaire-9 (PHQ-9) for detecting major depression: protocol for a systematic review and individual patient data meta-analyses. Systematic Reviews. 2014;3:124.
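
      As a rough illustration of the citation counts given above, the minimal Python sketch below (the function name is ours, not from the paper) computes the share of records that would still be screened and the share avoided when only the MEDLINE results are processed.

      ```python
      def citation_reduction(all_databases: int, medline_only: int) -> tuple[float, float]:
          """Return (share retained, share avoided) when only MEDLINE results are screened."""
          retained = medline_only / all_databases
          return retained, 1 - retained

      # Counts reported above: MEDLINE + PsycINFO + Web of Science combined vs. MEDLINE alone.
      retained, avoided = citation_reduction(all_databases=6449, medline_only=1389)
      print(f"retained: {retained:.0%}, avoided: {avoided:.0%}")  # retained: 22%, avoided: 78%
      ```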


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Oct 21, Melissa Rethlefsen commented:

      The authors are asking an important question—which database(s) should be searched in a systematic review? Current guidance from the Cochrane Collaboration, the Institute of Medicine, and most information retrieval specialists suggests that searching multiple databases is a necessity for a comprehensive search of the literature, but searching multiple databases can be time-consuming and may result in more citations than are manageable to review. In this paper, the authors posit that searching MEDLINE alone would be sufficient to locate relevant studies when conducting systematic reviews with meta-analysis on depression screening tools.

      Though the authors’ methodology is detailed, one limitation acknowledged in the paper was that “we were unable to examine whether the search strategies used by authors in each meta-analysis did, in fact, identify the articles indexed in MEDLINE as most included meta-analyses did not provide reproducible search strategies.” Because of this, the conclusions of this study must be viewed with caution. If the searches conducted by the authors of the included meta-analyses did not locate the studies in MEDLINE, the fact that the studies could theoretically have been located in MEDLINE is irrelevant. Whether results are found in MEDLINE depends largely on the ability of the searcher, the sensitivity of the search design, and the skill of the indexer (Wieland LS, 2012; Suarez-Almazor ME, 2000; Golder S, 2014; O'Leary N, 2007). Searching for known items to assess database utility in systematic reviews has been done before (see, for example, Gehanno JF, 2013), but it has been critiqued for the lack of search strategy assessment (Boeker M, 2013; Giustini D, 2013).
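
      For readers unfamiliar with known-item checking, the following is a minimal Python sketch of how such a check can be scripted against PubMed through the NCBI E-utilities. It is not the Ovid MEDLINE procedure used in the paper (Ovid has no comparable free API), and the query string and PMID shown are placeholders for illustration only.

      ```python
      import requests

      ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

      def known_item_retrieved(strategy: str, pmid: str) -> bool:
          """Return True if the given PMID is retrieved by the search strategy.

          The strategy is AND-ed with the PMID, so a non-zero hit count means
          the record falls within the strategy's retrieval set.
          """
          params = {
              "db": "pubmed",
              "term": f"({strategy}) AND {pmid}[pmid]",
              "retmode": "json",
              "retmax": 0,
          }
          resp = requests.get(ESEARCH, params=params, timeout=30)
          resp.raise_for_status()
          return int(resp.json()["esearchresult"]["count"]) > 0

      # Placeholder strategy and PMID, for illustration only.
      print(known_item_retrieved('"Patient Health Questionnaire" AND depression', "12345678"))
      ```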

      The authors, using an uncited search strategy in an unspecified version of MEDLINE on the Ovid SP platform, which they state had been “a previously published peer-reviewed search,” can find only 92% of the included articles, versus the 94% available in the database. Unfortunately, there is little reason to suppose that the authors of systematic review articles can be expected to conduct a “reasonable, peer-reviewed search strategy.” In fact, researchers have repeatedly shown that even fully reproducible reported search strategies often have fatal errors and major omissions in search terms and controlled vocabulary (Sampson M, 2006; Rethlefsen ML, 2015; Koffel JB, 2016; Golder S, 2008). Though working with a librarian or information specialist is recommended as a way to enhance search strategy quality, studies have shown that certain disciplines never work with librarians on their systematic reviews (Koffel JB, 2016), and those disciplines where collaboration is more common still work with librarians only about a third of the time (Rethlefsen ML, 2015). Tools like PRESS were developed to improve search strategies (McGowan J, 2016), but search peer review is rarely done (Rethlefsen ML, 2015).

      The authors also state that “searching fewer databases in addition to MEDLINE will result in substantively less literature to screen.” This has not been shown by this study. The authors do not report the number of articles retrieved by their search or by any of the searches undertaken in the 16 meta-analyses they evaluate. Furthermore, no data on precision, recall, or number needed to read were given for either their search or the meta-analyses’ searches. These data could be reconstructed and would give readers concrete information to support this claim, particularly in light of the information provided about the number and names of the databases searched. Other studies examining database performance for systematic reviews have included precision and recall information for the original search strategies and/or from the reported found items (Preston L, 2015; Bramer WM, 2013; Bramer WM, 2016). These studies have largely found that searching multiple databases is of benefit.
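
      As a minimal sketch of how such figures could be reconstructed, the Python snippet below computes precision, recall, and number needed to read (NNR) from retrieval counts; the counts shown are hypothetical, not taken from the paper or the included meta-analyses.

      ```python
      def search_metrics(relevant_retrieved: int, total_retrieved: int, total_relevant: int):
          """Precision, recall, and number needed to read (NNR) for one search."""
          precision = relevant_retrieved / total_retrieved
          recall = relevant_retrieved / total_relevant
          nnr = total_retrieved / relevant_retrieved  # equivalently 1 / precision
          return precision, recall, nnr

      # Hypothetical counts: a search returning 1500 records that captures
      # 46 of 50 known included studies.
      precision, recall, nnr = search_metrics(relevant_retrieved=46,
                                              total_retrieved=1500,
                                              total_relevant=50)
      print(f"precision {precision:.1%}, recall {recall:.0%}, NNR {nnr:.0f}")
      ```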


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
