3 Matching Annotations
  1. Jul 2018
    1. On 2014 Mar 25, Greg Finak commented:

      With manual gating still being the standard, we have to assume analysts doing the gating are experts, know the assay, and know what cell populations they're after. Starting from that premise, the normalization algorithm described does aim to emulate manual gating. I think that's pretty clearly described.

      However, your concern:

"... if both datasets were gated by the same person, as seems likely. If that person is also the lead developer of the algorithm, a clear bias in favor of the algorithm becomes a real possibility."

      is unfounded. The manual gating was not performed by the algorithm developers, nor were both data sets gated by the same person. The HVTN data set was gated by experts at the HIV Vaccine Trials Network, and the ITN data set was gated by experts at the Immune Tolerance Network.

      As to comparing manual gating generated by a sampling of individuals: this has already been done. See the FlowCAP paper, as well as a number of reviews by Maecker et al., which clearly show that different analysts will give different results, with a higher C.V. than when data are analyzed centrally. And if data are to be analyzed centrally, then I'd argue you should use the best expertise possible. We started from that presumption here.

      Edit: I would also add that if you would like to scrutinize the manual gating more closely, you can grab the FlowJo workspaces from flowrepository.org; both data sets are now publicly accessible.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2014 Mar 23, Yannick Pouliot commented:

      One mild concern I have with this paper is the lack of details as to how the manual gating was performed. Given the authors' statement that a "... desirable outcome is to have low bias and low variability relative to the manual gates", what this really means is that the algorithm is trying to emulate the manual gating performance of a single individual if both datasets were gated by the same person, as seems likely. If that person is also the lead developer of the algorithm, a clear bias in favor of the algorithm becomes a real possibility.

      This sort of comparison would be much more informative if the algorithm were compared against manual gating results generated by a sampling of individuals with varying degrees of experience in the process and not involved in developing the algorithm.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
