4 Matching Annotations
  1. Jul 2018
    1. On 2015 Nov 02, Lydia Maniatis commented:

      In this review, Kingdom refers to the “unrelenting controversy” supposedly raging in the study of lightness/brightness/transparency: “Divided into different camps, each with its own preferred stimuli, methodology and theory, the study of LBT is sometimes more reminiscent of the social sciences with its deep ideological divides than it is of the neurosciences.” This quote makes immediately clear that the prevailing controversies reflect, not an intellectually competitive environment, but a highly permissive one of ad hoc hypotheses, each remaining safely within the limits of the stimuli and methodology that will corroborate it, again and again. Beyond these limits, the hypotheses are typically either untestable or easily falsifiable. Yet they remain in good standing for many years, part of the pseudo-controversy, generating endlessly repetitive and poorly rationalized editorial and experimental publications.

      Kingdom's review is characteristic of this permissive approach. Despite claiming to “critically analyze” theoretical approaches, everyone essentially gets a free pass. One example is described in the asterisked footnote (below). A second example relates to Kingdom's discussion of “edge-integration models.” We learn that Rudd et al have managed to “quantitatively model assimilation, contrast and edge-integration data.” In other words, they have constructed ad hoc accounts of some data. However, while such modeling may be possible for simple stimuli, for “complex two-dimensional images the process is computationally expensive and, one cannot help feel, physiologically implausible.” But the researchers are “keenly aware of this limitation” and anticipate improving their model to, perhaps, attain plausibility.

      Of course, there is nothing wrong with working hard to validly articulate and test a hunch. But until it is testable and tested, it cannot be considered a competitor in good standing. The prevailing standard of “partial success” is a standard of failure. Controversy among proposals that are half-baked, untestable and/or fail as soon as they leave their comfort zone is of marginal scientific interest.

      • An example that I am particularly familiar with is the “anchoring theory.” I recently published a “simple test” of fundamental claims/assumptions of this construct – and it failed. The test was easy to conceive – trivial, actually – and the outcome highly predictable. I learned after the fact that at least one other investigator had, a while back, considered publishing something to the same effect. Despite this clear falsification of fundamental (if vague) assumptions (as well as previous falsifications), the failed “anchoring theory” is continuing to chalk up “successes.”


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2015 Oct 05, Lydia Maniatis commented:

      In this publication, as in both older and more recent ones by various authors, "brightness" is being described as the perceptual correlate of luminance, and is supposed to be interchangeable with lightness when there are no visible illumination boundaries. But as I've noted in comments on Blakeslee and McCourt (2015) and Gilchrist (2015), we can't say that there is a perceptual correlate of luminance, even under (apparently) homogeneous illumination, and this can be shown as follows:

      We ask an observer to report on the lightness of a set of surfaces which don't produce the impression of shadows or transparency. Then, in a second session, we present the same set of surfaces under a different level of illumination. The lightness reports for the surfaces will stay essentially the same, even though their luminances may have changed substantially. So to say that people are making "brightness" judgments - i.e. perceiving luminance - in either the first session or the second doesn't fit the facts.

      In the case of non-homogeneous illumination and double-layers, the position still doesn't make sense, because it implies that we see a surface plus a shadow/transparency, plus a third thing. Is this the case? And how is the value of this third percept supposed to be determined? On the basis of absolute luminance?


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
