    On 2015 Nov 12, Lydia Maniatis commented:

      This article confirms that lightness within a subregion of a visual image is roughly based on a ratio rule comparing a target surface's luminance with the luminances of surrounding surfaces. It exploits previous researchers' insights on transparency to apply, post hoc, a manipulation that gives a reasonable fit to their data. There is nothing new here.
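      To make the point concrete, the ratio rule amounts to nothing more than the following sketch (the convention of anchoring the highest luminance to a 90% reflectance "white" is borrowed from anchoring theory; the names and numbers are illustrative, not the authors'):

      ```python
      # A minimal sketch of the ratio rule: perceived lightness tracks the
      # ratio of the target's luminance to the luminances around it, not
      # absolute luminance. Anchoring the highest luminance to a 90%
      # reflectance "white" is an assumed convention from anchoring theory,
      # not a detail taken from the article.
      def ratio_lightness(target_lum, surround_lums, white_reflectance=0.9):
          return white_reflectance * target_lum / max(surround_lums)

      # A 50 cd/m^2 target among checks of 20 to 100 cd/m^2 is predicted
      # to look mid-grey:
      print(ratio_lightness(50.0, [20.0, 60.0, 100.0]))  # 0.45
      ```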

      In addition to data-fitting based on previously known principles, the authors go through the motions of comparing different models of lightness; but this is only pro forma, since, as they explain, they really had no idea how to apply these alternative (and very incomplete) “models”: “Again, an interpretation of the predictions based on anchoring and edge integration theory is difficult because we simply utilized numerical parameters that were derived with very different experimental stimuli.” (A third model was completely unrealistic.) Why bother with these pseudo-comparisons if the model application is crude and the results uninterpretable?
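      For context, here is roughly what an edge-integration computation looks like in outline; the edge weights are precisely the kind of numerical parameters that, on the authors' own admission, were borrowed from very different stimuli. This is a hedged sketch, not any published implementation:

      ```python
      import math

      # Rough outline of an edge-integration computation: lightness is built
      # up by summing weighted log luminance ratios across the edges crossed
      # on a path from an anchor region to the target. The edge weights are
      # illustrative stand-ins for the "numerical parameters ... derived
      # with very different experimental stimuli" quoted above.
      def edge_integrated_lightness(lums_along_path, edge_weights=None):
          steps = list(zip(lums_along_path, lums_along_path[1:]))
          if edge_weights is None:
              edge_weights = [1.0] * len(steps)   # the contested parameters
          return sum(w * math.log(b / a)
                     for w, (a, b) in zip(edge_weights, steps))

      # Path from a white anchor (100 cd/m^2) across a shadow border
      # (20 cd/m^2) onto the target (12 cd/m^2):
      print(edge_integrated_lightness([100.0, 20.0, 12.0]))  # about -2.12
      ```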

      The authors also make a confession: “The important piece of information that is still missing, and which we secretly inserted, was the knowledge about regions of different contrast range. Here we simply used the values that we knew to originate from the checks within the regions corresponding to plain view, shadow, or transparent media, but for a model to be applicable to any image this segmentation step still needs to be elucidated.”
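      It is easy to show what this “secret insertion” amounts to. In the sketch below, the region labels are supplied by hand, exactly as criticized; remove the hand-supplied dictionary and the computation cannot even begin (all names and values are illustrative, not the authors' code):

      ```python
      # Sketch of a normalized-contrast computation in which the
      # segmentation (the "secretly inserted" knowledge) is supplied by
      # hand. All function names, labels, and luminances are illustrative.
      def normalized_contrast(target_lum, region_lums):
          lo, hi = min(region_lums), max(region_lums)
          mean = (lo + hi) / 2.0
          region_range = (hi - lo) / (hi + lo)   # contrast range of the region
          return ((target_lum - mean) / mean) / region_range

      # The hand-supplied segmentation: which checks lie in plain view,
      # in shadow, or behind a transparency. This is the step the model
      # does not compute.
      regions = {
          "plain_view":   [20.0, 60.0, 100.0],
          "shadow":       [4.0, 12.0, 20.0],     # same surfaces, 1/5 the light
          "transparency": [30.0, 45.0, 60.0],
      }

      # The brightest check yields the same normalized value in plain view
      # and in shadow, but only because the region labels were handed in:
      print(normalized_contrast(100.0, regions["plain_view"]))  # 1.0
      print(normalized_contrast(20.0, regions["shadow"]))       # 1.0
      ```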

      It is an amazing fact that contemporary lightness researchers are happily going about their business while ignoring the key factor mediating lightness perception, i.e. the factor of shape. The principles guiding the segmentation of the field into regions of figure and ground – whether that figure is opaque, transparent, shadowy, cloudy, etc. – have been largely elucidated. Yet these researchers prefer either to “sneak it in” or to use stimuli (like checkerboards) in which it can be ignored (but not really – see the discussion of Radonjic and Gilchrist (2014)). To the extent that understanding of perceptual organisation is incomplete, it will limit progress in lightness – the problem cannot be circumvented by ignoring structure.

      Unfortunately, the authors don't seem to appreciate this: “It remains a task for future experiments to put the normalized contrast computation under scrutiny by systematically manipulating the contrast range in different regions of illumination and testing the effects of varying the luminance (image) or the reflectance contrast (real world surfaces).” You can manipulate the “contrast range” and “reflectance contrast” ad infinitum, but if you don't explicitly take structure into account – if you keep having to “sneak it in” to fit each individual stimulus – you will get nowhere.

      Like other lightness/color researchers (e.g. Radonjic, Cottaris, and Brainard (2015); see comments), Maertens and Zeiner (2014) use very vague descriptors for their stimuli: “relatively naturalistic;” “stimuli of moderate photometric and geometric complexity;” “a relatively naturalistic luminance range.” The possible stimulus characteristics to which such terms could apply are infinite; thus they are uninformative and cannot constitute the premises of a serious experimental study. The questions and the methods of a study are closely linked: if the method is vague and ambiguous, so is the question; and if the question is vague and ambiguous, so are the conclusions bound to be. Perception research was not always like this, and it doesn't need to be.

      (Needless to say, the technobabble is not worth bothering with.)


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
