On 2016 Feb 16, Lydia Maniatis commented:
Another example of the problem discussed by Teller can be found in Pelli (1990), who is proposing a model of perception:
"In order to make the model as general as possible, yet still be able to measure its parameters, we need threeassumptions,or constraints. First we assume that the observer's level of performance increases monotonically with the contrast of the signal (when everything else is fixed). This guarantees that there will be a unique threshold. Secondly, as indicated on the diagram by the prefix contrast-invariant we assume that the calculation performed is independent of the contrast of the effective stimulus, which is its immedi- ate input. Together, assumptions 1 and 2 are a linking hypothesis. They imply that the observer's squared contrast threshold (which we can measure) is pro- portional to the effective noise level (which is inaccessible). Thirdly, we assume that the equivalent input noise is independent of the amplitude of the input noise and signal, or at least that it too is contrast- invariant, independent of the contrast of the effective image. These assumptions allow us to use two threshold measurements at different external noise levels to estimate the equivalent noise level. In effect, the assumptions state that the proportionality con- stant and the equivalent noise Neq are indeed constant, independent of the contrast of the effective image. These three assumptions are just enough to allow us to make psychophysical measurements that uniquely determine the parameters of our black-box model. Our model makes several testable predictions, as will be discussed below."
It's worth noting that the main function of the rationale seems to be one of practical convenience.
Theoretically, Pelli (1990) proposes to make the case that: "the idea of equivalent input noise and a simplifying assumption called 'contrast invariance' allow the observer's overall quantum efficiency (as defined by Barlow, 1962a) to be factored into two components: transduction efficiency (called 'quantum efficiency of the eye' by Rose, 1948) and calculation efficiency..."
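For concreteness, the proposed factorization (as I read it) is multiplicative: overall quantum efficiency = transduction efficiency × calculation efficiency, with the first factor being Rose's (1948) quantum efficiency of the eye and the second the efficiency of the subsequent calculation.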
Although he claims his model makes testable predictions, he also states that they had not, as of publication, been tested.
Pelli and Farrell (1999) seem to be referencing the untested, two-component model when they state that: "it is not widely appreciated that visual sensitivity is a product of two factors. By measuring an additional threshold, on a background of visual noise, one can partition visual sensitivity into two components representing the observer's efficiency and equivalent noise. Although they require an extra threshold measurement, these factors turn out to be invariant with respect to many visual parameters and are thus more easily characterized and understood than their product, the traditional contrast threshold."
No references are provided to suggest the proposal has been corroborated. This problem is not remedied by the subsequent statement that: "Previous authors have presented compelling theoretical reasons for isolating these two quantities in order to understand particular aspects of visual function (refs. 3–13)," since all of the references predate the Pelli (1990) claims. Conveniently, the authors "ignore theory, to focus on the empirical properties of the two factors, especially their remarkable invariances, which make them more useful than sensitivity." Ignoring theory unfortunately seems to be a hallmark of modern vision science.
In this way, Pelli papers over a theoretical vacuum via technical elaboration of an untested model.
Both Pelli (1990) and Pelli and Farrell (1999) are cited by more recent papers as support for the use of the "equivalent noise" model.
Pelli (1990) is cited by Solomon, May & Tyler (2016), without a further rationale for adopting the model (I've commented on that article here: https://pubpeer.com/publications/62E7CB814BC0299FBD4726BE07EA69).
Dakin, Bex, Cass and Watt (2009) cite Pelli and Farrell (1999), their rationale being that the model "has been widely used elsewhere."
I feel inclined to describe what is going on here (it is not uncommon) as a kind of "theory-laundering": ideas are proposed uncritically, then uncritically repeated, then become popular, and their popularity acts as a substitute for the missing rationale. Is this science?
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.