On 2015 Nov 02, Lydia Maniatis commented:
In this review, Kingdom refers to the “unrelenting controversy” supposedly raging in the study of lightness/brightness/transparency: “Divided into different camps, each with its own preferred stimuli, methodology and theory, the study of LBT is sometimes more reminiscent of the social sciences with its deep ideological divides than it is of the neurosciences.” This quote makes immediately clear that the prevailing controversies reflect, not an intellectually competitive environment, but a highly permissive one of ad hoc hypotheses, each remaining safely within the limits of the stimuli and methodology that will corroborate it, again and again. Beyond these limits, the hypotheses are typically either untestable or easily falsifiable. Yet they remain in good standing for many years, part of the pseudo-controversy, generating endlessly repetitive and poorly-rationalized editorial and experimental publications.
Kingdom's review is characteristic of this permissive approach. Despite claiming to “critically analyze” theoretical approaches, everyone essentially gets a free pass. One example is described in the asterisked footnote (below). A second example relates to Kingdom's discussion of “edge-integration models.” We learn that Rudd et al. have managed to “quantitatively model assimilation, contrast and edge-integration data.” In other words, they have constructed ad hoc accounts of some data. However, while such modeling may be possible for simple stimuli, for “complex two-dimensional images the process is computationally expensive and, one cannot help feel, physiologically implausible.” But the researchers are “keenly aware of this limitation” and anticipate improving their model to, perhaps, attain plausibility.
Of course, there is nothing wrong with working hard to validly articulate and test a hunch. But until it is testable and tested, it cannot be considered a competitor in good standing. The prevailing standard of “partial success” is a standard of failure. Controversy among proposals that are half-baked, untestable and/or fail as soon as they leave their comfort zone is of marginal scientific interest.
* An example that I am particularly familiar with is the “anchoring theory.” I recently published a “simple test” of fundamental claims/assumptions of this construct – and it failed. The test was easy to conceive – trivial, actually – and the outcome highly predictable. I learned after the fact that at least one other investigator had, a while back, considered publishing something to the same effect. Despite this clear falsification of fundamental (if vague) assumptions (as well as previous falsifications), the failed “anchoring theory” is continuing to chalk up “successes.”
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.