1 Matching Annotation
  1. Jul 2018
    1. On 2016 Dec 03, Lydia Maniatis commented:

      Could the authors please provide a citation or citations for the following introductory comments?

      "Over much of the dynamic range of human cone-mediated vision, light adaptation obeys Weber's law. Raw light intensity is transformed into a neural response that is proportional to contrast...where ϕW is the physiological response to a flash of intensity ΔI, and I is the light level to which the system is preadapted. Put another way, the cone visual system takes the physical flash intensity ΔI as input and applies to this input the multiplicative Weber gain factor to produce the neural response (Equation 1). This transformation begins in the cones themselves and is well suited to support color constancy when the illumination level varies."

      Does this statement (assuming the relevant but missing citations exist) apply in general, or does it describe results collected under very narrow and special conditions? If the latter, what are those conditions?
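
      For readers without the article at hand, the equation elided above (the paper's Equation 1) presumably has the standard Weber-gain form, in which the response is the flash intensity scaled by a gain inversely proportional to the adapting level. The notation below is my reconstruction, not a quotation from the paper.

      ```latex
      % Presumed form of Equation 1: the multiplicative Weber gain 1/I applied to the flash intensity.
      \[
        \phi_W \;=\; g_W \,\Delta I, \qquad g_W \;=\; \frac{1}{I}
        \quad\Longrightarrow\quad
        \phi_W \;=\; \frac{\Delta I}{I},
      \]
      % where \Delta I is the flash intensity and I is the light level to which the system is preadapted.
      ```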

      As in many psychophysical studies, the subjects were very few and included an author: in experiments 1 and 2, one of the two observers was an author. Why isn't this considered a problem with respect to bias?

      Also, as in many other psychophysical papers, the "hypothesis" being tested is the tip of a bundle of casually made, rather complex, rather vague, and undefended assumptions that the experiments do not, in fact, test. For example:

      1. "As our working hypothesis, we assume that the observer’s signal-to-noise ratio for discriminating trials in which an adapting field is presented alone from trials with a superimposed small, brief flash is [equation].

      2. "The assumption that visual sensitivity is limited by such multiplied Poisson noise has been previously proposed (Reeves, Wu, & Schirillo, 1998) as an explanation of why visual sensitivity is less than would be expected if threshold was limited by the photon fluctuations from the adapting field (Denton & Pirenne, 1954; Graham & Hood, 1992)."

      I note that the mere fact that Reeves, Wu, and Schirillo proposed an assumption does not amount to an argument for it.
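
      For context, the textbook Rose/de Vries photon-noise argument that these assumptions elaborate can be sketched as follows. This is the standard form, not necessarily the authors' exact expression, and the factor k is my own placeholder for the "multiplied" Poisson noise invoked in the second quotation.

      ```latex
      % Standard Rose / de Vries signal-to-noise sketch (my reconstruction, not taken from the paper).
      % N_I: mean photon count contributed by the adapting field within the summation area and time;
      % N_{\Delta I}: mean extra count contributed by the flash; k >= 1: assumed noise-multiplication factor.
      \[
        \mathrm{SNR} \;=\; \frac{N_{\Delta I}}{\sqrt{k\,N_{I}}} \;\propto\; \frac{\Delta I}{\sqrt{k\,I}} .
      \]
      % Holding SNR at a fixed criterion yields the square-root (de Vries-Rose) law,
      % \Delta I_{th} \propto \sqrt{k\,I}, in contrast to Weber's law, \Delta I_{th} \propto I.
      ```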

      Roughly, what researchers are doing is similar to this:

      Let's assume that how quickly a substance burns is a function of the amount of (assumed) phlogiston, possessing a number of assumed characteristics, that it contains. I burn substance "a," I burn substance "b," and I conclude that, since the former burns faster than the latter, it also contains more of the assumed phlogiston with the assumed characteristics. The phlogiston assumptions (and the authors here bundle together layers of assumptions) get a free ride, and they shouldn't.

      The title of this paper is tantamount to "Substance 'a' contains more phlogiston than substance 'b.'" It can be valid only if all of the underlying assumptions on which the interpretation of the data rests are themselves valid, and that is unknown at best.

      We can even make the predictions a little more specific, and thus appear to test among competing models (which I think is actually what is going on here). For example, one model might predict a faster burn function than another, allowing us to "decide" between two phlogiston models neither of which has actually been tested; a toy sketch of this appears below. (Helping to avoid this type of fruitless diversion is what Popper's epistemology was designed to accomplish.)
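
      A toy numerical sketch of the analogy (entirely made up, not code or data from the paper): two "models" explain burn rate via an unobserved quantity that is inferred from the very data it is meant to explain, so comparing their fits cannot test the shared assumption that the quantity exists at all.

      ```python
      # Toy illustration of the phlogiston analogy above; all numbers are invented.
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical burn-rate observations for two substances (arbitrary units).
      burn_rate = {"a": 2.0 + 0.1 * rng.standard_normal(10),
                   "b": 1.0 + 0.1 * rng.standard_normal(10)}

      def infer_phlogiston(rates, model):
          """Back-compute the unobserved 'phlogiston content' from the observed rates.
          Model A: rate proportional to content; Model B: rate proportional to content squared."""
          mean_rate = np.mean(rates)
          return mean_rate if model == "A" else np.sqrt(mean_rate)

      for model in ("A", "B"):
          pa = infer_phlogiston(burn_rate["a"], model)
          pb = infer_phlogiston(burn_rate["b"], model)
          # Both models reproduce the observed ordering, so each "confirms" that
          # substance "a" contains more phlogiston than substance "b".
          print(f"Model {model}: phlogiston(a) = {pa:.2f}, phlogiston(b) = {pb:.2f}")

      # A fit comparison between Model A and Model B picks a winner, but neither fit
      # tests whether 'phlogiston' (the bundled assumptions) is real; it gets a free ride.
      ```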

      Also, it seems odd that the authors are testing a tentative theory from the 1940s, which was clearly premature and inadequate, and that they apparently chose to test the less-informed version of it:

      "In presenting the theory in this way, we have adhered more closely to the original presentation of Rose—an engineer who was interested in both biological and machine vision—than to that of de Vries, who was a physiologist and who introduced supplementary assumptions about the spatiotemporal summation parameters in human rod vision. We adopt Rose’s approach because the relevant neural parameters are still not well understood, and we wish to clearly distinguish between the absolute limits on threshold set by physics and the still incompletely understood neural mechanisms."

      In addition, the authors seem to have adopted the attitude that selected contents of perception can be directly correlated with the activity of cells at any chosen level of the visual system (even when the neural parameters are still not well understood!); that the rest of the activity leading to the conscious percept can be ignored; and that percepts which cannot be directly correlated with the activity of the chosen cells can likewise be ignored, via casual assumptions such as N. Graham's claim that under certain conditions "the brain becomes transparent."


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
