4 Matching Annotations
  1. Jul 2018
    1. On 2016 Nov 10, Lydia Maniatis commented:

      The inexplicable insistence on a logically and empirically invalid elementaristic approach is reflected in Gheorghiu et al’s description of color as a “low-level” feature (of the proximal, distal, or perceptual stimulus?): “Together these studies suggest that while symmetry mechanisms are sensitive to across-the-symmetry-midline correlations in low-level features such as color…” I’ve already discussed the problem with describing symmetry (or asymmetry) as a collection of correlations; here, the point has to do with color. Color, as we know, is not a feature of the distal stimulus or of the proximal stimulus; it is a feature and a product of perceptual processes. As we (visual perception types) also know, there is no unique collection of wavelengths associated with the perception of a given color. How a local patch will look depends wholly on the structure and interpretation (via process) of the surrounding patches as well as of that particular one. A patch reflecting the full range of visible wavelengths can appear any color we want, because of the possibility of perceiving transparency and double layers of color (an image search for “Purves cubes” shows an example). So, although the term “low-level” is rather vague, it is clear that the perceived color of a patch in the visual field is the result of the highest level of perceptual processes, up to and including the process that produces consciousness. This type of fundamental confusion at the root of a research program should call that program into question.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    2. On 2016 Nov 08, Lydia Maniatis commented:

      Point 3

      A persistent question I have is why it is acceptable, in the world of psychophysics, for authors to act as subjects, especially when the number of observers is typically so small, and when authors are careful to point out that non-author subjects were "naive."

      Quoting Gheorghiu et al: "Six observers participated in the experiments: the first author and five subjects who were naive with regard to the experimental aims."

      Indeed, in certain conditions, the lead author acted as one of only three, or even of only two, subjects:

      "For the number of blobs experiment three observers (EG, RA, CM) took part in all stimulus conditions and for the stimulus presentation duration experiment only two observers (EG and RA) participated."

      If naivete is important, then why is this acceptable? It seems like a straightforward question. Maybe there's a straightforward answer, but I don't know where to find it.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    3. On 2016 Nov 07, Lydia Maniatis commented:

      The authors are asking questions of the nature “How many types of phlogiston does wood contain?”, comparing the results of burning wood in a variety of conditions, and interpreting them under the assumptions of “phlogiston detection theory.”

      The key point is that their major assumption or hypothesis - the existence of phlogiston - is never questioned or tested, even though evidence and arguments against it are long-standing. Here, ‘phlogiston’ is equivalent to “symmetry channels,” and the assumptions of ‘phlogiston detection theory’ are equivalent to the assumptions of “probability summation of independent color-symmetry channels within the framework of signal-detection theory.”

      As noted in the earlier post, signal detection is a wholly inappropriate concept with respect to perception. But this doesn’t inhibit the study from proceeding, because logical problems are ignored and data are simply interpreted as though the “framework of SDT” were applicable.

      The basic logical problem lies in the assumption that the perception of a symmetrical form derives from the detection of local symmetry elements. These local elements are supposed to instigate local signals, which are summed, and this sum determines whether symmetry will or will not be perceived:

      “In the random-segregated condition the local symmetry signals would be additively combined into a single color-selective symmetry channel, producing a relatively large symmetry signal in that color channel and zero symmetry signal in the other color channels. In the non-segregated condition on the other hand, there would be symmetry information in all channels but the information in each channel would be much weaker…Probability summation across channels would result in an overall stronger signal in the random-segregated compared to non-segregated condition [16]. If there are no color-selective symmetry channels, then all color-symmetry signals will be pooled into one single channel.”

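      To make concrete what the quoted passage appeals to, here is a minimal sketch of “probability summation across independent channels” in its simplest high-threshold form, P(detect) = 1 - Π(1 - p_i). The two “color channels,” their detection probabilities, and the way the signal is split between conditions are illustrative assumptions of mine; this is not the authors’ actual model or fitted parameters.

      ```python
      # Minimal, illustrative sketch of "probability summation across independent
      # channels." All numbers are hypothetical; this is NOT the model fitted in
      # the paper, just the textbook rule P(detect) = 1 - prod_i (1 - p_i)
      # applied to two hypothetical color-selective symmetry channels.

      def prob_summation(channel_probs):
          """Detection probability if any one of several independent channels detects."""
          p_none = 1.0
          for p in channel_probs:
              p_none *= (1.0 - p)
          return 1.0 - p_none

      # Random-segregated condition: all of the symmetry signal lands in one channel.
      segregated = prob_summation([0.80, 0.00])      # one strong channel, one silent

      # Non-segregated condition: the signal is split, so each channel is weaker.
      non_segregated = prob_summation([0.45, 0.45])  # two weak channels

      print(f"segregated:     {segregated:.2f}")      # 0.80
      print(f"non-segregated: {non_segregated:.2f}")  # ~0.70
      ```

      The per-channel probabilities above are arbitrary; whether the segregated condition actually comes out ahead under probability summation depends on the assumed mapping from the divided signal to each channel’s detection probability, which is why this can only be a sketch of the idea being invoked.
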
      The inappropriateness of the quoted account is easier to appreciate if we look at cases in which the physical and proximal configurations aren’t symmetrical, but the perceived configuration is. Take, for example, a picture of a parallelogram that looks like a slanted rectangle (as tends to be the case, e.g., with the three visible sides of the Necker cube). If the parallelogram is perceived as rectangular, then it looks symmetrical. This being the case, does it make sense to talk about “local symmetry signals” being summed up to produce the perceived symmetry? Isn’t the perception of the whole rectangle itself prior to, and inextricably tied to, the perception of its symmetry? If we are willing to invoke “local symmetry signals,” then we could just as well invoke “local asymmetry signals,” since perceived asymmetry in a form is just as salient as symmetry - and just as dependent on prior organization. In perception (unlike in cognition), formal features such as symmetry are never disembodied; we never perceive “symmetry” as such, we perceive a symmetrical object. So, just as you can’t separate a shadow from the object that casts it, you can’t separate symmetry from the form that embodies it, and thus you can’t localize it.

      The logical problem is the same whether or not the distal or proximal stimuli are symmetrical. For a given pair of dots in Gheorghiu et al’s stimuli to be tagged as a “local symmetry signal,” they must already have been perceptually incorporated into a perceived shape. Symmetry will be a feature of that shape as a whole. It is therefore redundant to say that we perceive symmetry by going back and summing up “local signals” from the particular pairs of points that are matched only because they are already players in a global shape percept. If we don’t assume this prior organization, then any pair of dots in the stimuli is eligible to be called a “symmetry signal” simply by imagining an axis equidistant from both.

      In general, it isn’t reasonable or even intelligible to argue that any aspect of a shape, e.g. the triangularity of a triangle, is perceivable via a piecemeal process of detection of local “triangularity signals.” This was the fundamental observation of the Gestaltists; sadly, it has never sunk in.

      In a subsequent post I will discuss the problem with the two-alternative forced-choice method used here. This method forces observers to choose one of two options, even if neither of those matches their perceptual experience. Here, I want to point out that this experiment is set up in precisely the same way: Data are used to choose among alternatives, none of which reflect nature.

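      Since the 2AFC point will only be developed in a later post, a bare-bones sketch of what such a trial records may help make it concrete: the only thing that enters the data is a binary choice between two experimenter-defined intervals, whatever the observer actually experienced. The trial structure, the toy observer, and all numbers below are hypothetical and are not taken from Gheorghiu et al.’s procedure.

      ```python
      # Generic, hypothetical sketch of a two-alternative forced-choice (2AFC) trial
      # loop. Nothing here reproduces the paper's actual stimuli or procedure; the
      # point is only that the recorded datum per trial is a forced binary choice.

      import random

      def toy_observer(stimulus_kind):
          """Noisy scalar 'symmetry evidence' standing in for whatever is perceived."""
          return random.gauss(1.0 if stimulus_kind == "symmetric" else 0.0, 1.0)

      def run_2afc_trial(observer):
          """One 2AFC trial: the target is randomly assigned to interval 1 or 2, and
          the observer must answer 1 or 2 even if neither interval produced a clear
          percept of the target property."""
          target_interval = random.choice([1, 2])
          evidence = {
              1: observer("symmetric" if target_interval == 1 else "noise"),
              2: observer("symmetric" if target_interval == 2 else "noise"),
          }
          response = 1 if evidence[1] >= evidence[2] else 2
          return response == target_interval  # only "correct"/"incorrect" is kept

      n_trials = 1000
      n_correct = sum(run_2afc_trial(toy_observer) for _ in range(n_trials))
      print(f"proportion correct: {n_correct / n_trials:.2f}")
      ```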

      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.

    4. On 2016 Nov 04, Lydia Maniatis commented:

      Preliminary to a more general critique of this study, whose casual approach to theory and method is unfortunately typical in the field of vision science, I would like to point out the conceptual confusion expressed in the first few sentences of the introductory remarks.

      Here, Gheorghiu et al (2016) state that "Symmetry is a ubiquitous feature in natural images, and is found in both biological and man-made objects...symmetry perception plays an important role in object recognition, [etc]."

      If by "natural images" the authors are referring either to the retinal projection or to any projection of the natural or even man-made world, the statement is incorrect. It will be rare that the projection of either a symmetrical or an asymmetrical object will be symmetrical in the projection. The authors are making what the Gestaltists used to call the "experience error," equating the properties of the products of perception with the properties of the proximal stimulus.

      Yes, the world contains many quasi-symmetrical objects; yes, man-made objects are, more often than not, symmetrical; and yes, we generally perceive physically symmetrical objects as symmetrical. But this occurs not because the proximal stimulus mirrors this symmetry, but in spite of the fact that it does not.

      The misunderstanding vis-à-vis the properties of the physical source of the retinal projection, the properties of the projection, and the properties of the percept runs deep, and it is fundamental to studies that, like this one, treat perception as a "signal detection" problem.

      When an observer says "I see symmetry" of an object or picture, this does not mean that the observer's retinal projection contains symmetrical figures (even if it were theoretically acceptable - and this is an insurmountable "if" - to treat the projection as being "pre-treated" by a figure-ground process that segregates and integrates photon-hits on the basis of the physical coherence of the sources that reflected them).

      So in what sense is symmetry being "detected"? Only in the sense that the conscious observer is inspecting the products of perceptual processes that occur automatically and are the only link between conscious experience and the outside world. Because of this, an observer may "detect" symmetry in a perceptual stimulus even if the source of that stimulus is, in fact, asymmetrical. For example, when we look at a Necker cube, we "detect" symmetry, even though the figure that reflected the light is not symmetrical. When it comes to perception, the "feature detector" concept is a non-starter, because it ignores the nature of the proximal stimulation and mixes up cause and effect.

      The fact that the authors use actually symmetrical figures as their experimental objects obscures this truth, of which they should be aware.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
