Reviewer #1 (Public review):
Summary:
The authors define a new metric for visual displays, derived from psychophysical response times, called visual homogeneity (VH). They attempt to show that VH explains response times across multiple visual tasks. They use fMRI to find visual cortex regions with VH-correlated activity. On this basis, they declare a new visual region in the human brain, area VH, whose proposed function is to represent VH in the service of visual search and symmetry tasks.
Link to original review: https://elifesciences.org/reviewed-preprints/93033v2/reviews#peer-review-0
Comments on latest version:
Authors rebuttal: We agree that visual homogeneity is similar to existing concepts such as target saliency, memorability, etc. We have proposed it as a separate concept because visual homogeneity has an independent empirical measure (the reciprocal of target-absent search time in oddball search, or the reciprocal of "same" response time in a same-different task, etc.) that may or may not be the same as other empirical measures such as saliency and memorability. Investigating these possibilities is beyond the scope of our study but would be interesting for future work. We have now clarified this in the revised manuscript (Discussion, p. 42).
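For concreteness, a minimal sketch of this empirical measure, assuming per-display mean response times are already available (all variable names and values below are hypothetical, not data from the study):

```python
import numpy as np

# Hypothetical mean response times in seconds; values are illustrative only.
rt_target_absent = np.array([1.42, 1.10, 0.95, 1.60])  # oddball search, target-absent displays
rt_same = np.array([0.88, 0.72, 0.69, 1.01])            # same-different task, "same" trials

# Empirical visual homogeneity as described above: the reciprocal of the
# relevant response time for each display.
vh_from_search = 1.0 / rt_target_absent
vh_from_same_different = 1.0 / rt_same
```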
Reviewer response to rebuttal: Neither the original ms nor the comments on that ms pretended that "visual homogeneity" was entirely separate from target saliency etc. So this is a response to a criticism that was never made. What the authors do claim, and what the comments question, is that they have successfully subsumed long-recognized psychophysical concepts like target saliency etc. under a new, uber-concept, "visual homogeneity" that explains psychophysical experimental results in a more unified and satisfying way. This subsumption of several well-established psychophysical concepts under a new, unified category is what reviewers objected to.
Authors rebuttal: However, we'd like to emphasize that the question of whether visual homogeneity is novel or related to existing concepts misses entirely the key contribution of our study.
Reviewer response to rebuttal: Sorry, but the claim of a new uber-concept in psychophysics, "visual homogeneity", is a major claim of the paper. The fact that it is not the only claim made does not absolve the authors from having to prove it satisfactorily.
"Authors rebuttal: "In addition, the large regions of VH correlations identified in Experiments 1 and 2 vs. Experiments 3 and 4 are barely overlapping. This undermines the claim that VH is a universal quantity, represented in a newly discovered area of visual cortex, that underlies a wide variety of visual tasks and functions."<br />
• We respectfully disagree with your assertion. First of all, there is partial overlap between the VH regions, for which there are several other obvious explanations that must be considered first before dismissing VH outright as a flawed construct. We acknowledge these alternatives in the Results (p. 27), and the relevant text is reproduced below.
"We note that it is not straightforward to interpret the overlap between the VH regions identified in Experiments 2 & 4. The lack of overlap could be due to stimulus differences (natural images in Experiment 2 vs silhouettes in Experiment 4), visual field differences (items in the periphery in Experiment 2 vs items at the fovea in Experiment 4) and even due to different participants in the two experiments. There is evidence supporting all these possibilities: stimulus differences (Yue et al., 2014), visual field differences (Kravitz et al., 2013) as well as individual differences can all change the locus of neural activations in object-selective cortex (Weiner and Grill-Spector, 2012a; Glezer and Riesenhuber, 2013). We speculate that testing the same participants on search and symmetry tasks using similar stimuli and display properties would reveal even larger overlap in the VH regions that drive behavior."
Reviewer response to rebuttal: The authors are saying that their results merely look unconvincing (weak overlap between VH regions defined in different experiments) because there were confounding differences between their experiments, in subject population, stimuli, etc. That is possible, but in that case it is up to the authors to show that their definition of a new "area VH" is convincing when the confounding differences are resolved, e.g. by using the same stimuli in the different experiments they attempt to agglomerate here. That would require new experiments, and none are offered in this revision.
Authors rebuttal: • Thank you for carefully thinking through our logic. We agree that a distance-to-centre calculation is entirely unnecessary as an explanation for target-present visual search: target-present search times are already well explained by the similarity between target and distractor, so there is nothing new to explain here. However, this is a narrow and selective interpretation of our findings because you are focusing only on our results on target-present searches, which are only half of all our data. The other half is the target-absent responses, which previously have had no clear explanation. You are also missing the fact that we are explaining same-different and symmetry tasks as well using the same visual homogeneity computation. We urge you to think more deeply about the problem of how to decide whether an oddball is present or not in the first place. How do we actually solve this task?
Reviewer response to rebuttal: It is the role of the authors to think deeply about their paper and on that basis present a clear and compelling case that readers can understand quickly and agree with. That is not done here.
Authors rebuttal: There must be some underlying representation and decision process. Our study shows that a distance-to-centre computation can actually serve as a decision variable to solve disparate property-based visual tasks. These tasks pose a major challenge to standard models of decision-making because the underlying representation and decision variable have been unclear. Our study resolves this challenge by proposing a novel computation that can be used by the brain to solve all these disparate tasks, and bring these tasks into the ambit of standard theories of decision-making.
Reviewer response to rebuttal: There is only a "challenge" if you accept the authors' a priori assumption that all of these tasks must have a common explanation and rely on a single neural mechanism. I do not accept that assumption, and I don't think the authors provide evidence to support the assumption. There is nothing "unclear" about how search, oddball, etc. have been thoroughly explained, separately, in the psychophysical literature that spans more than a century.
Authors rebuttal: • You are indeed correct in noting that both Experiments 1 & 2 involve oddball search, and so at a superficial level, it looks circular that the oddball search data of Experiment 1 is being used to explain the oddball search data of Experiment 2.
However, deeper scrutiny reveals more fundamental differences: Experiment 1 consisted of only oddball search with the target appearing on the left or right, whereas Experiment 2 consisted of oddball search with the target either present or completely absent. In fact, we were merely using the search dissimilarities from Experiment 1 to reconstruct the underlying object representation, because it is well known that neural dissimilarities are predicted well by search dissimilarities (Sripati and Olson, 2010; Zhivago and Arun, 2014).
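One standard way to reconstruct such an object representation from pairwise dissimilarities is multidimensional scaling; the sketch below assumes a precomputed dissimilarity matrix (e.g. reciprocals of oddball search times) and uses scikit-learn's MDS as an illustration, which may or may not match the embedding procedure actually used in the study.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric dissimilarity matrix for 4 images
# (e.g. 1/RT from oddball search); values are illustrative only.
dissimilarity = np.array([
    [0.0, 0.8, 1.2, 1.5],
    [0.8, 0.0, 0.9, 1.1],
    [1.2, 0.9, 0.0, 0.7],
    [1.5, 1.1, 0.7, 0.0],
])

# Embed the images in a low-dimensional space whose pairwise distances
# approximate the measured dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
object_representation = mds.fit_transform(dissimilarity)  # shape: (4, 2)
```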
Reviewer response to rebuttal: Here again the authors cite differences between their multiple experiments as a virtue that supports their conclusions. Instead, the experiments should have been designed for maximum similarity if the authors intended to explain them with the same theory.
Authors rebuttal: To thoroughly refute any lingering concern about circularity, we reasoned that the model predictions for Experiment 2 could have been obtained by a distance-to-center computation on any brain-like object representation. To this end, we used object representations from deep neural networks pretrained on object categorization, whose representations are known to match well with the brain, and asked if a distance-to-centre computation on these representations could predict the search data in Experiment 2. This was indeed the case, and these results are now included as an additional section in the Supplementary Material (Section S1).
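A minimal sketch of this kind of substitution, using a pretrained torchvision network as a stand-in for a brain-like object representation (the choice of ResNet-18, the penultimate layer, the file names, and the zero center are illustrative assumptions, not the analysis reported in Section S1):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained network used as a stand-in for a brain-like object representation.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = torch.nn.Identity()  # expose the 512-d penultimate-layer activations
net.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def features(path):
    """Return penultimate-layer activations for one image (paths are hypothetical)."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return net(img).squeeze(0)

# Predicted response to a target-present array: the average of the item responses.
# Visual homogeneity is then the distance of that average from a center that would,
# in practice, be fit to behaviour rather than fixed at zero.
target, distractor = features("target.png"), features("distractor.png")
center = torch.zeros(512)
vh = torch.linalg.norm((target + distractor) / 2 - center)
```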
Reviewer response to rebuttal: The authors' claims are about human performance and how it is based on the human brain. Their claims are not well supported by the human experiments that they performed. It serves no purpose to redo the same experiments in silico, which cannot provide stronger evidence that compensates for what was lacking in the human data.
Authors rebuttal: "Confirming the generality of visual homogeneity<br />
We performed several additional analyses to confirm the generality of our results, and to reject alternate explanations.
First, it could be argued that our results are circular because they involve taking oddball search times from Experiment 1 and using them to explain search response times in Experiment 2. This is a superficial concern since we are using the search dissimilarities from Experiment 1 only as a proxy for the underlying neural representation, based on previous reports that neural dissimilarities closely match oddball search dissimilarities (Sripati and Olson, 2010; Zhivago and Arun, 2014). Nonetheless, to thoroughly refute this possibility, we reasoned that we would get similar predictions of the target present/absent responses in Experiment 2 using any other brain-like object representation. To confirm this, we replaced the object representations derived from Experiment 1 with object representations derived from deep neural networks pretrained for object categorization, and asked if distance-to-center computations could predict the target present/absent responses in Experiment 2. This was indeed the case (Section S1).
Second, we wondered whether the nonlinear optimization process of finding the best-fitting center could be yielding disparate optimal centres each time. To investigate this, we repeated the optimization procedure with many randomly initialized starting points, and obtained the same best-fitting center each time (see Methods).
Third, to confirm that the above model fits are not due to overfitting, we performed a leave-one-out cross validation analysis. We left out all target-present and target-absent searches involving a particular image, and then predicted these searches by calculating visual homogeneity estimated from all other images. This too yielded similar positive and negative correlations (r = 0.63, p < 0.0001 for target-present, r = -0.63, p < 0.001 for target-absent).
Fourth, if heterogeneous displays indeed elicit similar neural responses due to mixing, then their average distance to other objects must be related to their visual homogeneity. We confirmed that this was indeed the case, suggesting that the average distance of an object from all other objects in visual search can predict visual homogeneity (Section S1).
Fifth, the above results are based on taking the neural response to oddball arrays to be the average of the target and distractor responses. To confirm that averaging was indeed the optimal choice, we repeated the above analysis by assuming a range of relative weights between the target and distractor. The best correlation was obtained for almost equal weights in the lateral occipital (LO) region, consistent with averaging and its role in the underlying perceptual representation (Section S1).
Finally, we performed several additional experiments on a larger set of natural objects as well as on silhouette shapes. In all cases, present/absent responses were explained using visual homogeneity (Section S2)."
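As an illustration of the relative-weighting analysis described in the fifth point of the quoted passage, the sketch below sweeps the target-distractor weight and asks which weight best correlates the resulting distance-to-center (VH) with target-present response times; the data, the zero center, and the plain Euclidean distance are placeholders rather than the study's measured LO responses, so the printed result is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item representations and behavioural data for n target-present displays.
n, d = 50, 10
target_resp = rng.normal(size=(n, d))        # representation of the oddball item
distractor_resp = rng.normal(size=(n, d))    # representation of the distractor item
center = np.zeros(d)                          # placeholder for the fitted center
rt_present = rng.uniform(0.8, 1.6, size=n)    # hypothetical target-present RTs

best_w, best_r = None, -np.inf
for w in np.linspace(0, 1, 21):
    # Array response as a weighted combination of target and distractor responses;
    # w = 0.5 corresponds to the averaging assumption used in the main analysis.
    array_resp = w * target_resp + (1 - w) * distractor_resp
    vh = np.linalg.norm(array_resp - center, axis=1)  # distance to center
    r = np.corrcoef(vh, rt_present)[0, 1]             # VH vs target-present RT
    if r > best_r:
        best_w, best_r = w, r

print(f"best weight = {best_w:.2f}, correlation = {best_r:.2f}")
```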
Reviewer response to rebuttal: The authors can experiment on side questions for as long as they please, but none of the results described above answer the concern about how center-fitting undercuts the evidentiary value of their main results.
Authors rebuttal: • While it is true that the optimal center needs to be found by fitting to the data, there is no particular mystery to the algorithm: we are simply performing a standard gradient descent to maximize the fit to the data. We have described the algorithm clearly and are making our code public. We find the algorithm to yield stable optimal centers despite many randomly initialized starting points. We find the optimal center to be able to predict responses to entirely novel images that were excluded during model training. We are making no assumption about the location of the centre with respect to individual points. Therefore, we see no cause for concern regarding the center-finding algorithm.
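A minimal sketch of this kind of center-fitting procedure with random restarts, assuming a fixed object embedding and behavioural response times to fit to; the objective (correlating distance-to-center with target-present RTs), the optimizer (scipy's general-purpose minimize rather than a hand-rolled gradient descent), and the placeholder data are illustrative assumptions, not the study's released code, so the printed output is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical inputs: an object embedding and target-present response times.
n, d = 40, 5
embedding = rng.normal(size=(n, d))
rt_present = rng.uniform(0.8, 1.6, size=n)

def negative_fit(center):
    """Negative goodness-of-fit: correlation between each display's
    distance-to-center (VH) and its observed response time."""
    vh = np.linalg.norm(embedding - center, axis=1)
    return -np.corrcoef(vh, rt_present)[0, 1]

# Repeat the optimization from many random starting points, keep the best fit,
# and check whether the recovered centers agree across restarts.
fits = [minimize(negative_fit, rng.normal(size=d)) for _ in range(20)]
best = min(fits, key=lambda f: f.fun)
spread = np.std([f.x for f in fits], axis=0)
print("best-fitting center:", np.round(best.x, 2))
print("spread across restarts:", np.round(spread, 2))
```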
Reviewer response to rebuttal: The point of the original comment was that center-fitting should not be done in the first place because it introduces unknowable effects.
Authors rebuttal: • Most visual tasks, such as finding an animal, are thought to involve building a decision boundary on some underlying neural representation. Even visual search has been portrayed as a signal-detection problem where a particular target is to be discriminated from a distractor. However, none of these formulations work in the case of property-based visual tasks, where there is no unique feature to look for.
We are proposing that, when we view a search array, the neural response to the search array can be deduced from the neural responses to the individual elements using well-known rules, and that decisions about an oddball target being present or absent can be made by computing the distance of this neural response from some canonical mean firing rate of a population of neurons. This distance-to-center computation is what we denote as visual homogeneity (a minimal sketch of this computation follows this rebuttal). We have revised our manuscript throughout to make this clearer and we hope that this helps you understand the logic better.
• You are absolutely correct that stimulus complexity should matter, but there are no good empirically derived measures for stimulus complexity, other than subjective ratings, which are themselves complex and could be based on any number of other cognitive and semantic factors. But considering what factors are correlated with target-absent response times is entirely different from asking what decision variable or template is being used by participants to solve the task.
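A minimal sketch of the decision rule described in the rebuttal above: the response to a display is taken as the average of the responses to its items, visual homogeneity is the distance of that average from a canonical center, and a threshold on that distance serves as the decision variable. The averaging rule, the Euclidean distance, the zero center, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def visual_homogeneity(item_responses, center):
    """Distance-to-center computation: the predicted response to a display is
    the average of the responses to its items, and VH is the distance of that
    average from a canonical center."""
    array_response = np.mean(item_responses, axis=0)
    return np.linalg.norm(array_response - center)

def decide_target_absent(item_responses, center, threshold):
    """Illustrative decision rule: averaging dissimilar item responses tends to
    pull the array response toward the center of the representation, so
    homogeneous (target-absent) displays tend to have larger VH; thresholding
    VH then yields a present/absent decision."""
    return visual_homogeneity(item_responses, center) > threshold

# Hypothetical example: a target-absent display made of 7 identical distractors.
rng = np.random.default_rng(2)
center = np.zeros(4)
distractor = rng.normal(size=4)
absent_display = np.tile(distractor, (7, 1))
print(decide_target_absent(absent_display, center, threshold=1.0))  # expected to print True for most draws
```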
Reviewer response to rebuttal: If stimulus complexity is what matters, as the authors agree here, then it is incumbent on them to measure stimulus complexity. The difficulty of measuring stimulus complexity does not justify avoiding the problem with an analysis that ignores complexity.
Authors rebuttal: • We have provided empirical proof for our claims, by showing that target-present response times in a visual search task are correlated with "different" responses in the same-different task, and that target-absent response times in the visual search task are correlated with "same" responses in the same-different task (Section S4).
Reviewer response to rebuttal: Sorry, but there is still no reason to think that same-different judgments are based on a mythical boundary halfway between the two extremes. If there is a boundary, it will be close to the "same" end of the continuum, where subjects might conceivably miss some tiny difference between two stimuli. The vast majority of "different" stimuli will be entirely different from the same stimulus, producing no confusability, and certainly not a decision boundary halfway between two extremes.
Authors rebuttal: • Again, the opposite correlations of target present/absent search times with VH are the crucial empirical validation of our claim that a distance-to-center calculation explains how we perform these property-based tasks. The VH predictions do not fully explain the data. We have explicitly acknowledged this shortcoming, so we are hardly dismissing it as a problem.
Reviewer response to rebuttal: The authors' acknowledgement of flaws in the ms does not argue in favor of publication, but rather just the opposite.
Authors rebuttal: • Finding an oddball, deciding whether two items are the same or different, and judging symmetry are disparate visual tasks that do not fit neatly into standard models of decision-making. The key conceptual advance of our study is that we propose a plausible neural representation and decision variable that allow all three property-based visual tasks to be reconciled with standard models of decision-making.
Reviewer response to rebuttal: The original comment stands as written. Same/different will have a boundary very close to the "same" end of the continuum. The boundary is only halfway between two choices if the stimulus design forces the boundary to be there, as in the motion and cat/dog experiments.
Authors rebuttal: "There is no inherent middle point boundary between target present and target absent. Instead, in both types of trial, maximum information is present when target and distractors are most dissimilar, and minimum information is present when target and distractors are most similar. The point of greatest similarity occurs at then limit of any metric for similarity. Correspondingly, there is no middle point dip in information that would produce greater difficulty and higher response times. Instead, task difficulty and response times increase monotonically with similarity between targets and distractors, for both target present and target absent decisions. Thus, in Figs. 2F and 2G, response times appear to be highest for animals, which share the largest numbers of closely similar distractors."<br />
• Your alternative explanation rests on vague factors like "maximum information" which cannot be quantified. By contrast, we are proposing a concrete, falsifiable model for three property-based tasks: same/different, oddball present/absent, and object symmetry. Any argument based solely on item similarity to explain visual search or symmetry responses cannot explain the systematic variations observed for target-absent arrays and for symmetric objects, for the reasons explained earlier.
Reviewer response to rebuttal: There is nothing vague about this comment. The authors use an analysis that assumes a decision boundary at the centerpoint of their arbitrarily defined stimulus space. This assumption is not supported, and it is unlikely, considering that subjects are likely to notice all but the smallest variations between same and different stimuli, putting the boundary nearly at the same end of the continuum, not the very middle.
Authors rebuttal: "(1) The area VH boundaries from different experiments are nearly completely non-overlapping.
In line with their theory that VH is a single continuum with a decision boundary somewhere in the middle, the authors use fMRI searchlight to find an area whose responses positively correlate with homogeneity, as calculated across all of their target present and target absent arrays. They report VH-correlated activity in regions anterior to LO. However, the VH defined by symmetry Experiments 3 and 4 (VHsymmetry) is substantially anterior to LO, while the VH defined by target detection Experiments 1 and 2 (VHdetection) is almost immediately adjacent to LO. Fig. S13 shows that VHsymmetry and VHdetection are nearly non-overlapping. This is a fundamental problem with the claim of discovering a new area that represents a new quantity that explains response times across multiple visual tasks. In addition, it is hard to understand why VHsymmetry does not show up in a straightforward subtraction between symmetric and asymmetric objects, which should show a clear difference in homogeneity."
• We respectfully disagree. The partial overlap between the VH regions identified in Experiments 1 & 2 can hardly be taken as evidence against the quantity VH itself, because there are several other obvious alternate explanations for this partial overlap, as summarized earlier as well. The VH region does show up in a straightforward subtraction between symmetric and asymmetric objects (Section S7), so we are not sure what the Reviewer is referring to here.
Reviewer response to rebuttal: In disagreeing with the comment quoted above, the authors are maintaining that a new functional area of cerebral cortex can be declared even if that area changes location on the cortical map from one experiment to another. That position is patently absurd.
Authors rebuttal: "(3) Definition of the boundaries and purpose of a new visual area in the brain requires circumspection, abundant and convergent evidence, and careful controls.
Even if the VH metric, as defined and calculated by the authors here, is a meaningful quantity, it is a bold claim that a large cortical area just anterior to LO is devoted to calculating this metric as its major task. Vision involves much more than target detection and symmetry detection. Cortex anterior to LO is bound to perform a much wider range of visual functionalities. If the reported correlations can be clarified and supported, it would be more circumspect to treat them as one byproduct of unknown visual processing in cortex anterior to LO, rather than treating them as the defining purpose for a large area of visual cortex."
• We totally agree with you that reporting a new brain region would require careful interpretation and abundant, converging evidence. However, this requires many studies' worth of work, and historically, category-selective regions like the FFA have achieved consensus only after they were replicated and confirmed across many studies. We believe our proposal for the computation of a quantity like visual homogeneity is conceptually novel, and our study represents a first step that provides some converging evidence (through replicable results across different experiments) for such a region. We have reworked our manuscript to make this point clearer (Discussion, p. 32).
Reviewer response to rebuttal: Indeed, declaring a new brain area depends on much more work than is done here. Thus, the appropriate course here is to wait before claiming to have identified a new cortical area.