The paper reviewed here is by Vul and colleagues and is a response to the responses to their original paper ‘Voodoo Correlations in Social Neuroscience’ (reviewed here). Vul and colleagues break the paper into small sections, each of which deals with responses to a particular part of the original article. Firstly, they argue that their initial point, about how voxels of interest were identified from their correlations, has not been contested. They then identify support for their second point, which is that the correlations calculated in this way are inflated, that the inflation is of uncertain magnitude, and that it cannot therefore be corrected for. In their third point, Vul and colleagues return to the issue of how ‘non-independent’ correlations are reported, citing examples from the literature in which such correlations are described as representing the relationship between brain activity and a behavioural/phenomenological correlate. I think their argument here is that the reporting moves away from a recognition that the correlation values are inflated towards an inference that the reported value is a valid magnitude for the correlation, which in turn implies the strength of the relationship between brain activation and the measure in question. In their fourth point, Vul and colleagues suggest that the magnitude of the inflation is likely to be large but avoid estimating it, and their justification is quite interesting:
‘In any given study, the magnitude of inflation will vary not only as a function of factors that are easily determined (sample size, threshold, and number of voxels), but also as a function of some indeterminable factors (true effect size, noise of the measurements). As such the magnitude of inflation will vary wildly and unpredictably from study to study, leaving no possibility to correct or estimate the inflation of any given result’
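As an aside, the inflation mechanism at issue is easy to reproduce in a toy simulation. This is my own sketch rather than anything from the paper, and all of the numbers (sample size, voxel count, true correlation, selection threshold) are arbitrary assumptions: every voxel shares the same weak true correlation with a behavioural measure, but if voxels are selected because their observed correlation clears a threshold, the correlations reported for those same voxels are necessarily inflated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels, true_r = 16, 1000, 0.3  # arbitrary illustrative values

# Behavioural measure and voxel signals sharing the same weak true correlation.
behaviour = rng.standard_normal(n_subjects)
noise = rng.standard_normal((n_voxels, n_subjects))
voxels = true_r * behaviour + np.sqrt(1 - true_r**2) * noise

# Non-independent analysis: select voxels by their observed correlation with
# behaviour, then report that very same correlation.
r = np.array([np.corrcoef(v, behaviour)[0, 1] for v in voxels])
selected = r[r > 0.6]  # threshold applied to the same data used for reporting

# The mean reported correlation among selected voxels exceeds the true value.
print(true_r, selected.mean())
```

Re-running with a larger true effect size or less noise changes how big the overshoot is, which is exactly the paper's point about the inflation being study-dependent.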
In a sense, this is a move away from empiricism (i.e. estimating the magnitude from a given sample) towards an approach in which each point in the estimation process is based on explicit facts or assumptions. The next point looks again at correlations, and the authors conclude that none of the responses disagreed with their point that the magnitude of the correlation is important. Essentially they argue that it is not enough to say that the correlation is non-zero – there is more useful information within the correlation value (presumably the true correlation value). The sixth point is about multiple comparisons; the authors focus specifically on one approach which has been used in papers and note the difficulties of multiple-comparison correction: ‘…modern multiple comparison correction procedures can be treacherous’. In the seventh point, the authors recognise that their suggestion of a ‘cross-validation approach’ has not been accepted in several of the responses.
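For readers unfamiliar with what a cross-validation approach would look like here, the following continues my earlier toy simulation (again my own hypothetical sketch with arbitrary parameters, not the authors' procedure): voxels are selected using one half of the subjects, but the effect size is estimated on the untouched other half, which removes the circularity.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_voxels, true_r = 32, 1000, 0.3  # arbitrary illustrative values
behaviour = rng.standard_normal(n)
voxels = true_r * behaviour + np.sqrt(1 - true_r**2) * rng.standard_normal((n_voxels, n))

half = n // 2
corr = lambda v, b: np.corrcoef(v, b)[0, 1]

# Select voxels using only the first half of the subjects...
r_train = np.array([corr(v[:half], behaviour[:half]) for v in voxels])
chosen = r_train > 0.6

# ...then estimate the correlation on the held-out second half.
r_test = np.array([corr(v[half:], behaviour[half:]) for v in voxels[chosen]])

# The held-out estimate sits near the true value; the selection-half
# estimate is inflated by the selection step.
print(r_train[chosen].mean(), r_test.mean())
```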
They then focus on a number of other points. They look at the sample size issue, noting from Yarkoni’s discussion that small sample sizes have been considered a possible cause of inflated correlation sizes. They invoke the ‘file-drawer problem’ from a 1979 Rosenthal paper. The authors accept the criticisms of their methodology for selecting papers but argue that the statistical methods discussed are employed across fMRI research. They then look at the issue of correlations missing from their paper and justify their original conclusions in this regard. However, there is a slightly different type of selection bias that is possible, and that relates to the vague (or at least vaguely described) methodology employed in selecting the papers from which the correlations were derived. The authors then look at the issue of replication and ask for evidence of where replication has taken place demonstrating that ‘Brain Area A correlates with Measure Z’. Nevertheless, the replication doesn’t necessarily have to occur in the same domain: it can, for instance, be indirect evidence from another type of study that supports the posited relationship. A surface electrode recording or a double dissociation study could, for instance, verify the relationship between Brain Area A and Measure Z. The authors then consider an argument about restricted range. Essentially the argument runs along the lines that if a voxel is chosen because its firing pattern is greater than another’s, then there will be reduced variability in that voxel – and that variability is the very property being examined. This is a reasonable argument because, if information is encoded in the firing patterns of large groups of neurons, then both small and large magnitudes of activity could code for useful information.
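The restricted-range effect itself is a textbook phenomenon and can be illustrated generically (this is a standard statistics demonstration with made-up numbers, not anything from the paper): truncating a variable to its high-activity tail shrinks its variance, and with it the observed correlation with a second variable.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 100_000, 0.5  # arbitrary sample size and true correlation
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Correlation over the full range of x...
full_r = np.corrcoef(x, y)[0, 1]

# ...versus the correlation within the 'strongly firing' subset only.
mask = x > 1.0
restricted_r = np.corrcoef(x[mask], y[mask])[0, 1]

# Range restriction attenuates the observed correlation.
print(full_r, restricted_r)
```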
However, Vul and colleagues suggest that some of these points are debatable (although we do not know which) and then present some of their own calculations based on a comparison of non-independent and independent correlations, before concluding that this isn’t so important because, as in the previous point, an estimate of effect size can’t be calculated from a sample: the large number of confounders would render the value meaningless. They then look at ‘impossible correlations’, arguing that their survey explains some of the findings of excessively high values. However, responses to their article have included explicit examples of correlations exceeding the ‘upper bound’. They finish by considering the general applicability of their arguments to other fields of research, including this interesting observation:
‘Rather this problem arises with all research methods that generate a great deal of data, and in which only some a priori unknown subset of the data is of special interest. The problem we call nonindependence (referring to the conditional dependence between voxel selection criteria and the effect size measure) has been called selection bias in survey sampling, testing on training data that results in overfitting in machine learning, circularity in logic, and double dipping in fMRI’
This is the final article, which comes full circle from Vul et al’s initial paper through all of the responses to this final reply, and so ends the saga of ‘Voodoo Correlations in Social Neuroscience’ (at least in that issue of the journal). Whilst reading this, I couldn’t help but return to the issue of how science isn’t separate from society. Scientific debates call a scientist’s reputation into question almost as collateral damage. However, in these papers there is no need to name the authors of the ‘red’ papers explicitly. For instance, the papers could be coded and the codes given, on request, to researchers who intend to verify the findings; only a small percentage of readers would most likely want to do so. Perhaps if such a method were used, the scientific critique which is so necessary for progress could become more refined and more willing to address methodological issues.