The featured article is ‘Correlations in Social Neuroscience Aren’t Voodoo’ by Lieberman and colleagues, freely available here. This indirectly continues the analysis of the original paper (here). The Lieberman paper is a considered response which, in my opinion, mirrors some of the properties of the Vul paper. The authors first comment that ‘voodoo’ has ‘connotations of fraudulence’, citing a book on science with Voodoo in the title which examined many issues including fraud in science. They also comment on the tone of the article.
‘Much of the article’s prepublication impact was due to its aggressive tone, which is nearly unprecedented in the scientific literature and made it easy for the article to spread virally in the news’
This is quite an interesting statement, and many have commented on the tone of the publication. The paper has provoked much debate, not just in the public domain but also within the scientific community. On the one hand, the reputation of a large domain within neuroscience has come under public attack. On the other hand, the paper has prompted a number of prominent figures to debate statistical analysis within the fMRI field publicly, keeping members of the public interested in the issue along the way. Many within the field have also commented, and some have even suggested that the statistics are not always well understood. As with many psychometric properties, it would not be surprising if understanding of statistical methods within the field were normally distributed – although the ‘mean’ would be expected to be quite high relative to the general population, as the field most likely selects for those with an interest in maths, either directly or indirectly through an interest in the neurosciences.
Lieberman and colleagues examine the survey methods. They have gone to the effort of contacting the authors of the papers on the non-independent list. However, as with the Vul et al paper, there is no methodology section here. They are presumably using a qualitative approach in their analysis of the survey data. Not all authors were included, and we do not know if there was a selection bias. We also do not know which questions were used. What is quite curious is that the authors report that a single-step inferential process was used, yet there is no explanation of why multiple-choice questions probing two-step inferential processes were simply left out by the subjects if they did not agree with the choices.
Lieberman and colleagues address the issue of whether there is a two-step inferential process. Essentially they are examining Vul et al’s assertion that, in the statistical analysis of the fMRI data, the conclusions are drawn two steps away from the original data. Lieberman and colleagues challenge this by arguing that plotting the data is nothing more than a simple description of the first-step inference. The first-step inference in turn represents the inferences drawn from the original data – the correlations between the voxel activity and the observed behaviours or phenomena.
Next Lieberman and colleagues conduct a simulation of the data and conclude that ‘76% of the simulated studies reported no correlation of r .80 by chance anywhere’, thus challenging the findings of Vul et al’s own ‘simulation’. As with Vul et al’s paper, there is no explicit description of the methodology. Instead the authors describe the simulation in the text attached to a diagram showing graphs of the simulated data. Again we do not know how the random numbers were generated, nor the reason for running simulations which essentially demonstrate established statistical properties.
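Since the paper does not spell out its simulation parameters, the general idea can only be sketched. The point is that even when every voxel contains pure noise, a study searching many voxels has some chance of turning up a very high correlation, and one can count how often no such correlation appears. The sample size, voxel count and study count below are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 16   # assumed sample size, typical of small fMRI studies
n_voxels = 1000   # assumed number of independent voxel tests per study
n_studies = 500   # number of simulated studies

count_no_high_r = 0
for _ in range(n_studies):
    behaviour = rng.standard_normal(n_subjects)
    voxels = rng.standard_normal((n_voxels, n_subjects))  # pure noise
    # z-score both, then a matrix product gives every voxel's
    # Pearson correlation with the behavioural measure at once
    b = (behaviour - behaviour.mean()) / behaviour.std()
    v = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
    r = v @ b / n_subjects
    if np.all(np.abs(r) < 0.80):
        count_no_high_r += 1

frac = count_no_high_r / n_studies
print(f"{100 * frac:.0f}% of simulated noise-only studies had no |r| >= .80 anywhere")
```

With these assumed parameters most noise-only studies show no chance correlation as high as .80 anywhere, which is the shape of the result Lieberman and colleagues report; the exact percentage depends entirely on the sample size and voxel count chosen.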
In the next stage, the authors return to Vul et al’s non-independent papers and extract the data. Essentially, as I had argued previously, there may have been a selection bias in the Vul paper. Lieberman and colleagues do indeed find a selection bias, and by taking this into consideration they show that the inflation figure – the overestimate of the effect size – would be 0.12. I didn’t particularly understand the next part. The authors argue that there should be a smaller correlation size for the whole-brain analysis. They use a p-value threshold of 0.25 for the ROI analysis and 0.001 for the whole-brain analysis. It can be seen that if a larger number of comparisons are being made then there should be a more stringent (lower) p-value threshold for choosing values in order to avoid false positives. What is not clear to me, though, is why these particular values were chosen – the assumptions are not spelled out.
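The inflation being estimated here is a ‘winner’s curse’: when only correlations that clear a significance threshold get reported, the reported values systematically overestimate the true effect. A minimal sketch of that mechanism, with an assumed true correlation and sample size (neither taken from the papers):

```python
import numpy as np

rng = np.random.default_rng(1)

true_r = 0.5      # assumed true population correlation
n_subjects = 16   # assumed sample size
n_samples = 5000  # number of simulated samples

cov = np.array([[1.0, true_r], [true_r, 1.0]])
selected = []
for _ in range(n_samples):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects).T
    r = np.corrcoef(x, y)[0, 1]
    # keep only 'significant' results, as a stringent voxel-wise
    # threshold would (r ~ .74 is roughly the critical value for
    # p < .001 two-tailed with n = 16)
    if r > 0.74:
        selected.append(r)

inflation = np.mean(selected) - true_r
print(f"mean reported r = {np.mean(selected):.2f}, inflation = {inflation:.2f}")
```

Only samples that happened to overshoot the true correlation survive the threshold, so the mean reported value sits above the true one – which is why the choice of threshold in a whole-brain search matters for how inflated the surviving correlations are.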
The authors then go on to explain how the statistical methods used can produce an artefact. They argue that the t-test is more likely to detect correlations where they exist if the data is less variable, and they make a correction for this ‘restricted range’. No doubt they could also control for other such properties of the t-test, or indeed of the other correlation measures used in studies. The authors then address the upper limits of the correlations and answer this with a number of points. Essentially they cite studies showing high reliabilities of fMRI and psychometric data, and demonstrate an upper limit of 0.92 in studies from the field, challenging Vul et al’s paper. Given the title of the original paper, which takes a swipe at such a large field, it is not surprising that such results can be readily identified.
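Both adjustments mentioned here correspond to standard psychometric formulas: Thorndike’s case II correction for restriction of range, and the classical attenuation ceiling, under which an observed correlation cannot exceed the square root of the product of the two measures’ reliabilities. A sketch of both (the example standard deviations and reliabilities are illustrative, not figures from the paper):

```python
import math

def range_correction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike case II correction for restriction of range:
    scales an observed correlation up to what it would be in the
    full-variance population."""
    k = sd_unrestricted / sd_restricted
    return (r_restricted * k) / math.sqrt(1 + r_restricted ** 2 * (k ** 2 - 1))

def max_observable_r(reliability_x, reliability_y):
    """Attenuation ceiling: the largest correlation observable
    between two imperfectly reliable measures."""
    return math.sqrt(reliability_x * reliability_y)

# a correlation of .50 observed in a sample with half the full
# spread corrects upward to about .76
print(round(range_correction(0.50, 2.0, 1.0), 2))

# illustrative reliabilities of .90 and .94 cap observable r near .92
print(round(max_observable_r(0.90, 0.94), 2))
```

The ceiling formula makes the upper-limit argument concrete: once the reliabilities of the fMRI measure and the behavioural measure are fixed, no analysis choice can push an honest correlation above that bound.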
This is an interesting response to a provocative article, and I wonder if this paper will have some historical value in due course.
If you have any comments, you can leave them below or alternatively e-mail email@example.com
The comments made here represent the opinions of the author and do not represent the profession or any body/organisation. The comments made here are not meant as a source of medical advice and those seeking medical advice are advised to consult with their own doctor. The author is not responsible for the contents of any external sites that are linked to in this blog.