Review: Correlations and Multiple Comparisons in Functional Imaging – A Statistical Perspective

The article reviewed here is ‘Correlations and Multiple Comparisons in Functional Imaging – A Statistical Perspective’ by Martin Lindquist and Andrew Gelman, which is freely available online.

In the introduction the authors state that their training is primarily outside of the neurosciences, and later in the paper they describe themselves as ‘applied statisticians’. The authors also state that they will be commenting on the Vul paper from a statistical perspective, as the title implies. They begin with a summary of some of the points made in the Vul paper and some of the responses in the field before addressing the points one by one.

Firstly they note that Vul et al criticised the two-step inferential process, before drawing attention to Lieberman et al’s response, in which it is concluded that such analyses are not usually performed within the field. They also comment on the first (inferential) step and second (descriptive) step in the process noted by Lieberman et al, observing that, provided there is control for multiple comparisons, the second step will not alter the fact that there is an underlying correlation, although the reported correlation will be inflated.
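This inflation can be demonstrated with a short simulation. The sketch below is my own illustration, not taken from either paper, and all the numbers in it are hypothetical: every voxel is given the same modest true correlation with a behavioural measure, the voxels passing a stringent threshold are selected, and the mean correlation among the survivors is then reported.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox = 20, 10000   # hypothetical study size
true_r = 0.3                # every voxel has the same modest true correlation

# behavioural score per subject, plus one noisy time series per voxel
behaviour = rng.standard_normal(n_subj)
noise = rng.standard_normal((n_vox, n_subj))
voxels = true_r * behaviour + np.sqrt(1 - true_r**2) * noise

# observed Pearson correlation of each voxel with behaviour
b = (behaviour - behaviour.mean()) / behaviour.std()
v = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
r = v @ b / n_subj

# step 1: select voxels passing a stringent threshold
# step 2: report the mean correlation among the survivors
threshold = 0.75            # hypothetical cutoff
selected = r[r > threshold]
print(f"true r = {true_r}, mean reported r among selected voxels = {selected.mean():.2f}")
```

Even though every voxel’s true correlation is 0.3, the mean correlation reported for the selected voxels comes out far higher, because selection keeps only the voxels whose sampling noise happened to push them upwards.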

The authors then look at the issue of reporting the results in the literature and note that if the reader is not helped to understand the meaning of the statistics then they can misinterpret the correlations:

For these reasons, the practice of simply reporting the magnitude of the reported correlations is somewhat suspect

Here is what I feel is a really important point. Part of the ‘rules’ of science is that theory evolves through a critical process: the theories that remain after critical analysis and challenge by replication or other types of study are by default the successful ones, in a kind of survival of the fittest. However, if the methodology is obscured, then a paper is to some extent shielded from the critical testing ground of the scientific community, which in turn might be expected to slow the rate at which the ‘winning’ theories are successfully identified. Replication becomes more difficult if the methodology cannot be followed exactly, and the findings are then presumably more likely to be accepted unchallenged by the community, which could prolong the acceptance of false theories or beliefs.

The authors then go on to comment on significance testing:

Indeed, it is well known that with a large enough sample size even very small effects will be statistically significant, and statisticians often warn about mistaking statistical significance in a large sample for practical importance

The authors go on to comment on the limitations of effect sizes in both small and large populations, and also stress the importance of describing the underlying assumptions in the models being employed. Again I would argue that this latter point continues the theme of ‘transparency’, which should facilitate the testing process of the scientific community and hasten the ‘selection’ of the best theories. If you help more of the community to understand the steps in the process leading to the conclusions, the community should be more likely to identify any flaws in the arguments.
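The point about significance in large samples is easy to illustrate. In the hypothetical simulation below (my own sketch, not an example from the paper), a true correlation of roughly 0.01 — far too small to matter in practice — is nevertheless highly statistically significant once the sample is large enough:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000                           # a very large (hypothetical) sample
x = rng.standard_normal(n)
y = 0.01 * x + rng.standard_normal(n)   # tiny true effect, r ~ 0.01

# a negligible correlation, yet the p-value is far below any usual threshold
r, p = stats.pearsonr(x, y)
print(f"r = {r:.4f}, p = {p:.2g}")
```

The standard error of a correlation shrinks roughly as one over the square root of the sample size, so with a million observations even a practically meaningless effect sails past p < 0.05 — exactly the significance-versus-importance distinction the authors warn about.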

The authors then come out with this interesting comment:

There are many factors that affect blood flow in the brain and we probably wouldn’t expect the average scans of two different groups of people to be exactly the same

The implication is that subtracting the activity of voxels in one group performing a task from the corresponding voxels in the control group might need to be modified, as there would be many ‘significant’ differences across the brain even after the relevant corrections have been undertaken. They expand upon this and describe how the focus of analysis should be on characterising persistent differences.
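A rough sketch of this point (my own illustration with hypothetical numbers, not an analysis from the paper): if the null hypothesis of exactly equal group means is false at every voxel, even trivially, then with enough subjects many voxels will be ‘significant’ even after a Bonferroni correction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_group, n_vox = 5000, 500   # hypothetical: very many subjects per group

# every voxel differs slightly between groups --
# the null of *exact* equality is false everywhere
tiny_shift = rng.normal(0, 0.05, n_vox)
group_a = rng.standard_normal((n_per_group, n_vox))
group_b = rng.standard_normal((n_per_group, n_vox)) + tiny_shift

# independent-samples t-test at every voxel, Bonferroni-corrected
t, p = stats.ttest_ind(group_a, group_b, axis=0)
n_sig = (p < 0.05 / n_vox).sum()
print(f"{n_sig} of {n_vox} voxels significant after correction, despite trivial differences")
```

Multiple-comparison corrections control false positives under the null of no difference at all; they do not stop a sufficiently large study from flagging differences that are real but tiny, which is why the authors argue the focus should shift to characterising persistent, meaningful differences.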

Finally the authors propose their own ‘multilevel’ model for use in fMRI analysis, arguing that the voxel correlations should be corrected using a measure of the distribution of correlations across the entire voxel population or within a region of interest. This rests on the assumption that the activity in all of the regions represents an identical phenomenon and that looking at the distribution of correlations helps in interpreting individual voxel activity. Ultimately these types of debate might well be settled by experimental evidence. If intraoperative techniques or combined imaging methods can be used, perhaps we might be able to draw on additional sources of information to make sense of the voxel activity patterns seen in fMRI.
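The paper itself should be consulted for the actual model; as a rough illustration of the partial-pooling idea behind it, the sketch below (my own, with hypothetical numbers, using a simple empirical-Bayes shrinkage rather than the authors’ full multilevel model) pulls each voxel’s Fisher-transformed correlation toward the mean of the whole voxel population, by an amount set by the estimated between-voxel variance:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_vox = 20, 500

# simulated per-voxel sample correlations around varying true values
true_r = np.clip(rng.normal(0.2, 0.1, n_vox), -0.9, 0.9)
z_se = 1 / np.sqrt(n_subj - 3)   # standard error of Fisher-z for n subjects
z_obs = np.arctanh(true_r) + rng.normal(0, z_se, n_vox)

# empirical-Bayes partial pooling on the Fisher-z scale:
# estimate the between-voxel variance, then shrink each voxel
# toward the grand mean in proportion to how noisy it is
grand_mean = z_obs.mean()
between_var = max(z_obs.var() - z_se**2, 0)   # method-of-moments estimate
shrink = between_var / (between_var + z_se**2)
z_pooled = grand_mean + shrink * (z_obs - grand_mean)
r_pooled = np.tanh(z_pooled)

print(f"raw max r = {np.tanh(z_obs).max():.2f}, pooled max r = {r_pooled.max():.2f}")
```

The extreme raw correlations — the ones most likely to be noise — are pulled in the most, which is the sense in which the distribution of correlations across the population of voxels helps in interpreting any individual voxel.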

Responses

If you have any comments, you can leave them below or alternatively e-mail justinmarley17@yahoo.co.uk

Disclaimer

The comments made here represent the opinions of the author and do not represent the profession or any body/organisation. The comments made here are not meant as a source of medical advice and those seeking medical advice are advised to consult with their own doctor. The author is not responsible for the contents of any external sites that are linked to in this blog.
