The featured article is ‘Understanding the Mind by Measuring the Brain’ by Lisa Feldman Barrett, which is freely available here.
The paper is divided into three main parts: distinguishing reliability from validity, the elusive nature of error, and within-subject dependencies. In distinguishing reliability from validity, Barrett examines the underlying principles of measurement: that measurements should be reliable (stable across repeated testing) and valid (testing what we want them to test). Barrett argues that just because voxel activity correlates with a psychometric measure, it does not follow that the brain region correlates with the psychological construct. The correlation might instead be reliably reflecting some other connection between the two. I will give an example, bearing in mind that this is my own interpretation. Suppose a subject reads a question and gives a response. Each time they give that response – say it is that they have feeling X at that point – there is voxel activity in a region Y (a simplification, as the analysis would involve a number of transformations). The immediate conclusion might be that region Y codes for feeling X, particularly as the association is highly correlated across subjects. However, it might be that region Y is active because the subjects are holding a visual image of the question in mind while thinking of their response. Thus there is a reliable measure, but it does not relate to the construct the question was designed to tap.

In the second section of the paper, on error in measurement, Barrett examines a number of different concepts, starting with the equation X = T + E, where X is the observed measure, E is the error and T is the true score. The true score variance is later refined so that it also includes some systematic error (another construct whose variance is reliably measured).
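To make this concrete for myself, here is a small toy simulation (my own sketch, not from the paper): two repeated measurements share the same true score T plus independent error E, so they correlate highly with each other (reliability), even though T is dominated by a nuisance source such as imagery of the question rather than by the construct of interest (validity).

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n = 1000
construct = [random.gauss(0, 1) for _ in range(n)]  # the feeling we want to measure
nuisance = [random.gauss(0, 1) for _ in range(n)]   # e.g. holding the question in mind

# T is mostly driven by the nuisance source, only weakly by the construct
true_score = [0.2 * c + 1.0 * nz for c, nz in zip(construct, nuisance)]

# X = T + E on two occasions, with independent error each time
x1 = [t + random.gauss(0, 0.3) for t in true_score]
x2 = [t + random.gauss(0, 0.3) for t in true_score]

print("reliability (x1 vs x2):    ", round(pearson(x1, x2), 2))         # high
print("validity (x1 vs construct):", round(pearson(x1, construct), 2))  # low
```

The measure is highly stable across repeated testing yet barely tracks the construct, which, as I read it, is exactly Barrett's point that reliability does not imply validity.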
Barrett goes on to distinguish different types of reliability and validity, and we are able to see that great care and precision must be taken in using these terms. Indeed, these concepts present themselves as useful routine tools in the analysis of many types of scientific paper. In looking at construct validity, for example, Barrett offers us this useful insight:
‘In principle, construct validity can only be established by showing that a measurement is associated with an interlocking set of variables (a nomological net) that is dictated by theory; it can never be established with a single validity coefficient‘
Barrett also comments on Vul and colleagues’ suggestion about a split-half analysis thus:
‘then it is not clear that we can avoid capitalizing on chance by splitting a data set in half, so that half of the data from all participants is used to determine reliability…and the other half of the data can be used to estimate validity…, as Vul et al. suggest‘
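As I understand the split-half idea under discussion, half of each participant's data is used for one estimate and the other half for another. A schematic sketch of this (my own illustration, not Vul et al.'s actual pipeline, with made-up subject and trial counts) might look like the following:

```python
import random

random.seed(2)

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n_subjects, n_trials = 30, 40

# simulate trial-level responses: a stable subject effect plus trial noise
subject_effect = [random.gauss(0, 1) for _ in range(n_subjects)]
trials = [[s + random.gauss(0, 1) for _ in range(n_trials)]
          for s in subject_effect]

# split each subject's trials into odd and even halves
half_a = [sum(t[0::2]) / (n_trials // 2) for t in trials]  # half for reliability
half_b = [sum(t[1::2]) / (n_trials // 2) for t in trials]  # half for validity

print("split-half correlation:", round(pearson(half_a, half_b), 2))
```

Barrett's worry, as quoted above, is that even this split may not avoid capitalising on chance, since both halves still come from the same participants and so share their dependencies.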
Barrett goes on to give an example of the systematic error that can lead to spuriously high correlations. The example involves the use of binary response sets – i.e. true/false responses – where it is argued that ‘scores on the two tests would be more highly correlated because they share a response format’. This was a tricky point for me to understand. The best interpretation I have is that if the number of response options on a test is decreased, there is less possible variation in responses and thus the variance is reduced; the variance then becomes partly a function of the response set, i.e. we are to some extent measuring the tool. I may have misinterpreted this.
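Here is a toy model of my interpretation (the mechanism is entirely my own assumption, not Barrett's worked example): give every respondent a response-style component, such as a tendency to endorse ‘true’, that feeds into both tests. Two tests measuring unrelated constructs then correlate simply because they share the response format:

```python
import random

random.seed(3)

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n = 2000
style = [random.gauss(0, 1) for _ in range(n)]    # response style, e.g. endorsing "true"
trait_a = [random.gauss(0, 1) for _ in range(n)]  # construct behind test A
trait_b = [random.gauss(0, 1) for _ in range(n)]  # unrelated construct behind test B

# each test score mixes its own construct with the shared response-format effect
test_a = [a + 0.8 * s for a, s in zip(trait_a, style)]
test_b = [b + 0.8 * s for b, s in zip(trait_b, style)]

print("constructs:", round(pearson(trait_a, trait_b), 2))  # near zero
print("tests:     ", round(pearson(test_a, test_b), 2))    # inflated by shared format
```

The test scores correlate even though the constructs do not, which is the kind of shared-method variance I take Barrett to be warning about.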
In the final section, Barrett covers within-subject dependence, essentially arguing that if we take several measurements from the same person, they are going to be interdependent to some extent. This is made quite clear through the example of the neuron being part of the brain from which the phenomenological construct emerges, so there is bound to be some relationship between the two. The suggested solution is that this dependence can be modelled statistically.
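One standard way of quantifying such dependence statistically is the intraclass correlation: the share of the total variance in repeated measurements that lies between people rather than within them. The sketch below is my own illustration of that general idea, not a method described in the paper:

```python
import random

random.seed(4)

n_people, n_reps = 200, 5

# repeated measurements: a stable person-level component plus occasion noise
person = [random.gauss(0, 1) for _ in range(n_people)]
data = [[p + random.gauss(0, 1) for _ in range(n_reps)] for p in person]

grand = sum(x for row in data for x in row) / (n_people * n_reps)
means = [sum(row) / n_reps for row in data]

# one-way ANOVA mean squares: between-person and within-person
msb = n_reps * sum((m - grand) ** 2 for m in means) / (n_people - 1)
msw = sum((x - m) ** 2
          for row, m in zip(data, means)
          for x in row) / (n_people * (n_reps - 1))

# intraclass correlation: variance due to stable person differences
icc = (msb - msw) / (msb + (n_reps - 1) * msw)
print("intraclass correlation:", round(icc, 2))
```

Because the simulation gives the person-level component and the occasion noise equal variance, about half the variance is between people; measurements from the same person are interdependent, and that dependence can be modelled rather than ignored.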
Barrett’s paper is thought-provoking and probably one of the more difficult of the responses to Vul et al to get to grips with, partly because it deals with tricky subject matter that veers into almost philosophical explorations of psychological measurement, which of course is where some of this discussion needs to take place. Indeed, it is perhaps no accident that Wilhelm Wundt, whom Barrett writes about earlier in the paper, wrote extensively about philosophy in addition to his papers on medicine and psychology. This paper will repay close study, not just for its immediate relevance to Vul et al’s paper but also for the wider methodological issue of how to understand human psychological functioning using quantitative analysis. It is tempting also to ask how qualitative methodology might be used in imaging studies.
If you have any comments, you can leave them below or alternatively e-mail firstname.lastname@example.org
The comments made here represent the opinions of the author and do not represent the profession or any body/organisation. The comments made here are not meant as a source of medical advice and those seeking medical advice are advised to consult with their own doctor. The author is not responsible for the contents of any external sites that are linked to in this blog.