The paper reviewed here is ‘Automated Detection of Brain Atrophy Patterns Based on MRI for the Prediction of Alzheimer’s Disease‘ by Plant and colleagues, freely available here. There are two things about this paper I’d like to mention. The first is that I didn’t completely understand it. I could probably get to grips with it in full, but only with a few weeks of extra reading around the topic and discussion (maybe). I understood enough to get the gist of it, though. The paper has relevance to the practice of older adults’ psychiatry if applications such as those described here become widely available, which isn’t the case at the moment. It shouldn’t be surprising that this is a complicated paper to understand. After all, it’s by an international collaboration of multidisciplinary specialists in psychiatry, neuroimaging science, neuroradiology and computer science. Potentially, therefore, the audience lies in those disciplines. At the same time, however, the audience would need knowledge traversing a number of disciplines, and I suspect there would be an extremely limited number of people able to fully understand the paper with no prior preparation. Rather than this being a fairly esoteric subject destined to end up as one more paper collecting dust, however, it has potentially important clinical implications. The authors write:
‘The extracted AD clusters were used as a search region to extract those brain areas that are predictive of conversion to AD within MCI subjects. The most predictive brain areas included the anterior cingulate gyrus and orbitofrontal cortex. The best prediction accuracy, which was cross-validated via train-and-test, was 75% for the prediction of the conversion from MCI to AD‘ (my underlining)
The essence of what the researchers were doing was to identify a group of subjects likely to develop Alzheimer’s Disease and then image their brains using an MRI scanner. They needed to compare these with two other groups: those who already had Alzheimer’s Disease, and healthy controls. They then used a number of sophisticated analysis techniques to discriminate between those with Mild Cognitive Impairment (MCI) who went on to develop AD and those who did not. They identified individual brain regions that discriminated between the subjects and even gave a predictive accuracy of 75%.
However, the above is contingent on a number of assumptions which can be individually questioned.
Firstly, what can be said about the subjects in the study? Although some demographic details are given, such as the average age, a number of other factors aren’t clear from the article (there is an associated data article which I wasn’t able to access at the time of writing – perhaps the data might have been included there). For instance: were there any concurrent medical illnesses? What were the years of education, blood pressure, concurrent medication and so on? I assume the subject group was German, given the approval by a Munich-based ethics committee, although this is implicit rather than explicit in the paper.
The next point is the bottom line. There are nine people who converted from MCI to AD and fifteen who didn’t. Essentially, that’s the basis for the comparisons. It’s an obvious and often repeated point, but a larger, well-characterised sample would be expected to give greater reliability as well as a better idea of generalisability.
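To put that in perspective, here’s a quick back-of-the-envelope sketch (mine, not the paper’s) of how wide the 95% confidence interval around a 75% accuracy figure is when it is estimated from only 24 subjects:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 75% accuracy on the 24 MCI subjects (9 converters, 15 non-converters)
lo, hi = wilson_ci(18, 24)
print(f"95% CI for 75% accuracy on n=24: {lo:.2f} to {hi:.2f}")
```

On these numbers the true accuracy could plausibly sit anywhere from the mid-50s to the high 80s, which is why the sample size matters so much.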
The MRI scanner is 1.5 T. The higher the field strength, the higher the possible image resolution. The subjects’ images were normalised to an anatomical template. There were some additional steps which involved ‘masking’ the images to remove the CSF, leaving just white and grey matter. I didn’t understand the process used to achieve this. I’ve made this point elsewhere, but where papers are highly technical it would be good for the research group to create a video, upload it to YouTube (for free) and link to it in the article, so the interested reader can try to get up to speed quickly.
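The paper doesn’t spell the masking step out for non-specialists, but in SPM-style pipelines a common approach is to segment each image into grey matter, white matter and CSF probability maps, then zero out voxels where CSF is the most probable tissue. A toy numpy sketch of that general idea (my reconstruction, not the authors’ procedure):

```python
import numpy as np

# Hypothetical tissue probability maps from segmentation (values in [0, 1]):
# one probability per voxel for grey matter, white matter and CSF.
rng = np.random.default_rng(0)
shape = (4, 4, 4)                # toy volume; real scans are far larger
grey = rng.random(shape)
white = rng.random(shape)
csf = rng.random(shape)
image = rng.random(shape) * 100  # toy intensity volume

# Keep a voxel only where grey or white matter is more probable than CSF.
mask = np.maximum(grey, white) > csf
masked_image = np.where(mask, image, 0.0)

print("voxels kept:", int(mask.sum()), "of", mask.size)
```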
The authors then explain the data analysis. The section on feature selection was unclear to me and, although people in the field might read it easily, I struggled with the entropy equation. I had understood entropy as the tendency of an energy gradient to equilibrate over time, or as the information-theoretic analogue of that idea; presumably the authors mean something more specific here, and the paper would benefit from an explanation as suggested above. There are references to other papers, but this practice of linking to papers behind paywalls is either costly in resources or unhelpful (it means there is a hidden cost wherever a fee is required), although it is probably not an issue in university departments with the appropriate subscriptions (even then, some referenced papers can be in obscure journals outside a university’s subscriptions). After reading a bit further, I’m not sure I understand what the authors refer to as feature selection, although the similar term feature detection is used in neural network terminology to indicate patterns in information identified by a network architecture. If that were the case here, the authors might be referring to the network’s learning algorithm when they talk about entropy, although it remains unclear to me.
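For what it’s worth, a plausible reading (my interpretation, not confirmed by the paper) is that the entropy in question is Shannon entropy: the uncertainty in the class labels, with candidate features ranked by how much knowing the feature reduces that uncertainty (the information gain). A toy sketch:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values, threshold):
    """Reduction in label entropy when a feature splits subjects at a threshold."""
    left = [l for l, v in zip(labels, feature_values) if v <= threshold]
    right = [l for l, v in zip(labels, feature_values) if v > threshold]
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# Toy example: a voxel whose value happens to separate the two groups cleanly.
labels = ["AD", "AD", "AD", "HC", "HC", "HC"]
voxel = [0.2, 0.3, 0.25, 0.8, 0.9, 0.85]
print(information_gain(labels, voxel, threshold=0.5))  # 1.0 bit: a perfect split
```

A feature-selection step would then keep the voxels (or regions) with the highest gain, though whether this is exactly what the authors did I can’t say.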
Moving on to clustering, the researchers write that they are using an approach to identify ‘highly discriminatory’ voxels and ‘remove noise’. Presumably they determine this by choosing conversion to AD as the outcome measure. However, on scanning through this section I was unable to find the terms AD or MCI; instead it is an abstract, generic mathematical discussion using language that is probably standard in a highly specialised corner of neuroimaging science, but it doesn’t gel with the language used in the introduction.
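My best guess at what’s going on (hedged, since the notation lost me too): single voxels can look discriminative by chance, so grouping discriminative voxels into spatially coherent clusters and discarding isolated ones is one way of removing noise. A toy 2D illustration of that idea:

```python
import numpy as np

# Toy 2D "discriminativeness" map: a coherent blob plus scattered noise voxels.
score = np.zeros((8, 8))
score[2:5, 2:5] = 0.9          # a genuine cluster of discriminative voxels
score[0, 7] = 0.95             # an isolated, probably spurious voxel
score[6, 1] = 0.92             # another isolated voxel

mask = score > 0.5             # threshold on individual discriminative power

# Keep a voxel only if at least two of its 4-neighbours are also above threshold,
# which removes isolated voxels while preserving the coherent cluster.
padded = np.pad(mask, 1)
neighbours = (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1]
              + padded[1:-1, :-2] + padded[1:-1, 2:])
cleaned = mask & (neighbours >= 2)

print("before:", int(mask.sum()), "voxels; after:", int(cleaned.sum()))
```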
I found the explanation of classification slightly easier to understand: it relates to the AD/MCI categories with a little reading between the lines, and the description of the analysis is consistent with neural network architectures.
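To make the general shape of such a classification step concrete, here is a deliberately simple stand-in, a nearest-centroid rule. This is not the authors’ classifier; it just illustrates the train-then-assign logic:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Learn one centroid per class from training feature vectors."""
    classes = sorted(set(y))
    return {c: X[[i for i, lab in enumerate(y) if lab == c]].mean(axis=0)
            for c in classes}

def nearest_centroid_predict(centroids, x):
    """Assign a subject to the class with the closest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy feature vectors (e.g. grey-matter values in two discriminative regions).
X_train = np.array([[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.8]])
y_train = ["AD", "AD", "HC", "HC"]

centroids = nearest_centroid_fit(X_train, y_train)
print(nearest_centroid_predict(centroids, np.array([0.25, 0.15])))  # → AD
```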
With a limited amount of time to read the paper (a few hours), I’ve moved quickly through the training and visualisation sections. These sections quickly move into symbols. The problem with these symbols is that they make sense to someone in the very specialised field but are next to meaningless for people outside it. Again, an animation or talk-through video would be helpful here. Symbols tend to be an abstract representation acquired once a shared understanding has been agreed – a useful shorthand for communication within the field. The authors might question why this should be communicated to someone outside the field – after all, one purpose of the methods section is to communicate information to other groups for replication. I would argue that it’s necessary for clinicians to understand the reasoning behind the ‘knowledge’ they will be using to make clinical decisions when such approaches become more widespread.
SPM settings were given and then the authors report the method used for assessing white matter lesions.
In the results section, by the time I reached Table 2 I had two thoughts:
1. The results here seem impressive – high accuracy in the 90s, with good sensitivity and specificity.
2. How did they get to this stage? (This relates to the discussion above.)
Again in Table 4 (AD v MCI):
1. These results are impressive and I recognise the brain regions involved.
2. How did they get to this stage?
The significance of the results is easy enough to understand. Without fully understanding how the researchers got to this stage, however, I am left with three options:
1. Make no decisions. Seems like a waste of 2 hours.
2. Reject the results. Seems a shame as a lot of work has gone into this and the researchers will undoubtedly be competent in their respective fields.
3. Accept the results. Pragmatism. Unfortunately, if I didn’t understand the process by which the results were arrived at, then I have to rely on … blind faith.
The same applies to Table 5.
Moving onto the discussion (I skipped the other bits that weren’t as interesting), the researchers write that
‘Using AD and HC as training data and MCI as test data, we achieved an accuracy of 50%–75% to predict conversion into AD‘
The authors also acknowledge the small sample size. In the above, the AD and control groups have been used to train the software thus making use of all subjects in the study and not just the 24 with MCI.
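The logic of that design, sketched with made-up numbers rather than the paper’s data: train a decision boundary on the clear-cut AD and HC cases, then ask which side each MCI subject falls on and compare against who actually converted:

```python
import numpy as np

# Made-up feature values (e.g. atrophy scores): AD cases high, controls low.
ad_scores = np.array([0.8, 0.9, 0.85])
hc_scores = np.array([0.1, 0.2, 0.15])
boundary = (ad_scores.mean() + hc_scores.mean()) / 2   # midpoint decision rule

# MCI subjects: predicted to convert if they fall on the AD side of the boundary.
mci_scores = np.array([0.7, 0.3, 0.9, 0.2])
converted = np.array([True, False, True, True])        # hypothetical outcomes

predicted = mci_scores > boundary
accuracy = (predicted == converted).mean()
print(f"prediction accuracy on MCI subjects: {accuracy:.0%}")
```

In the real study the boundary comes from a trained classifier rather than a simple midpoint, but the principle – never training on the MCI subjects whose outcome is being predicted – is the same.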
So there are some potentially useful results notably a complex multidisciplinary approach to discriminating people who convert from MCI to AD based on MRI and computer learning algorithms. Obviously if these results are valid then it would be nice to have this set-up available in a research setting with a focus on trialling interventions in the high-risk group. Papers like this are going to become increasingly commonplace. If a research group has an effective means for predicting who will convert from MCI to AD then it’s going to be very important and will most likely be repeatedly used and refined. Then there will come a point at which the clinicians will have to get up to speed with this approach. Only this runs into the problems described above. There has to come a point at which each step in the process is translated into an understandable format accessible to clinicians. If not then the clinician in the future will end up receiving a few numbers, without being able to argue about the underlying reasoning or being able to point out errors and exceptions. In that case, the clinician becomes deskilled in the decision-making process. This is the risk of using ever more sophisticated technology and research paradigms. The clinician still needs to be ‘connected’ to the increasingly complex underlying process.
There are a number of questions I still have
1. What are some of the other characteristics of the sample, e.g. comorbid illness?
2. When are papers going to be rated according to complexity?
3. When are complex papers going to link to videos explaining the methodology/results?
4. Will papers get more complex as even more disciplines become involved in large projects with multistage methods?
5. Who is the ideal audience for this paper and which people shouldn’t be reading this paper? (I think the results here are relevant to clinicians working in the field of dementia although perhaps it would be more relevant as the described approach becomes more accessible).
6. Would these results be more interesting if we had baseline MRI scans decades before the subjects developed MCI for comparison purposes?
7. If the reader has to take a leap of faith in accepting the results of a complex study, on what basis is this made? Is it a simple reduction to the ‘calibre’ of the researchers involved (the university they work at, their title, previous publications and so on), and if so, is this a reliable approach?
The comments made here represent the opinions of the author and do not represent the profession or any body/organisation. The comments made here are not meant as a source of medical advice and those seeking medical advice are advised to consult with their own doctor. The author is not responsible for the contents of any external sites that are linked to in this blog.