Q&A with Dr. John Harrison - Part 3: Focus on Raters

December 16, 2014 Heather Bilinski

This is the third part in a four-part series written following a Q&A session with Dr. John Harrison. During our conversation with Dr. Harrison, we discussed the current problems with Alzheimer's clinical trials and the reasons behind the very low success rates for treatments. We then talked about some of the better instruments that can be used and how technology will help to turn the situation around in the future.

Dr. John Harrison is Honorary Senior Lecturer in the Dept. of Medicine at Imperial College London and Principal Consultant at Metis Cognition Ltd. Metis currently advises more than 40 pharmaceutical companies on the selection and successful integration of cognitive testing into their development programs.

John is a member of the American Psychological Association, holds Chartered Psychologist status with the British Psychological Society, and Chartered Scientist status with the Science Council. He has authored/co-authored more than 60 books and scientific articles, including the popular neuroscience book ‘Synaesthesia: The Strangest Thing’.

In case you missed it, read Part I and Part II of this Q&A.

Part 3: Focus on Raters

Q: So considering the reality that traditional tests will be used for Alzheimer’s trials now and in the near future, what is your view on how to at least ensure that they deliver the most reliable and accurate results?

Harrison: My advice, quite simply, is that if you want to run a successful study then you should pick really good raters. Any precaution you take after this is distinctly second best. A significant challenge at the moment is that sponsors tend to pick outcome measures that are very rarely used in clinical settings such as memory clinics. Consequently, many scales are not well known to site raters. In some cases, with measures like the MMSE, NPI and various ‘activities of daily living’ scales, we can turn most people into competent raters relatively quickly. However, for scales like the CDR-SB and the ADCS-CGIC, competent administration is about having the right clinical skills, experience of the disease, and extensive knowledge of the instrument. There is no quick fix for this; competence comes through experience. This point was recently highlighted by the EMA in their new guidance note, which states that ‘The CDR-SB scoring requires extensive training’. Note that they also commented that the CDR-SB ‘is subject to variability among ethnicity and languages’. As Jeff Cummings and others recently pointed out, in the last 13 years of Alzheimer’s drug development we have had only one successful registration - a 99.6% failure rate. Current trials are large, long and expensive. Some of the studies I am involved with run for two to three years and enrol literally thousands of patients. So, with that in mind, we must pay attention to issues like the quality of the assessors.

Q: So specifically, what are the most pressing challenges related to the quality of raters?

Harrison: Well, let’s assume the only option we have from a sponsor is to use these traditional measures. Then it is really about making sure they are administered properly. Unfortunately, it is fairly obvious to me and a number of others that these tests often are not given in the proper way, which seriously compromises your chances of seeing efficacy. So the focus needs to be on making sure the sites are proficient at administering the tests. One way to address that is to offer training for the raters, but as I said earlier, this works if the scale is relatively straightforward. For key measures like the CDR-SB and CGIC, however, if you don’t have the right person administering the test in the first place, it is very difficult to quickly turn them into a good rater, which raises serious questions about who should be allowed to give tests in a study.

Q: Do you think the use of eLearning to train raters and flag inaccurate data will help to alleviate some of the problems?

Harrison: Yes, absolutely. This approach is already employed by most of the better rater training companies. CRF Health is in the vanguard of smart data collection, so they have a lead on this issue. eLearning certainly holds the potential to create more consistency and efficiency when it comes to training raters. If you can flip the classroom and have people study online before they even show up for a training session, you are ahead of the game. Then, at the investigator meeting, the focus can be on checking the raters’ thinking and giving them the chance to ask questions that will help determine whether they are proficient.

Q: Wouldn’t a move toward centralized testing, in which rater and patient interaction is recorded, lead to improvements in rater quality and, in turn, the efficacy of results?

Harrison: Not necessarily. Certainly, the inclination is to say this is a quality assurance issue, so the thinking becomes that we need to be more vigilant about keeping an eye on the raters. Therefore, the move over the last few years has been to take a centralized rating approach in which the assessment is audiotaped and in some cases videotaped. That way, you can see if a test was administered ‘by the book’. I think that intuitively this is appealing, but use of this approach should be evidence driven, and to the best of my knowledge there is currently no critical data that speaks clearly in its favour. Not universally, but more often than not, raters and patients do not particularly enjoy the experience of being audiotaped or videotaped, and raters can sometimes become so focused on ‘by the book’ test administration, for fear of facing criticism, that they miss some of the more interesting aspects of the patient’s performance and behavior that are an important element of the instrument’s administration. A recent paper by Khan et al. on these issues in the context of trials of putative anti-depressant drugs highlights these risks.

Q: So what are some ways to improve efficacy with the rater-dependent outcome measures currently in use?

Harrison: My perfect solution, and because it is perfect it may well be flatly impractical, is to create a model that is focused on attracting and using people who are, or have the potential to be, expert raters. We could identify sites that already have good, experienced raters and consistently use them. Another possibility is to appoint an external expert to oversee the administration of the scales at a site; this person would administer the program, monitor the raters and be in charge of quality control. We should also look at the model with regard to who is responsible for hiring and monitoring the raters. Currently, the sponsor assumes responsibility for making sure the raters are competent, but that might not be the most logical approach. I think that shifting the burden of hiring and assessing raters onto those conducting and managing the trial might lead to better results.

Stay tuned for the final part of this series.

Best regards,

Heather Bilinski
Associate Director of Marketing, CRF Health
