eCOA Insights and Trends: What is Burdensome?

June 13, 2017

FULL TRANSCRIPT 

eCOA Insights and Trends: What is Burdensome?

Speaker: Paul O’Donohoe, CRF Health

MODERATOR

So next up, you’ve already heard from him. My favourite beardy weirdy scientist. Some of you might have seen him a year ago when he looked slightly different—longer hair, bigger beard, like a person off the street. He now looks respectable. I’d like to introduce Paul O’Donohoe, who is our Director of Health Outcomes. Paul, if you’d like to come up, please.

PAUL O’DONOHOE

Thank you, Alex, and just to reiterate what Alex said at the very beginning: it’s really great to see everyone here. This is a really good turnout and a really interesting mix of people in the room, so I appreciate you making the time to join us for the next couple of days. We have some really interesting presentations, and there’s some really interesting experience sitting with you in the room, so please do share and speak up.

I have a bit of a confession to make about my presentation. When we were putting the agenda together, I knew exactly what I wanted to say. Then I actually sat down to collate everything and realized I’d changed my mind as I gathered all this information and started putting it into the slides. So my presentation isn’t exactly what’s in the agenda, but I hope you can follow the thought process I went through as I got to the point of changing my mind.

And what I’m going to be talking about a bit is sharing some data that we’ve been taking a look at. We’ve been introducing a new reporting system within CRF, and there are a few other initiatives underway within the company, so we’ve been taking a look at some of the metadata specifically that we’ve gathered. For those who aren’t aware, metadata is the data that sits on top of the day-to-day data that we’re collecting on our electronic systems, so the PRO data that Paul was talking about. The metadata is more about how people use the electronic systems themselves: things around compliance, how often they complete the questionnaires, and how long it takes patients to actually fill in the questionnaires.

So what I’m going to talk about today is exactly that: how long it takes to complete electronic versions of questionnaires. I’m going to talk about what might drive the differences that we see in these completion times, and what it might be about the systems that makes people take longer or shorter to complete these questionnaires. I’ll cover how that might relate to compliance, meaning whether participants complete their questionnaires when they’re meant to, and how all of that relates to this concept of patient burden. Paul has already touched on a lot of this and highlighted how important that issue of patient burden is and the impact it can have on your trial. And I’m going to try and tie that together with some of this information that we can gather on the electronic systems, and take a broad overview of exactly what it is that I think we can use this metadata for within clinical trials.

So as I mentioned, we’ve been looking at some of the metadata within our system. What I’m going to be presenting is all still very raw, for which I apologize; some of these graphs aren’t the prettiest. In regards to the completion times: we have triggers within our electronic system so that when you enter a questionnaire, you get a form-open data point, and when you press save at the end, once you’ve finished the questionnaire, you get a form-close data point. We can compare those two data points, and as a result we get a completion time metric: how long you spent within that questionnaire completing it.
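
As a rough illustration of how a metric like that can be derived, here is a minimal sketch that pairs form-open and form-close events per subject and subtracts the timestamps. The schema and values are hypothetical, invented for this example rather than taken from the actual system.

```python
# Sketch: derive a completion-time metric from form-open/form-close events.
# The schema (subject_id, form, event, timestamp) is illustrative only.
import pandas as pd

events = pd.DataFrame({
    "subject_id": [101, 101, 102, 102],
    "form":       ["EQ-5D", "EQ-5D", "EQ-5D", "EQ-5D"],
    "event":      ["form_open", "form_close", "form_open", "form_close"],
    "timestamp":  pd.to_datetime([
        "2017-01-10 09:00:05", "2017-01-10 09:02:35",
        "2017-01-10 10:15:00", "2017-01-10 10:18:40",
    ]),
})

# Put each subject's open and close side by side, then subtract.
wide = events.pivot_table(index=["subject_id", "form"], columns="event",
                          values="timestamp", aggfunc="first")
wide["completion_secs"] = (wide["form_close"] - wide["form_open"]).dt.total_seconds()
print(wide["completion_secs"])  # 150.0 and 220.0 seconds
```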

And so we wanted to take a look at how long patients were spending completing their questionnaires, and to begin with we decided to look at the EQ-5D, a commonly used questionnaire. We had a theory that we might see differences between how long it took people to complete questionnaires on a handheld device versus a tablet device: a handheld device being something like a smartphone, a tablet device being something like an iPad, with much larger screen real estate. So as I said, the first thing we took a look at was the EQ-5D. This is data from five separate studies, and there is not much going on there in regards to differentiation between tablet and handheld. This is a mixture of tablet and handheld, and it’ll be much clearer in the next graph which is which. The only thing that really jumped out was what seems to be a very significant reduction in time to completion over successive administrations. Along the bottom you have the number of times the questionnaire was administered for these different studies. It’s worth bearing in mind that these aren’t equidistant time points; the gap from administration 1 to administration 2 will differ on a study-by-study basis. But we felt it was okay to reduce this to just administration number, because really we’re interested in the number of times you’ve interacted with the questionnaire rather than how long passed between administrations.

[05:18]

So we chopped this data a bit differently to try and see whether there were differences between tablet and handheld, and basically in the EQ-5D we weren’t really seeing any differences in how long it was taking. This graph is a representation of how long it took participants to complete the questionnaire at Visit 1, which is the blue bar, versus how long it was taking them at Visit 4. We figured that was a long enough time for them to become familiar with the device and with answering the questions. And really we’re not seeing any difference between tablet and handheld.
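
The comparison being described, completion time at Visit 1 versus Visit 4 split by modality, is straightforward to reproduce once you have per-completion records. Here is a sketch under assumed data; the numbers are invented, loosely echoing figures quoted later in the talk.

```python
# Sketch: median completion time at Visit 1 vs Visit 4, by modality.
# The records and numbers are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "modality":        ["tablet", "tablet", "handheld", "handheld",
                        "tablet", "tablet", "handheld", "handheld"],
    "visit":           [1, 4, 1, 4, 1, 4, 1, 4],
    "completion_secs": [310, 240, 409, 330, 300, 235, 488, 360],
})

# Rows: modality; columns: visit; cells: median completion time in seconds.
summary = (df.groupby(["modality", "visit"])["completion_secs"]
             .median()
             .unstack("visit"))
print(summary)
```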

Fair enough: the EQ-5D is one of the simplest questionnaires you might see in a clinical trial, so maybe we needed to look at something a bit more complicated. So we decided to take a look at the AQLQ, a pretty commonly used asthma questionnaire. We also got a bit funkier with our graphs. The graph on the top is study duration in months, represented by the coloured bars, which is how long these five different studies are. Along the bottom is that visit-number scale again. Again, this is a mix of tablet and handheld, and the tablet is the line in the bottom left, the blue one. So now it’s starting to look like maybe there is something a bit more interesting coming out here. And when we do the Visit 1 versus Visit 4 comparison, you can see the tablet, which is on the lefthand side here, certainly has a much shorter completion time: 310 seconds versus 409, 488, etc. And it drops even lower at Visit 4.

So it looks like there potentially is something going on here, in that it’s taking people less time to complete the questionnaires on a tablet device compared to a handheld device. But then we got to thinking: was that something inherent within the modality? Did people just find it harder to interact with the handheld device, for example, and so they were a bit slower using it? Or was it in fact something about how the questionnaires were laid out? I’m sure some of you are joining the dots here: on the tablet device we very often have multiple items on the screen, as opposed to a handheld device, where we’re limited to a single item on the screen. So maybe there’s something about the fact that on the tablet you have multiple questions you can interact with before you have to go to the next screen, versus the handheld, where you have to tap Next to go to the next screen every time.

So what we really needed, once we’d thought this through, was a questionnaire administered on the tablet device that had two different formats: a multi-item version and a single-item version. Thankfully we had just that in the data set with the SF-36. Before about 2012, our tablet implementations typically looked very much like the paper version, which is the version on the left. Then after approximately 2012, when the guys over at QualityMetric formalized their requirements for deploying the questionnaire electronically, we moved to a single-item-per-screen implementation on the tablet device. So we were able to compare completion times for the single-item versus the multi-item format, and it came out in the data we had that it was taking participants less time to complete the multi-item-per-screen version than the single-item-per-screen version. So it does seem like there’s something there: it is in fact quicker to answer a few questions on the screen and then go to the next screen.

This wasn’t necessarily what I wanted to see in the data, if I’m honest. We are moving more and more towards single-item administration, even on tablet. It reduces the variability seen between devices; there are so many different devices out there used in clinical trials that keeping to a single item on a screen really simplifies things. But it’s worth pointing out that the disparity between the completion times isn’t gigantic: it was around 570 seconds for the multi-item version versus just under 600 seconds for the single-item-per-screen version at Visit 1. And they track each other, with time reducing as the number of completions goes up. So I’m not excessively concerned about this, but it was very interesting to see that there’s obviously something in the usability of the system, in how patients get on interacting with multiple questions on a screen versus a single question, that makes it a bit quicker for them to use.

It was interesting, when we were doing these analyses, that the shape of this learning process, or familiarity process, that we’re seeing, with time to completion getting much quicker, reminded me of a presentation our colleagues at Adelphi Values gave a few years ago, where they were exploring completion times for a daily diary. All of the graphs I’ve presented are for questionnaires completed every few months; Adelphi Values looked at completion time over a few days, and they saw a very similar learning effect. So I think it was quite fascinating seeing people get really familiar with the systems and finding it much quicker to complete the questionnaires, and I think there’s probably a lot of work that could be done there on the impact on data quality as people get faster and faster over time completing these questionnaires.

[10:42]

So we felt confident now that it’s not something inherent within the modality itself that’s causing people to answer more slowly on one modality versus another; it’s really about how the screens are laid out. And that was echoed in our compliance data. Again, this is data from a subset, about 200 of all the studies we’ve run. Our overall compliance is about 90%; this subset included some particularly challenging studies that we wanted to focus on. But basically, compliance on tablet across these studies was about 88%, with compliance on handheld devices being about 84%. So the compliances are pretty comparable, which echoes the idea that it’s not something inherent within the modalities that makes them harder for people to use; it’s more about how we’re laying things out on the modalities themselves. It’s worth noting we could probably expect handheld devices to show slightly lower compliance anyway, because typically you’re administering daily diaries on them: there are many more administration points and hence many more opportunities for patients to miss completions and be non-compliant with the daily diary. Whereas on the tablet we’re typically doing site-based questionnaires, and obviously when you have someone sitting there making sure you’re completing the questionnaire, you tend to be much more compliant. But this reassured us that there wasn’t necessarily anything inherent within these modalities that made them harder for people to use.
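
For what it’s worth, a compliance figure like this is often defined as completed administrations divided by expected administrations. Here is a sketch of that calculation per subject, then averaged per modality; the field names and numbers are assumptions for illustration, not the actual reporting-system schema.

```python
# Sketch: compliance = completed / expected administrations per subject,
# then averaged by modality. All values are illustrative.
import pandas as pd

records = pd.DataFrame({
    "subject_id": [1, 2, 3, 4],
    "modality":   ["handheld", "handheld", "tablet", "tablet"],
    "expected":   [30, 30, 4, 4],   # e.g. daily diary vs. site visits
    "completed":  [26, 24, 4, 3],
})

records["compliance"] = records["completed"] / records["expected"]
print(records.groupby("modality")["compliance"].mean().mul(100).round(1))
# handheld 83.3, tablet 87.5 (percent; invented numbers)
```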

The other thing we really wanted to look at was compliance across time. We saw that completion times across time were reducing, but we wondered whether there was any impact on compliance across time. This is, again, a subset of studies, and there are obviously some significant outliers here; those are just studies where the compliance definition was set up incorrectly. So the focus is really on this top band of blue blobs up here, which is compliance plotted against study length, running from 0 out to 2,000 days. This covers a large period of time; in fact, that red line is one year. And we can see compliance is pretty much stacked towards the top of the graph even after the one-year point, so it seems like compliance stays consistent even as you extend the length of the clinical trial. Which was very reassuring: just because a clinical trial runs for more than a year doesn’t mean it has an excessive impact on the compliance that we’re seeing within the system.

What would be really fascinating is being able to track compliance over time within a study. What I’ve shown is overall study compliance; what we really should be looking at is compliance as the study progresses, to see whether it drops off at all and what that might be tied to. Unfortunately I don’t have that data to present at the moment, but maybe for the next forum.
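
The within-study view being suggested could look something like the following: compliance computed per study week rather than as one overall figure, so a drop-off over time would become visible. Again, the data and field names are invented for illustration.

```python
# Sketch: weekly compliance within a single study, instead of one overall
# number. Declining weekly values would indicate a drop-off over time.
import pandas as pd

log = pd.DataFrame({
    "study_day": range(1, 29),   # four weeks of a daily diary
    "completed": [1]*14 + [1, 0, 1, 1, 0, 1, 1] + [1, 0, 0, 1, 1, 0, 1],
})
log["study_week"] = (log["study_day"] - 1) // 7 + 1

weekly = log.groupby("study_week")["completed"].mean().mul(100).round(1)
print(weekly)  # % of expected diaries completed, per week
```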

So what does any of this mean? So what? Well, basically the argument I’m making is that response time and compliance can act as very nice signals of patient burden. Patients taking a very long time to complete questionnaires could be considered a sign of burden, and arguably, worse compliance within a clinical trial could also be considered a sign of high burden.

And this is obviously important for numerous reasons. Not wanting to burden patients is the most fundamental one, but the regulators also flag this up. It’s in our favourite document in the health outcomes team, the FDA PRO guidance: “Undue physical, emotional, or cognitive strain on patients generally decreases the quality and completeness of PRO data.” So the regulators are raising the concern that if you unduly burden patients, you might see an impact on your data quality. In fact, they give a list of examples of what could constitute an undue burden, and the length of the questionnaire tops that list. So there’s obviously something here in regards to the impact that these burdensome things might have on the quality of our clinical trial, or the quality of our data, which is fundamental to what we’re trying to do.

[14:54]

But what exactly is patient burden? It’s interesting: we’ve just started some work with the C-Path ePRO Consortium—and Bill Byrom, who heads up the vendor representatives on that consortium, is here, so if you have any interest in C-Path, go and talk to Bill. They’ve recently started looking into this topic of patient burden, exactly what it is, and the interesting finding is that there’s no agreed-upon definition of patient burden within the clinical trial space. It seems to be tied to the concept of responder burden, which comes from the social sciences. That’s really about social science interviews and surveys, where they talk about the effort required on the part of the respondent to complete your interview; the sensitivity of the questions (conceptual sensitivity, as opposed to psychometric sensitivity), in regards to asking difficult questions of the patient; and the frequency of participating in the interviews.

These are obviously important concepts, and they do apply to the kinds of questionnaires we’re administering in our trials. But it feels like there’s a lot more that probably goes into this concept of burden within our clinical trials, and Paul has already touched on a lot of things that maybe we don’t even consider: disease severity; the prognosis of the patient, and hence their associated mental state, and any support systems they might have in place around that; the actual type of research they’re enrolled in and the procedures they’re being asked to do; the number of other studies they might be involved in (it’s entirely possible this won’t be the only study the participant is enrolled in); any compensation they might be receiving for taking part; the number of site visits, which we’ve already talked about; and the location of those sites. So it almost feels like things like how long it takes them to complete the questionnaire might fall very far down this list of things we should be focusing on when it comes to trying to mitigate and address this issue of patient burden.

All of that said, though, electronic still helps. Anything we can do to reduce the impact of burden on a patient in a clinical trial is a very positive thing. Any eCOA vendor you talk to will cite anecdotal evidence of patients much preferring electronic completion of questionnaires to paper, and stating a preference for electronic in clinical trials when given the option. But there aren’t many gold-standard studies out there looking at this particular issue. There is one paper from 2006 which reviewed the literature and found nine studies exploring preferences and completion times for electronic versus paper completion. They found that 59% of patients expressed a preference for electronic completion within clinical trials, as opposed to 19% expressing a preference for paper. And electronic was consistently seen to take less time than paper completion. So there does seem to be a trend towards patients actually preferring to use electronic data systems, and those systems taking less time compared to paper, which arguably is reason enough to use them.

So I ended up with more questions than I actually answered as I was putting together this presentation. But I think the takeaway was really that eCOA provides metadata that can give us a unique insight into the patient experience of the clinical trial. It can act as a signal that we might want to do something with. What exactly that insight is might not be as clear as we might think. Compliance feels like a very clean signal: if your patients aren’t compliant, then they’re not completing their questionnaires. But there’s a whole host of reasons that feed into why they might not be completing their questionnaires. So it’s not necessarily as simple as sending a reminder to the patient; you might want to look at the structure of how you’re administering those questionnaires, and you might want to look at their layout. We’ve seen that has an impact on how long it takes to complete them.

A proper definition of patient burden within clinical trials would probably make it easier for us to address some of these challenges. And it came to mind again that this seems quite similar to the concept of tolerability. We talk about safety and tolerability in clinical trials as different things: safety being the adverse events that you see, and tolerability being how much patients can put up with those inevitable adverse events. And that’s something that varies hugely on an individual-by-individual basis. I think the idea of burden is also going to vary significantly on an individual-by-individual basis within the clinical trial: how much you’re willing to put up with to take part in that clinical trial. There are so many things that feed into that, about your family life, about your disease, about other treatments you’ve taken beforehand that maybe haven’t worked. There’s such a host of things that feed into this concept of patient burden that I think it’s going to be difficult to come up with a very generalized definition, but it’s, I think, a worthy endeavour nonetheless.

Thank you very much.

[20:05]

MODERATOR

As usual, Paul gives us a presentation that provides insight into the real participants on the trials, the patients, and patient burden. Who knew that the concept of patient burden was so complicated? Trying to put some standards around what it represents is a task in its own right. And as both Pauls have articulated, it’s certainly something that is becoming increasingly important as we run these clinical trials. So Paul, thank you very much for a very insightful presentation.

Can I open it up to the floor, please? Any questions, any thoughts? When you ask a question, can I get you to stand up and say who you are, please, so that we can all hear and see you.

[Q&A section begins at 20:52]

AUDIENCE MEMBER

Yeah, sure. I’m Sam Blackman from Ono Pharma, a Japanese pharmaceutical company. Paul, on your slides you mentioned a lot about the slates versus the handhelds and how the handheld takes longer, and on one of the very last slides you mentioned that they’re actually both faster than paper. Do you have any metrics on paper? I’m just interested in how much quicker slates and handhelds are versus actual paper.

PAUL O’DONOHOE

Yeah, so there is a lack of data out there, though there are studies that have tried to quantify how long it takes to complete paper versions of questionnaires. I know for the SF-36 there’s a bunch of data out there on how long the paper versions take. And we obviously have our data around how long it takes people to complete the electronic versions on our systems, and we do consistently see that’s a little bit lower. So it does seem to take less time on the whole, based on the information we have. There haven’t necessarily been a large number of systematic direct comparisons of paper to electronic, and it can be a bit challenging to figure out exactly how they’re timing it, that start point and that stop point. We have specifically defined start and stop points in our system, but how that’s timed in the paper world differs, so making the comparison can be a bit of a challenge. There are some documents out there, and I can share some references with you. It does seem like, consistently, electronic takes less time, but there’s not necessarily a killer reference that proves that once and for all.

AUDIENCE MEMBER

Hello. I’m Elia Ohrenfeld from Oris Medical. If you are running a multinational clinical trial (and we spend a lot of effort creating validated translations), I was just thinking that this metadata may be able to tell you whether the different languages are working in the same way. Do you have any data on completion times for that?

PAUL O’DONOHOE

Yeah, so that’s a really interesting point. Unfortunately at this moment in time we don’t have any data on that, but you would in fact assume that you’ll see differences on a language-by-language basis, because some languages simply take longer to parse; that’s just the nature of the language. So you would expect to see those differences on a country-by-country basis. What that difference then means, I’m not sure what you can draw from it. I don’t think you want to get to a point where it takes the same amount of time to complete a Spanish version as a Japanese version: because of the nature of those languages, one is just going to take longer than the other, depending on how efficient the language is and the way it’s used. But I think that would be a fascinating analysis, just to see those differences across different countries. So yeah, it’s something we’ll try and take a look at within our data set. Thank you.

AUDIENCE MEMBER

It’s Bill Byrom from ICON. Paul, that was really great. I’d really encourage you to carry on with that analysis of metadata, because I think there are some really interesting questions in there that we’re just unable to answer at this point in time. We quite often get asked how many questionnaires we can use in a trial, or how long a diary can be for patients if they’re doing it every day. And I think in that data set that you’ve got, you can probably start to tease out answers to some of those questions. The EQ-5D is a simple one, it’s obviously five or six questions. But if you start to add onto that a daily diary, a couple of other disease-specific questionnaires, etc., and maybe you start to pull out studies which use multiple questionnaires, we might start to see some patterns. And that would be really very useful, I think.

[25:00]

AUDIENCE MEMBER

Hi, I’m Rachel Dublose from [company name unclear 25:06]. Just a quick one about your compliance dots, that nice graph. Can you let me know whether it was daily diaries, monthly reports by the sites, or completion by the patient at home daily, for all the dots?

PAUL O’DONOHOE

So they were a combination of different kinds of trials: some studies that were just site-based questionnaires, some that were a mixture of site- and home-based questionnaires. Those dots were the overall compliance for the entire study, so it’s a mixture of the two kinds of study. We do tend to see quite similar compliance for home-based and site-based questionnaires, but as I mentioned, it does tend to trend higher for site-based, because the patient’s being supervised and the nurse is often there to help talk the patient through and encourage them to complete the questionnaire. So there is a trend towards site-based being a little bit higher, but it’s not significantly so.

AUDIENCE MEMBER

Okay thank you. It was good to see it lasting after a year actually.

PAUL O’DONOHOE

I was really surprised by that, I have to say. Yeah, the fact that it can stay that consistently high over that period of time was nice.

MODERATOR

Any other thoughts for Paul?

Paul, thank you once again very much indeed.

[END AT 26:25]
