
The Importance of Patient Feedback in Clinical Trials

November 20, 2015

Full Transcript

[00:00]

So thanks Rauha. Rauha’s presentation set the scene really well for what I’m going to present in the next couple of slides.

So I’m going to work from the assumption that we’re doing the usability testing, not necessarily in the context of cognitive interviewing and usability testing for a PRO, but maybe for a PRO and one of our solutions. So not just something that we’re doing for Phase III studies but, as Rauha said, very early on when we’re developing something like a diabetes solution or a conmed solution, maybe even looking at mixed modes in a proactive way before we go to do that within a Phase III study. And I’m really just going to highlight this through some examples. I won’t say case studies, because there’s not enough time to go through them properly, but they’re nice working examples from the real world, from usability studies and from patient feedback that we’ve had, just to highlight the importance of patient feedback in this process.

So we’ve heard for the last two days that ultimately data comes from the patients. We need to be patient centric, we need to take patients into account when we do something, not just to design the protocol but right through to even the packaging and everything outside the realm of what we do in this room. We need to make it as easy as we can to get that data from the patients. We’re not the patients, at least not in this case, so we’re definitely not the experts, and we don’t always know best. That highlights the fact that we need to keep patients involved. Patient involvement is critical. I always go back to the examples of clinical outcome assessments because that’s what we all know as the default. If you generate a questionnaire from scratch, one of the first things you do is find out what affects a patient, what items you need to record, certain principles within that disease or that therapeutic area, and you do that by talking to patients. You talk to patient groups, you talk to clinicians. Sometimes their views differ, so what patients want and what clinicians want don’t always line up. So patient recruitment at the very early stage is absolutely critical.

And it’s not something that happens just once. It needs to be refined, it needs to be repeated. You get patients in, you test something, you take a solution, you move on. You either go back to that patient group of five, ten, maybe 20 patients, or, if the changes are severe enough, you go back and take another ten, get a fresh batch, and test them as if they were case zero again.

Rauha showed something similar to this slide a couple of minutes ago. And it’s all down to the fact that it’s iterative—lather, rinse, repeat. Do it a couple of times. One round of testing, some patient feedback, improve what you’ve done. That’s not necessarily the best solution yet. Those patients, it might have been a focus group, they might not have been on their best form that day, they might not have been the right patient population, recruitment might have been a bit off. Or they just didn’t get through everything because your first draft was so rough that by the time things were usable, you needed to do another couple of rounds. And this can go on two, three, four times. I mean for clinical outcome assessments, you might do one round, you might do two if it’s an unknown questionnaire or a difficult patient population, but typically one round is enough for that. When you’re developing something from scratch, though, you have no benchmark, so you need as much patient engagement as you can get.

So for this section, I’m just going to pick up on a few points and take real-world examples. As Rauha said, we can do focus groups and personas, creating virtual typical patients of the patient demographic, looking at the people that we’re targeting. We’ll conduct usability tests in their various forms and formats. And that’s all fantastic. If you’ve got time to do that, that’s great; if you’re not doing it within the constraints of a clinical trial, if you’ve got a year before you even start thinking about recruiting patients, do this stuff early, that’s brilliant. You get it done, you get the patient feedback. But what are they actually saying? What happens to this data when you’ve collected it? Are you going to do something about it? And does that feedback cause us to change the solutions that we’re developing? Because it’s not always the case that we can. I mean sometimes, even with clinical outcome assessments, we’ll do something, we’ll get patient feedback, and the client will say, oh look, timelines are too tight, first patient first visit is two weeks away. Sorry, let’s just do the cognitive interviewing and usability testing, go live, keep our fingers crossed, and just have that data at the end. People are nodding and laughing because it happens, right? We don’t always have time, and it’s a real shame.

[04:34]

So I’m going to take four particular examples of things that we’ve done within CRF Health, and there are people in the room who have probably been involved in some of the usability testing for PROs with some of these solutions, and you might recognize some of the quotes. I’ll start with a diabetes reference solution that we came up with. It’s basically an event-driven diary, which records diabetes symptoms, blood glucose, insulin intake, all of the things that we felt would be important to diabetes patients within a clinical trial, aside from the plethora of PROs that are out there. We then came up with a conmed solution, for tracking rescue meds or conmeds within a clinical trial. I’ll touch briefly on mixing modes, looking at tablets and slates and different versions and even patient preference within Phase II and Phase III studies. And I’ll talk a little bit about rating pain on body maps. For anyone who was in the two workshops yesterday and today, you’re probably fed up looking at body maps and pain, but you’ll appreciate how difficult it can be to get that kind of information from a patient on the screen out of context, and to highlight the fact that there might be more than one pain point, and things like that.

I’ll start with the diabetes solution. When I started at CRF Health, this was one of the first projects that I took on. And I thought, this is going to be great, I’ll make my mark. To start off, I’ll do online surveys, I’ll get focus groups. This was before I even knew focus groups were terrible. My brother-in-law is in marketing and he says ignore the focus groups. He’s probably right; you can argue it. We did patient interviews, we did usability tests. And then we analyzed the results of all that along with our previous diabetes experience.

And so the first thing we did was the online survey. We sent out a SurveyMonkey, we got involved with patient groups local to Hammersmith, people we could find online. We met up in a hotel and showed them the solution we’d come up with, what our idea of a diabetes reference solution should be. They initially liked it. They figured they had recorded most of this data themselves anyway. Some people did it in Excel. We had one guy who was about 85; he’d been recording his data for 40 years. He started off on paper, he moved into Excel. He used to carry around a list of all his medications. He was brilliant. This guy had so much data on hand when he showed up for this focus group, it was really, really excellent. We had another guy who used to scribble on the back of cigarette packets. He wouldn’t tell his wife he smoked, he’d hide it, and he had little bits of data captured all around. But they were all collecting something. Some people wouldn’t do it all the time; they would only do it when symptoms flared up, or they’d only do it coming up to a GP visit. So it was getting collected, but for various reasons people weren’t doing it in real time or in a consistent manner. They all said they wanted visual summaries of recent data, because you collect a lot of it. Paul is not in the room, but Paul could definitely speak to this better than I could. The sheer amount of data coming from diabetes studies is something that other patient groups don’t necessarily have to deal with, and it’s something that people who aren’t diabetic don’t necessarily think about either. When we started looking into this, we couldn’t get over the number of different steps for the different things you have to report. And it’s not always easy.

They liked the fact that we could show confirmation that something had been completed. And one of the teams in our workshop this morning did that: they had a visual representation to show tasks that need to be done, and then the icon changes to say it’s been done. That was something that came from patient feedback. We lacked that in the first round. I won’t say we got it drastically wrong, but we definitely weren’t right. Patients had a good feeling for it overall, but it was really the patient feedback that got us from what we had, with alarms and reminders pushing them to do certain tasks, to an event-driven solution, which basically allowed the patient to go in whenever they like and do things within their own life. They’re already carrying a lot; Paul mentioned his bag of diabetes paraphernalia. We had people sneaking off to the bathroom at work because they didn’t want people to know. So we wanted to have it so that they could fit it into their lives, take it out as a smartphone, pretend they’re doing something on Facebook when they’re actually entering data, so it’s less intrusive.

So we changed completely. We went from alarms and reminders and all the stuff we typically think about to this event-driven piece. And what they really seemed to like is the fact that we involved them. They were happy when they arrived in the room; they were delighted that somebody was looking at targeting data collection within diabetes. They didn’t really like the full solution, but what they gave us back helped us develop this, and it was those changes that really got us to where we are.
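
As a minimal sketch of the idea, assuming hypothetical type and method names rather than anything from our actual solution: the patient initiates every entry, nothing waits on an alarm, and recent data can be summarized back to them.

```typescript
// Event-driven diary sketch: the patient opens the diary and logs an
// event whenever it happens, rather than being prompted by scheduled
// alarms. All names here are illustrative.

type DiabetesEventType = "bloodGlucose" | "insulinDose" | "symptom" | "meal";

interface DiaryEvent {
  type: DiabetesEventType;
  recordedAt: Date;   // device timestamp when the patient made the entry
  value?: number;     // e.g. glucose reading or insulin units
  note?: string;      // optional free-text detail
}

class EventDrivenDiary {
  private events: DiaryEvent[] = [];

  // Patient-initiated: can be called at any time, no reminder required.
  logEvent(type: DiabetesEventType, value?: number, note?: string): DiaryEvent {
    const event: DiaryEvent = { type, recordedAt: new Date(), value, note };
    this.events.push(event);
    return event;
  }

  // Visual summaries of recent data, as the focus-group patients requested.
  recentEvents(days: number): DiaryEvent[] {
    const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
    return this.events.filter(e => e.recordedAt.getTime() >= cutoff);
  }
}
```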

We took a similar approach with the conmeds, but for this obviously we had a few more needs: sponsor needs, regulatory needs. We had feedback quite often from sponsors and clinicians that there’s a need for electronic capture of conmeds. I won’t put Norah on the spot, but if anyone wants to see the conmed demo, Norah in the blue dress in the corner will happily run you through it. It’s a really excellent system, and it’s something that came up a lot again in the workshop this morning. Entering medication is not an easy thing to do. It can mean a lot of different things to different people, and there are different medications, from prescribed meds to conmeds to maybe things that are excluded from the study, that you need to consider. So with all of those ideas in mind, and some of the current problems we looked to alleviate, we came up with this conmed solution.

[10:10]

And we see this diagram again: we did usability testing, but we did it in three rounds. Each round we would take the feedback, put it into the design again, improve things, and go back to the patients. And we kept doing that until we got a better solution, something that was more intuitive.

And this came up a lot yesterday, but people always ask about the elderly, at every bid defense I go to. What about elderly people, they can’t use eDiaries? Of course they can, we know they can. So, to preempt that question, we purposely sought out older people. It’s hard to do, but we found people who don’t have smartphone experience. And we saw yesterday, I think Chloe, you mentioned how difficult it can be to find people who don’t have smartphone experience at this stage. So we targeted these people and ran through the rounds of iterative testing. And here are some of the findings we had from that. Again, this came up in the workshop today and yesterday.

When you present somebody with a list of conmeds, that’s a really exhaustive list. When somebody said conmed list to me today, they did this—conmed list. And it made sense, because it is a huge list. We showed people two variations within the user testing. Most people in this room are familiar with this kind of layout, the search bar, the Google tab, and people didn’t really like that. People preferred to scroll up and down through the list. That’s something we didn’t necessarily think would be true, and I can see people in the room looking, hmm, that can’t be right. People said the same thing in the workshop: why would you want to scroll, there are too many drugs. Well, yes there are, but we can put the drugs that are most commonly used at the front. We can program it so that your current list of meds, what you’re prescribed, is on top, and the other stuff is below and you can scroll through. Also, these drugs are really hard to spell. So if you tried to type something in here and got it wrong, you’re going to be taken all sorts of places before you find the drug you’re actually looking for. So, oddly enough, people actually did prefer the scrolling list.
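
A minimal sketch of that ordering, with hypothetical names and not the production implementation: prescribed meds float to the top, the most commonly used drugs come next, and the long tail stays alphabetical so it’s still scannable.

```typescript
// Ordering a scrollable conmed list so patients rarely need to search.

interface Drug {
  name: string;
  usageRank: number; // lower = more commonly used in this population
}

function orderConmedList(allDrugs: Drug[], prescribed: Set<string>): Drug[] {
  return [...allDrugs].sort((a, b) => {
    // 1. The patient's current prescriptions go to the top of the list.
    const aPrescribed = prescribed.has(a.name) ? 0 : 1;
    const bPrescribed = prescribed.has(b.name) ? 0 : 1;
    if (aPrescribed !== bPrescribed) return aPrescribed - bPrescribed;
    // 2. Then the most commonly used drugs.
    if (a.usageRank !== b.usageRank) return a.usageRank - b.usageRank;
    // 3. Then alphabetical, so scrolling the tail is predictable.
    return a.name.localeCompare(b.name);
  });
}
```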

Lots of information on one page, but without it being too cluttered. Again, in a patient population that’s 60 to 82, you would expect small fonts might be an issue, and talking about units, not everybody will get milligrams and micrograms and mls. But when we went through the patient testing, we found that combining related information on a page made it easier for people to use. I know I keep talking about the workshop, but imagine having this as three separate screens: one just for adding units, another page with just the strength, and then another page that says I don’t know what I’m taking. If you put all of that on one page it gets a little bit easier, it’s more intuitive, and you’ve cut down the steps. So you make it that little bit easier, and it ties back to the question of whether older people will use this diary. Of course they will, if it’s designed well and it’s intuitive and we’ve taken patient feedback to build it. Of course they’re going to have no problems using it. It’s a very simple idea, but we don’t always keep it in mind.
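
A minimal sketch of the combined-screen idea, with hypothetical field names: the strength, the unit, and the “I don’t know what I’m taking” option all live in one entry step with one validation pass.

```typescript
// Collapsing three separate screens (strength, units, "not sure")
// into one dose-entry step, per the usability finding.

type DoseUnit = "mg" | "mcg" | "ml";

interface DoseEntry {
  strength?: number;  // numeric strength, entered on the same screen
  unit?: DoseUnit;    // unit picker sits beside the strength field
  unknown: boolean;   // "I don't know what I'm taking" lives here too
}

// One validation pass instead of three screen-by-screen checks.
function isDoseEntryComplete(entry: DoseEntry): boolean {
  return entry.unknown || (entry.strength !== undefined && entry.unit !== undefined);
}
```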

The third one was actually thanking people. And again this happened in the workshops: I think every team, bar one, put in a summary and a thank-you page. It’s amazing. Thank you goes a long way. Kids, adults, you name it: you say thanks to somebody and it can completely change the mood in a room, or the relationship between people. So they were really keen on the fact that when you modify a drug or finish a task, you get taken back to a home page which gives you a confirmation, so you know you’ve done something, you’ve navigated back, you’re not just left in limbo halfway through a process. And you also get a thank you. It’s a lot more logical. It lets you know that you’ve finished a task. You can go back and do another task if you choose, but you’re also getting a digital pat on the back to let you know that you’ve entered that piece of information for now, and you can go back at any time to review, exit the diary, or repeat that step.

Moving on now to mixing modes. This is something that comes up quite a lot. If it’s not can we have a paper backup, it’s what if we want to use two devices within a trial, or what if we have a provisioned device that goes out of date and we want to send something else in. What if we have remote sites who don’t have enough people? Do we send tablets? Should we send handhelds to certain people’s homes because they can’t make the trip? We all know from the cognitive interviews and usability testing, despite the fact that regulators might not look at it that way, that there’s very little if any difference between those modes. And again that comes down to things being designed properly. So the cognitive interviewing and usability testing has been done and redone and overdone, and hopefully someday we’ll get away from that.

[14:52]

In this study we conducted cognitive interviews and usability testing, because that’s what you do. And this is an amalgamation of a few different studies where we’ve looked at mixing modes. So apart from whether the modes are equivalent, from the cognitive interviews and usability testing, we also had a specific question in mind: do patients have a preference, and if they do, does it really matter? Will a patient choose not to use a handheld or web or paper because they have a tablet, or vice versa?

So here are some of the quotes that came back from interviewees. “I don’t really text, I’m not familiar enough, but you know I could certainly use a handheld device, it wouldn’t really trouble me.” This isn’t hard data, right, it’s just feedback from patients. But it’s powerful stuff, because it shows that if you get feedback from people, you’ve answered your question right there; you’ve asked about patient preference. People come up with things you wouldn’t necessarily think to ask. If you give them the chance to give feedback, they’ll give it, and you might get answers to questions you never even thought about. Somebody said, “This beats paper, because it’s better for the environment.” That’s an argument in itself. And another: “I would choose that over paper if I had a choice, I would feel good about it.” Feeling good about something that’s well designed will encourage people to use it more: if it’s attractively designed, if they get a good feel for it, if they trust it. Paul mentioned designing things to look like they’re medically designed, blues and whites and lots of clinical colours. That helps people get the sense of, yes, this was built by a medical professional. Really it’s just a clever usability team that knows how to make things look medical.

When it came to preference, people were saying things like, “If I had a choice I’d pick a tablet, but my answers wouldn’t change based on the mode of administration.” And they liked the fact that the electronic mode made you look at one picture or one image or one question at a time, a body map or a VAS scale or any of the questions within a questionnaire. It made you focus on that piece alone and not start looking at the other six questions on the page. And you just select from the answers that are there; you don’t have to work through a page or think about skip logic.

But what I really liked about this was the fact that people—and this came up quite a lot, I probably should have had it in a larger bubble, flashing and jumping out, because it is important—said they wouldn’t answer any differently. We know from the usability testing and cognitive interviewing that the mode doesn’t matter, but now we’re getting proper patient feedback to say, I really wouldn’t answer things any differently. So they’re telling you they’re not going to answer differently, and you’ve also got data from the cognitive interviews to say they’re not going to answer differently. There are so many statistical equivalence papers; I mean the two that we mentioned yesterday, from Paul and the team at ICON, and the Gwaltney paper before that. The issue of equivalence shouldn’t be an issue. We don’t think about paper being so great, and we never think about US letter paper and A4, right? They are two different sizes, but we don’t think about equivalence there. So it’s unusual.

And body maps. This came up so much today and yesterday, and it’s something I’m looking at within clinical trials as a whole at the moment. Quite often we’re required to use body maps, or images with selectable areas, to highlight things within trials: typically joint pain and rating pain, mobility issues, joint assessments. And there are a lot of challenges with that. It’s not always consistent: one author’s version of a joint pain body map versus maybe a sponsor’s version of what they perceive a joint pain body map should look like. And there’s that inconsistency between questionnaires within a designed package. You might have three questionnaires from three different authors, all with body maps, but none of which look the same. Some are stick men, some are really weird-looking outlines. None of them are really anatomically correct. Some of them are more scary and laughable than professionally done. They’re not detailed enough, or they’re too detailed. And some of the images that exist are just too difficult to use electronically. Sometimes you just get a paper copy of a questionnaire, not even a PDF, and you’ve got to scan that in and somehow use that image, because people might get hung up on copyright, or an author needing their exact image to be used because it’s their image. But it doesn’t always translate properly into ePRO. And there are all the other considerations too: who owns them, who pays for them, where did they come from originally, were they originally copyrighted? Do we actually have an electronic file that’s usable between devices, between modes? Can we scale it up, can we tailor it to fit patients? One of the questions this morning was, should the body map be male or female? Why not both? Why not have somebody select that when they move into the questionnaire at the start? Have the clinician record that it’s a female patient, so let’s give them a female body map. And you can do that for lots of different things. You can make it as customizable as you need it to be before the study starts.
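
A minimal sketch of that up-front selection, assuming hypothetical asset names and one vector file per variant:

```typescript
// Picking a body map variant before the questionnaire starts: the
// clinician records sex (and optionally build), and the diary loads
// the matching vector asset. All names are illustrative.

type Sex = "male" | "female";
type Build = "slim" | "average" | "broad";

interface BodyMapConfig {
  sex: Sex;
  build: Build;
}

// With vector (SVG) assets, one file per variant scales cleanly across
// handheld, tablet, and web modes, avoiding the scan-of-a-printout problem.
function bodyMapAssetFor(config: BodyMapConfig): string {
  return `bodymap_${config.sex}_${config.build}.svg`; // e.g. bodymap_female_slim.svg
}
```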

[19:47]

These are some examples of body maps that we’ve seen over the years. They’re not really that great, and you can see that probably no two are alike. They’re not particularly consistent, and there’s a chance you might see some of these within the same trial. This one was obviously rendered in 3D somewhere along the way, but it got printed out on paper, scanned in, put into a JPG and put into an eDiary. So you can’t scroll it and twirl it and do all the things that whoever designed it put a lot of time into. This kind of thing, a classic from the VPI, is not particularly intuitive. We’ve seen patient feedback on this where they talk about the paper and the handheld and they say, yeah, easy, I can report data on both modes there, paper and electronic, doesn’t matter to me. But then when you probe a little bit further, they say, well, you know what, on the electronic version I didn’t actually realize I could rate more than one area for pain. If you ask why, they say, well, it just says mark with an X. I said, yeah. They said, well, with a pen I can just shade, just keep drawing, keep going till I get to the fingertips. You can’t really do that in that situation. Or at least you can, but you need to flag it up, you need to tell them earlier. And one of the things we did in that particular study was to add in training, and that training highlighted the fact that for this body map you can do X, Y, Z: you can rate the pain, you can say which area is most severe, you can mark more than one location. So again, patient feedback was what brought us back to add training for that. We implemented it from the usability study into the main study.

Some of the body maps we see, I mean, this is a very complicated system. You’ve got to talk about left and right, you’ve got to talk about the head, you’ve got to talk about where it’s all moving, swelling, different pains, and then you can move within the body map into individual joints and places. That’s not a foot, all right. It’s meant to be. We’ve started to call it the elephant foot, but it looks nothing like a human foot. And the hand version of this is very, very similar. You can see that’s not the kind of thing we should be presenting to patients. And it’s not something that we as CRF would design and say, here, you have this as our solution. This is stuff that comes from sponsors, maybe, or from instrument authors, because they had somebody who could use Microsoft Paint. So I think if we get patient feedback on this kind of thing and use all of these findings, you can come up with something that’s a whole lot better, that’s customizable. With vector graphics you can change the shape, the size, you can change gender. I mean, there are all sorts of arguments about presenting somebody tall and slim on a body map when you just happen not to be tall or slim, or a certain shape of male body when you’re female. It’s not really letting you see the body map as you; it’s always going to be this alien concept. Hopefully nobody in the room looks anything like that. But if it looked more like you, you could associate with it better. So there are lots of reasons why something like this would actually benefit from patient testing.

And this is just some of the feedback we got from these studies. So the VPI was a classic: select the area or put an X. This wasn’t instinctively clear to people on the electronic mode, but on paper they intuitively just shaded the areas they had pain in. Maybe because people are just used to doing that with a pen; they can move around and scribble notes wherever they want. But it wasn’t so clear electronically. Patients didn’t like the fact that they weren’t drawing an X. They can tap and have an X appear, but that again comes down to the design, the original instrument author’s input, and the fact that it was designed for paper. So it would be nice to have a body map that can transcend all of these different instruments and solutions, something that stands in place of the body map each instrument has, and obviously get the author’s approval for it to be used in place of what they have. And some authors we’ve seen are actually pretty open to that, because they realize that paper isn’t the same as ePRO, certainly in that regard. No way to deselect an area was something that came up, and there was confusion about it. One patient was actually demonstrating to the person in the room and said, if I touch it I get an X, but I can’t unselect, see. And they did that and it unselected. They hadn’t realized that double-tapping it would unselect it. So that was something we introduced into training: we added an exercise to say if you double tap it, it will unselect.
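
A minimal sketch of that behaviour, assuming hypothetical region identifiers and an assumed 300 ms double-tap window: a tap marks an area with an X, and a quick second tap on a marked area removes it.

```typescript
// Tap-to-mark with double-tap-to-unselect, the interaction patients
// discovered by accident and that training later had to explain.

class BodyMapSelection {
  private marked = new Set<string>();
  private lastTap: { regionId: string; at: number } | null = null;

  onTap(regionId: string, now: number = Date.now()): void {
    const isDoubleTap =
      this.lastTap !== null &&
      this.lastTap.regionId === regionId &&
      now - this.lastTap.at < 300; // assumed double-tap window

    if (isDoubleTap && this.marked.has(regionId)) {
      this.marked.delete(regionId); // double tap: remove the X
    } else if (!this.marked.has(regionId)) {
      this.marked.add(regionId);    // first tap: place an X
    }
    this.lastTap = { regionId, at: now };
  }

  // More than one painful area can be marked, which "mark with an X"
  // on paper didn't make obvious electronically.
  selectedRegions(): string[] {
    return [...this.marked];
  }
}
```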

So it’s again just highlighting that it’s critical to have patient feedback. If you can get input from patients early on, you can make all these changes before you go live. And it all goes back, exactly as Rauha said, to time. It’s not always something we have, it’s not always on our side. But if you can spend a little time now, think of the time, and potentially the money and effort, that it’s going to save you down the line.

So just to summarize, this is all stuff we’ve heard for the last two days. The patient perspective is important. Patients should be involved wherever possible, and it doesn’t have to happen just once; it should be iterative, rounds and rounds of testing. What we deem important, and diabetes was certainly a classic example for us, isn’t always relevant to the patient. And we see that also with clinicians and sponsors and even instrument authors. We might all want certain things from a clinical trial; patients might not always see those as important. Ultimately, I guess this sums up the last two days: intuitive patient-centred design focusing on ease of use, created with patient feedback, is what should take precedence and be the priority if you’re thinking about setting something up.

And before I go, a shameless plug for our new website. If you’ve not been to it at all, the resource hub there is fantastic. I’m not just saying that because some of my webinars are on it; Chloe’s are there too, and there are lots of people in the room who are involved in it. And it’s not slanted towards CRF; it’s geared towards sharing scientific experience and seminars like this. So if you get a chance, if you’re on the train tonight and you want something to read, go there, it’s really, really useful.

And thank you.

[END AT 25:51]
