I’ll try not to keep you guys from lunch, but I just wanted to present some work from the C-Path Consortium that involved a number of people in the room, including Ari and Jason. I’ve jokingly talked about the fact that generic questionnaires can generate a lot of rage, or passion should I say, amongst neuroscientists and scientists in general. This was another topic where I found myself quite surprised by how passionate people got. Coming from the bias of my more technological background, people saw no problem whatsoever with enforcing compliance, which I’ll define in a bit more detail; why would you worry about ensuring patients answer the question before they move on? But when you start talking to clinicians and PRO scientists, you get a quite passionate alternative view: you should not in any way be enforcing compliance. The paper questionnaire doesn’t enforce compliance, therefore you should have something similar in the electronic versions. So it was quite an interesting revelation to see this dichotomy. This presentation doesn’t necessarily provide any answers; it provides considerations and things you need to bear in mind when you are thinking about this particular topic.
So for anyone who is not familiar with the C-Path Consortium (I’m sure the vast majority of people here are), it’s basically a group of stakeholders, pharmaceutical companies and the FDA among them, who are working to develop qualified questionnaires for various therapeutic areas. Within the PRO Consortium there also sits the ePRO Consortium, which tends to involve the main ePRO vendors but also has links to the PRO Consortium and the pharmaceutical groups. And that group is really trying, amongst other things, to generate some best practices around eCOA and how best to implement it. What I specifically wanted to talk about today was this last one, which I mentioned earlier: opting out, and enforcing completion of questions on patients.
I wanted to set the scene with four statements that I hope no one will find particularly controversial. The first is that complete and accurate data, or to be more precise, data that is as complete as possible and accurate, is really the cornerstone of any clinical trial and of deciding whether the treatment is working. Second, paper has recognized issues with missing and inaccurate data; I don’t think my bias, coming from an ePRO company, renders that any less true. Third, electronic solutions are increasingly popular data capture tools. That can’t be denied, and the stats that Michelle shared with us on the uptick in electronic licensing of the SF-36 back that up. And fourth, these solutions provide study teams with powerful new ways of collecting trial data, and patient-reported outcomes specifically.
I think beyond the various positive aspects of electronic data capture that we talk about, better data quality, reminders, lower patient burden, all that good stuff, what’s particularly pertinent to this presentation is that we now have the possibility of preventing subjects from moving on to the next question until they’ve actually provided an answer to the question they’re looking at. And at first glance, this seems to suggest that we now have the possibility of a complete PRO data set, which is a dream for any study team. But, and there’s always a but, what if participants are confronted with inapplicable questions that they simply cannot answer? What if a participant is being asked how work is being impacted when in fact they’re unemployed? What if a participant is being asked about a very sensitive topic that makes them uncomfortable, that they don’t want to provide you feedback on, questions around sexual health for example? You might think the patient should know what they’re getting themselves in for in a clinical trial, but in a study looking at depression, say, you might not expect there to be questions about sexual health, and there could well be. So patients might be facing unexpected sensitive questions that they’re just not comfortable answering.
So what does a participant do in these scenarios when you’re enforcing completion, when you’re making them complete the question? They might give you a random answer just to move forward, just because they want to get on to the next question. They could be left upset by the question they’re being asked, particularly if they’re being forced to answer it, and we want to limit burden as much as possible on the patients taking part in the study. And in a worst-case scenario they could refuse to finish answering the questions, or they might even drop out of the study altogether, which is obviously something we really don’t want to happen; we fight to get these patients into the study in the first place.
And kind of weirdly, this is almost, depending on how you program the system, this is almost an unexpected limitation of the electronic data capture because at least with paper, it’s very clear the patient skipped this question, the patient didn’t provide you an answer to this question, maybe they even wrote a note in the margins of the paper saying why they didn’t want to answer the question. It’s very very clear this participant did not answer this question, whereas with the electronic system where you’re enforcing completion, suddenly you’re not going to have any insight into the fact that well yeah this patient provided a random answer. You’re not going to be able to pick that out of your data set. So suddenly your lovely complete data set is looking not quite as good as you might think it is.
So in an attempt to get our heads around this issue, a writing group was formed from the ePRO Consortium, as I said, and a number of people in the room were involved in that. We set about defining the three different approaches we felt there were to enforcing completion when using an electronic system in a clinical trial. It spans two extremes. At one extreme, requiring subjects to complete all items on every instrument in the study. Then there’s a middle ground, where you require subjects to complete the items that are particularly important, whether they support primary or secondary endpoints or are something you’re particularly interested in exploring, while allowing them to opt out of answering other questions, particularly sensitive items. And at the other extreme, allowing patients to skip every question they’re presented with in the study.
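To make the three approaches concrete, here is a minimal sketch in Python. The names (`CompletionPolicy`, `item_is_skippable`, `supports_endpoint`) are purely illustrative, not from the white paper or any real eCOA system; the logic just encodes what each policy implies for a single item.

```python
from enum import Enum

class CompletionPolicy(Enum):
    """The three approaches to enforcing completion (labels are illustrative)."""
    REQUIRE_ALL = "require_all"          # subjects must answer every item
    REQUIRE_KEY_ITEMS = "require_key"    # only endpoint-supporting items are mandatory
    ALLOW_SKIP_ALL = "allow_skip_all"    # any item may be skipped

def item_is_skippable(policy: CompletionPolicy, supports_endpoint: bool) -> bool:
    """Return True if the subject may opt out of answering this item."""
    if policy is CompletionPolicy.REQUIRE_ALL:
        return False
    if policy is CompletionPolicy.REQUIRE_KEY_ITEMS:
        # Middle ground: only items that do not support an endpoint are skippable.
        return not supports_endpoint
    return True  # ALLOW_SKIP_ALL
```

Under the middle-ground policy, whether an item is skippable depends entirely on whether it supports a primary or secondary endpoint, which is why the endpoint strategy has to be settled first.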
Each of these options has its own pros and cons, and I’d be interested to hear from people in the room which one they’re immediately attracted to. Some of the key issues to consider when deciding which approach to take are really around whether the questionnaire is being used to support your primary and secondary endpoints, as Ari and Emuella already discussed, around the importance of having a strategy and an understanding of exactly what data you’re trying to capture in the first place. You also need a keen understanding of which of the questions you’re presenting to patients might be unanswerable or particularly sensitive, a really good understanding of what you’re actually asking patients to tell you. It can be very easy to throw questionnaires into a study in a desperate attempt to get good signals from your treatment, but you really need to understand what you’re asking patients to provide. And you need to take cultural issues into consideration as well. Just because we might feel comfortable answering questions about how much we earn, or about sexual health, does not mean that will apply across all other cultures and countries, so you need to be quite aware of country-specific sensitivities in what you’re asking participants.
I think kind of the key message really and something that overcomes a lot of these issues is just having careful consideration for the questionnaires that you’re including in the study. Well-designed questionnaires should—should—mitigate a lot of these challenges. They shouldn’t be asking questions that patients might potentially not be able to answer. They should be providing a way for patients to progress through a questionnaire even if they’re not able to provide an answer for a specific question.
So some of the fairly generalized recommendations that came out of the paper: obviously, carefully weigh the importance of the data in relation to supporting your endpoints against any potential burden you might be placing on the participant. Again, it always comes down to burden; it’s a huge ask, what we’re getting participants to do in taking part in clinical trials, and adding unnecessary additional burden on them is really unacceptable. Understanding what the important data is, versus any potential issues within the questions you are asking, will help you develop the statistical plan for dealing with possible missing or blank data. But I think one of the key points, from an electronic data capture point of view, is that no matter what approach you take, anywhere you allow patients to opt out it has to be an active opt-out: you have to get patients to actively say, I am skipping this question. Because otherwise, you don’t know whether they’ve just missed the question by chance. And it’s very different analyzing data that is potentially missing at random versus data where a patient has actually said, I’m not going to answer this question, so that you have a record in your database saying, okay, we don’t have an answer for this, but we have a data point saying the patient actively chose to skip that question. I think that’s one of the key takeaway messages.
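The active opt-out point can be sketched as a data model: a skip is stored as an explicit event rather than as an absent value, so analysis can separate a deliberate opt-out from data that is simply missing. This is a hypothetical illustration assuming a one-row-per-presented-item record; the field names are not from any real eCOA schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ItemRecord:
    """One row per presented item (field names are illustrative)."""
    item_id: str
    response: Optional[int]          # the answer chosen, if any
    actively_skipped: bool = False   # True only if the subject confirmed a skip

def classify(record: ItemRecord) -> str:
    """Distinguish a deliberate opt-out from data that is simply missing."""
    if record.response is not None:
        return "answered"
    return "actively_skipped" if record.actively_skipped else "missing_unknown_reason"
```

With this shape, a blank response without a confirmed skip stands out as the anomaly it is, while a confirmed skip is a legitimate data point the statistical plan can account for.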
So just to summarize, the consortium has worked to develop best practices in the field, including obviously this issue of whether you require patients to answer questions before they’re allowed to move on. I think it’s important to remember that requiring completion can have unintended negative side effects. In reality this might be a very small concern on the scale of all the things we’re worrying about in a study, but it’s still something to bear in mind. As I said, we want to keep patient burden as low as possible. Key point: any kind of opt-out has to be an active opt-out; the patient has to be saying, I’m choosing to skip this question. And at the end of the day it’s really important to balance what’s important for the data you want against what’s best for the patient.
And you can find that guidance on the C-Path website. So I’d be very interested to hear comments from people in the room. There are two questions I was particularly interested in. First: is there anyone in the room who thinks we should never, ever enforce completion, that we should always be allowing patients the chance to opt out of answering questions? Possibly? Okay, so there are a few people who think so.
AUDIENCE COMMENT: Thank you very much Paul. I’m from AstraZeneca. I’ll give an example just to illustrate what you said. When I was doing my PhD at Penn State, an email came from a fellow PhD student who was doing an online survey. So I said, I’m doing a PhD, why not just complete this survey to help a colleague. I went and read the informed consent, which said that you can skip any question if you feel uncomfortable, which is an informed consent issue. And I started the survey, and going along I realized that some questions were not applicable to me. And I didn’t feel comfortable answering some of them, because they were about depression, anxiety, and all that. So I tried to skip those questions and there was no way to; initially I just randomly picked a couple of answers and moved forward, as you pointed out, and then realized I could not keep doing that. And I got very upset, and did the last thing that you mentioned, because I thought it was misleading. You tell somebody through informed consent that if there is a problem while continuing the survey you can leave, and yet you didn’t provide room for that. So that is the perspective we should look at this from. In clinical trials, we tell patients that if at any time you don’t feel comfortable with what you are going through, you can opt out of the study. So I don’t see why we should set up a system where that principle is violated. I think it’s an informed consent issue; that’s how we should look at it. And based on that, I think the best approach is to ask the patients to actively opt out. On those screens we could include phrases that convey the importance of what we are trying to do, and if they still insist that this is something they cannot answer, then they can skip it. But I don’t want us to leave this place losing sight of the fact that there is an informed consent issue that is applicable to this. I don’t know how you want to respond.
No, I 100% agree; I think that’s a really key point, that the informed consent should be where you lay out any of these things that might be potentially concerning for patients, and flag up the point that they can drop out at any time. I guess there’s a feeling of balancing making it clear that the participant can drop out of the entire study at any point if they so choose, versus almost encouraging the patient to skip questions, and I don’t know if that’s a valid concern, to be honest. But I totally take your point that the informed consent has to be very clear about that, and in particular what’s in the informed consent has to reflect what happens in the study. You can’t tell a patient that they can skip any question and then not allow them to. I think that’s very important.
AUDIENCE COMMENT: I agree totally with that comment, and it might actually undermine our ability to get good data through ePRO as well. The other point is about training: contrary to what Jason said, a lot of companies do have a PRO scientist do the investigator meeting training. And at those trainings you do say that the data will be held confidential, don’t worry about this. ePRO actually enables patients to report data confidentially. So they should have some assurance, through the training as well as the mode you’re collecting it in, that there’s not going to be an issue. And if it’s a primary or key secondary endpoint, why would the patient even be in the study to begin with if the endpoints focus on those sorts of sensitive areas, sexual health or whatever? So I would hope that we don’t go this way in the future, actually.
Yeah, and I mean I think in reality it’s probably not—at least in the studies we’ve been involved in—it’s not been an issue, to be honest. I think it’s something we need to at least have discussed and thought about.
AUDIENCE COMMENT: So just to add some perspective, I think this is following on to Sue’s point. When we were having the dialogue about writing this white paper, one of the issues that came up is sometimes you get into situations where the IRB won’t let you force completion. So for instance, in Israel, you have to allow the option to skip. Some academic IRBs will force you to—we had this experience at the University of Arizona. So in those situations you can’t go down the route of enforcing everyone to complete. The other issue is when you’re mixing—not that I would suggest you do this—but if you are going to mix modes and use say paper as a backup or allow some subjects to use paper, again to the issue of consent, because you can skip those items on paper, you then need to allow for the option for them to skip them on the electronic mode as well. So there are these nuances that you run into, and I think as Paul is saying this is an unintended limitation, or something that you wouldn’t expect. I think the practicality of its impact is probably pretty low, because people who agree to be involved in studies want to provide their data. We just need to be careful about what we tell them up front that they’re agreeing to do and then how do we implement the system, and I think Paul had a point on one of the slides that well-designed instruments should, for the most part, avoid these issues because as part of good instrument development we try to avoid things that might enrage somebody or might be sensitive or might not be applicable.
AUDIENCE COMMENT: Keith Wenzel from PAREXEL Informatics. I’ve been involved in 50+ clinical trials. And it is absolutely true that occasionally IRBs will insert themselves, or maybe an instrument author will insert him- or herself. But the reality is that the subjects within the trials are just happy to provide the data. The number of times I’ve heard a concern about this, I can’t even remember one, in 10+ years of experience. So there’s also the patient perspective, which is incredibly important: they’re pretty happy to provide this information without having an opt-out.
One hundred percent agree. And kind of as I alluded to at the start of the presentation, very often the quite extreme passionate reactions I hear about this issue is definitely not coming from patients. And as everyone is saying, I think it is on the scale of things a pretty minor point, but it’s something that comes up again and again with—at least I find in my role—it comes up again and again with sponsors asking, you know, what should we be doing or we won’t allow participants to skip or we want to enforce completion on this particular issue.
But you both touch on an interesting—which was going to be my second question, which is have participants—
AUDIENCE COMMENT: Sorry I just wanted to add. Shimon Rura from PatientsLikeMe. So you know, we deal in a world where any patient interacting with us is doing it out of their own volition, they’re not necessarily engaged in a specific trial or getting some payment at the end if they complete every step. And I’ll say that I think it’s worth thinking about the fact that, even when a patient wants to contribute to your study, is engaged with your survey, you might have a question that they don’t know how to answer or that they’re not comfortable answering, and the far easier alternative for them if they don’t want to just, you know, flip the table over and leave, is to just fill in some bogus answer or you know something that’s not really applicable. So for that reason I think it’s actually very important to think about offering an obvious way out, because it will increase your sensitivity to when your data might be getting skewed by a problem in the patient’s experience of answering it.
Yeah, great point. The second question I wanted to ask the room, which has already been touched on, is whether people have encountered ethics committees or IRBs who have actually raised this as an issue. Obviously Jason and Keith have. It’s not something we’ve encountered before, IRBs or ethics committees asking, are you allowing patients to skip questions, potentially because they’re not actually thinking of it as an issue; the same potentially with instrument authors. But I’m just wondering, has anyone in the room run into the experience of submitting to an ethics board or an IRB who has specifically asked about allowing patients to skip with electronic systems?
AUDIENCE COMMENT: [inaudible statement] And so very different context. So you can maybe understand why they— I don’t know why they caught on to it to be honest, but you can understand that since the subject can skip on paper they wanted to allow them the same option. But we’ve recently heard—and this is over the past year—that—and I gave this example—Israel will not allow you to force all of the subjects to complete the questions. So you know, I don’t know how these issues get flagged up necessarily to an IRB, it seems like you’re creating a headache for yourself if you are doing that. That being said, I agree with Shimon’s point that on paper we don’t know why somebody’s skipping, maybe they flipped over the page and didn’t realize there were items on the back of the page and they just moved on. But at least if you have them confirm their intention to skip, as Paul was saying, you have a record in the data set, and it can flag issues with an item, particularly if you’re seeing a lot of folks not answering that. So I’m kind of on the middle ground, I think, you know, you should allow them the option but confirm that their intent is truly to skip the item. And then in the consent forms or in the instructions, you reinforce the need for them to complete all of the items, and the value that they’re providing to the study. So I think there are ways to encourage them to complete the items without encouraging them to skip at the same time.
I think the argument, and I know, Jason, that you’re not making this argument, that they’re allowed to skip on paper so we should allow them to skip electronically, I just hate that as an argument, because it amounts to saying there’s this massive weakness with paper, so we should reflect it in this quite superior form of data capture we have over here, just to make things fair or something. So I think that’s a weak argument. There are other, better reasons, which have been touched on here, why we might consider allowing skipping.
AUDIENCE COMMENT: I would like to see a study done to see if it actually matters to the outcome, if you do allow the opt-out in a sensitive area. And actually ask the patients whether it mattered as well.
I would guess most patients wouldn’t even notice, because as we’ve said, they’re just going to be answering the questions and won’t even realize they aren’t able to skip any. Again, small issue but big discussion.
AUDIENCE COMMENT: Claudia McCormick from Novartis. I’ve seen IRBs ask, with respect to paediatric questionnaires that are put on a device, as well as the parent or caregiver version, whether the parents or the patients actually have to answer each one. And they’ve also asked from a different perspective: with one questionnaire we had on a device, based on its scoring algorithm, you only allowed patients to skip the items where the scoring actually permitted no answer. They questioned why a skip was allowed there and not on the other questions. But upon showing them the method for scoring, they then understood and said, okay, it wasn’t an oversight. It did encourage dialogue from their side, though.
Okay, interesting. Michelle and Martha, what are your thoughts around skipping?
AUDIENCE COMMENT: So what I was going to say I think you’ve already mentioned, which is that in the exhaustive process of developing and testing a questionnaire on paper, cognitive debriefing, do it on electronic, you should know which questions are likely to be problematic and it really shouldn’t ever be a surprise when you get to a pivotal trial and you say well how come they didn’t answer this question you know. That surprise should never happen. So if there’s a risk in getting good data quality on a really important concept and you’re worried that a self-report isn’t going to work, maybe you need to get that data a different way, maybe it’s an interview with a nurse or something like that where you can be a little, you know, approach it differently. But it shouldn’t ever be a surprise.
AUDIENCE COMMENT: I’ll add my personal opinion, aside from Optum; this is not an Optum view. I think in electronic presentation of items, it is better not to add a box for “don’t know” or “don’t wish to answer” as an option, because that wasn’t in the original response choices; it’s not part of the survey. However, I agree that the option to not answer should exist. So to me the best way to do it is to present the items as they are on the survey, without those options, and if the patient tries to just move on and hit next, have something come up to prompt them and ask, did you really want to skip this, and then require them to hit the skip button. Which is sort of what different people in the room have been saying: you have a record that they intended to do it, and that leads into all those things people have been talking about. They only really skip it if it really isn’t applicable.
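The flow just described, with no visible skip option but a confirmation prompt when the patient hits next without answering, might be sketched like this. The function and its callback are hypothetical; `confirm_skip` stands in for the “did you really want to skip this?” dialog.

```python
def handle_next(response, confirm_skip):
    """Decide what the device does when the patient presses Next.

    response      -- the selected answer, or None if nothing was chosen
    confirm_skip  -- callable presenting the skip-confirmation dialog,
                     returning True if the patient confirms the skip
    """
    if response is not None:
        return ("advance", response)        # normal case: answer recorded
    if confirm_skip():
        return ("advance", "SKIPPED")       # active opt-out recorded in the data set
    return ("stay", None)                   # patient goes back to answer the item
```

The design point is that "SKIPPED" is a recorded value, not an absence: the confirmation step is exactly what turns an accidental miss into an auditable, active opt-out.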
No, I 100% agree with that. Personally, again, I’m against the idea of including an explicit button saying skip this question, unless, as I think some of the EORTC modules do, the original paper questionnaire has “I do not want to answer this question” built in; that completely makes sense. But for general questionnaires that don’t have that built in already, I much prefer the idea that if you move to the next question without providing an answer, you confirm, I’m skipping, and you have that record.
AUDIENCE COMMENT: This seems to be a bit different from what you presented at the PRO Consortium last year. It seemed, maybe I’m remembering it wrong, that you were much more down the path of not letting people skip any of the items. But maybe I’m misremembering what was presented at that forum. Regardless, I think the best comment I’ve heard so far is: we need data. We’re trying to make a decision about what’s best, and the best way to answer that question is, let’s get some data to answer it. My PhD is in clinical psych, and I feel strongly the patient should be able to opt out. I can tell you that the Beck Depression Inventory question on sexuality is by far and away the most skipped item on there. Maybe you’re not giving the Beck for a label claim, but you still want as much information as possible; maybe that means you put questionnaires where you’re apt to get that later on in the survey. But nevertheless, I think the option of skipping sounds very reasonable as a way to engage patients in a way that feels non-threatening. And finally, to play on what Michelle said: when a patient opts out of a question, have a splash screen come up that basically says, look, having complete data is so important, something that encourages them. You do it the first time on that section and no other times; you’ve reminded them, and then maybe you just confirm the skips after that. I think you could still embellish and encourage on that first skip, with just a minor inconvenience, so they know how important it is, but yet still let them opt out.
Most definitely, and I think more broadly in the trial, what a wonderful way to engage with the patient by really making sure they understand how important the data they’re providing you is. Because to them, it’s a bit of a black box of I press these buttons and then I never hear about it again. But if you really can explain from the get-go to the patient why we’re asking these questions and just how important it is that we get these answers then that really helps with that patient engagement piece. And I think probably the reason for any differences that we see is that my views have now mellowed a bit in the last year around this particular topic.
I think this topic is a great example of things that scientists such as people in this room can while away hours debating and arguing about. When you ask 99% of patients, they don’t care. Which kind of sums up health outcomes in general. But really interesting discussion. Thank you very much. And looking forward to continuing the discussion in the coming days.
[END AT 30:27]