
Panel Discussion: Emuella Flood, Ari Gnanasakthy, Michelle White

May 20, 2015

Full Transcript

[00:00]

MODERATOR

Thank you so much to both of you for really great talks that bring into focus some of the problems with what we need to do, such as the simple idea of having a singular purpose and something that’s in the TPP, in order to have the budget and the timing to work on it. Or having the proper implementation in order to have something to actually show. If you don’t have enough people answering the surveys, it’s not going to get you anywhere, no matter how much money you spent developing it and getting it into your survey. Really great points on capitalizing on what you do have and on looking beyond what you might originally have thought. And then thinking beyond the label, which is something that Ari started to talk about and you also mentioned, is a really great point. We get that a lot: people get really focused on “this is what I need for this” and they may miss the much broader picture. And that’s something that Martha Bayliss will be talking about a little bit in the next discussion, on some interviews that were done with payers about how they use PROs. So we’ll be able to hear a little more about that.

I’m going to start off with one question for the speakers as well. Someone will go around and look for hands to take the microphone around. But while they’re doing that, Emuella, you mentioned this 20 months, approximately, that it takes. And I’ve noticed sometimes when you do a nice proposal about everything you need to do to help someone, they say, oh that’s sweet, 20 months, that’s so nice, but we can’t do that, we don’t have 20 months. How does a company deal with not having the time it needs to put into it?

EMUELLA FLOOD

Sure, that’s an issue. Sometimes sponsors will come to us with a timeline in mind that is much, much shorter than 20 months, and then we need to try to come up with a plan that’s going to meet their needs but still be rigorous enough to meet FDA standards. There are some things we can do to try to shorten the timeline. Oftentimes one of the rate-limiting steps is recruitment, so we can try to be creative about how we go about finding patients for the studies—maybe engage with patient advocacy groups, or take some other measures to gather concept elicitation data, for example looking at blogs, patient forums, and other online sources to get the patient perspective that way. That usually doesn’t take the place of actually talking to patients, but what we can do is start out by looking at blogs and patient forums and then move to a combined concept elicitation/cognitive debriefing, so we combine those into one interview, and that can also help to shorten the timeline. There can be challenges with trying to combine concept elicitation and cognitive debriefing. One is that you take on risk: you might miss something important and then have to make a major change to the instrument during the cognitive debriefing process. Also, the interviews can get quite lengthy if you try to combine them. Patients usually don’t have any trouble with the concept elicitation part—they enjoy talking about their condition and how they feel, and they like having someone to listen—but the cognitive debriefing part is quite taxing and tedious, so trying to combine them can be difficult and lengthy, both for the patient and the interviewer, and the quality can sometimes not be as good as if you have them as separate interviews. So that’s one of the challenges. The other thing you can do is maybe do the concept elicitation by phone; that can help reduce your time, even though we prefer in person. So there are different strategies you can come up with to try to shorten the timeline. You can also do things in parallel to some extent—the translatability assessment, for example, you could potentially do in parallel with other tasks. But certain things are pretty standard: translation is pretty much going to be two to three months, and ePRO development is going to have a certain timeline, so some of those things aren’t really going to change, but we can do other things to try to shorten the timeline.

[05:02]

AUDIENCE MEMBER:  Maybe too—Shimon, not to put you on the spot—but I think some of what you’ve touched on there, Emuella, is the possibility that we’re seeing with groups like PatientsLikeMe of looking at alternative ways of developing questionnaires—not necessarily alternative, but different ways of capturing the data as well. And I don’t want to preempt your talk tomorrow, but I think that’s something else. Two-part question. I think that was really good scene-setting, and it reminded all of us of the various ways that people in this room are trying to feed into a lot of what you were presenting, so thank you for that. With the various caveats that you mentioned—I think we’re all agreed that earlier is better—what are the trends you’ve seen over the last few years in regard to sponsors actually taking that on board, or being able to make that happen—being able to identify the importance of PROs early on and to follow through with that? And I know, Ari, you gave a couple of examples, but is it something that’s actually happening within the field?

EMUELLA FLOOD

It is. So we are being asked by sponsors to start early, so that they can put an instrument into Phase II. So it is happening. And even in areas like oncology, where there might be less likelihood of getting a claim. So it definitely is happening with certain sponsors. And with others not so much, and we have to sort of make do with the time that we have. But I would say that, over time, it is becoming more common to have sponsors come to us early, ask us to advise on the strategy, and come up with a measure early. So it’s definitely happening. I was a little bit surprised—and I know you were mentioning that there is a risk, that you could end up with wasted resources—but it seems, from what I’ve seen, the trend is to start earlier.

ARI GNANASAKTHY

So we need to separate out labels and non-labels. Okay, if it’s a primary endpoint then it’s critical—it’s business critical. If it’s not a primary endpoint, then you start asking questions, like, do you really want to do this? Because there is a risk–benefit associated with it: if it pushes your timeline further, then there’s an NPV impact. So they may take the view, no, probably we don’t need a label—you convince them. Then you have a choice of going to other solutions, like the PROMIS items or other instruments. Even if they insist—say, look, a label is critical because, take hepatitis C, for example, it’s a very crowded market, or for arthritis, we need a label—then you say, okay, let’s go for an approximate solution and great data. That’s the other way of doing it. Instead of investing everything into a new instrument and taking on all sorts of workload, you minimize the workload and shorten the timelines, and here’s an option. And some people go with that.

EMUELLA FLOOD

I just wanted to make one comment, though, with regard to understanding the payer’s perspective early. That’s something that doesn’t happen very often, and it’s something that we’ve tried to encourage. My colleagues in pricing and market access and health economics would say it should be an integrated approach—you should think about health economics and market access early too—and we’ve tried to do that. We’ve tried to recommend that integrated approach, working across those consulting arms to put an integrated plan together, but what we often hear is that it’s too early for that. So we’re not hearing as often that it’s too early to think about the PROs, but that it’s too early to think about pricing and market access. Maybe it’s because the people who are asking us to do the PRO work are different from the people who would be asking for the other work, and so maybe there isn’t integration within the sponsor to be able to do that. But I’d be curious to know what others in the audience have experienced with that.

AUDIENCE MEMBER:  Maybe just to build on that point: how early are you seeing sponsors thinking about the mode of administration? Maybe we’re biased at CRF Health, but we do seem to have sponsors coming to us and saying, okay, we’re going to go electronic, our Phase III is going live in two months, and we’ve just decided that’s what we want to do. Are sponsors realistically thinking, this PRO and the way we’re going to design our studies really suits electronic data capture—are they thinking about that early on?

[10:10]

ARI GNANASAKTHY

So in my short tenure in the consulting world, I see that most of the smaller and medium companies are more tuned in to how things should be done than the larger companies. In all aspects, not just electronic data capture. I was in Boston visiting a whole bunch of startups, and they—the senior managers, who were very senior managers at AstraZeneca and Novartis and places like that—they get it, they know what to do, they just want you to go and do it. It’s the larger companies, where you have a whole range of experience and people with different backgrounds, that don’t get it. That’s my experience. So most medium and small companies get it, and they come to us saying, wow, electronic data capture, because they know paper doesn’t work, for example. So anyway, that’s my take.

AUDIENCE MEMBER:  Do you feel it’s just a time thing?

ARI GNANASAKTHY

No, I don’t think so. I think—I don’t know. Who is here from large companies? There we go. So where are you from, sir? Bayer. Okay. Am I right in saying Bayer is not a PRO-heavy company? Yes, it’s not a PRO-heavy company. And when you go with a PRO, they’re like, okay, let’s get our papers together. I don’t think there is anyone in Bayer who would naturally think, we need to get electronic data. I mean, we’ve got iPhones, all sorts of things, the Samsung 6 and all that, but when it comes to data capture, it’s not natural within the big companies to think electronic.

MICHELLE WHITE

I just wanted to add—and I’ll be talking a little bit about this in the next presentation too—that over time with the SF-36 we are finding that more companies are starting to think a bit in advance. Often the licensing is done very, very early on. And we are starting to see requests coming in for single-item forms and electronic versions more and more often; I’ll be presenting some data on that. So that’s good to see. But we still see a lot of cases where the paper version is licensed and then, when the study comes to implementation, oops, now we need to think about this for the eCOA. So we’re not all the way there, in my opinion.

ARI GNANASAKTHY

Let me follow up on what you just said. These guys do a great job because SF-36 licenses need to be done way early. And you start talking about electronic data capture, and then the team comes to us and says, if you’re going to do electronic data capture there, we probably need to do the whole thing electronic. And then the game changes a little bit. So it gives us some tailwind—at least when I was at Novartis, anyway.

AUDIENCE MEMBER:  Hi, I’m Martha Bayliss from Optum. I just wanted to share an experience I had over the last couple of years, picking up on your comment, Ari, about small to medium-sized companies. This is in relation to a very small biotech company with maybe one or two compounds that they’re working on. They figured out early that if they started to develop some PRO evidence about the condition, the impact, and the treatment benefit, that added to the value of their asset when they went to market it to, you know, big pharma—Novartis, Bayer. So it’s interesting—it’s a different stakeholder than we usually think of; we’re thinking providers, payers, regulators. So there’s a little bit of a trend, I think, in some of these startups that recognize the value of the patient perspective in expanding and making their own assets more valuable down the road.

ARI GNANASAKTHY

Let me add to that. In 2014—that was last year—there was a drug called serelaxin, the only drug that failed registration, the only drug that was destined for fast-track approval that failed. Okay, Novartis. The primary endpoint was a PRO—dyspnea. And Novartis acquired that drug from a smaller company with absolutely no evidence whatsoever for validation or anything. It was a visual analogue scale, a 10-point scale, blah blah blah. And the licensing process did not involve any PRO people, okay. So they looked at the visual analogue scale and said, ah, well, there it is. And it failed, okay. It failed for many other reasons, but it failed. Since then, all licensing activities involve someone from the PRO team. One of us would go to these meetings, and we killed a huge investment into a functional dyspepsia drug for that reason, because the small company didn’t have the amount of evidence that’s required, and that would have meant a huge risk. And I think people get that message now. So the smaller companies are beginning to do things differently, and there’s a lot of activity—positive activity—going on there now.

[15:44]

AUDIENCE MEMBER:  This is Jason Cole with PPD. Two quick comments and then a probing question, if I may. Ari, I loved your comment on training the site staff. It’s something that we almost never talk about. I can tell you that when I was at Covance we would often send a PRO scientist to the investigator meetings. And if you’ve ever fought to get a PRO scientist into an investigator meeting and then witnessed all of the staff utterly shocked by how many questions the site staff asked during that session, it’s great evidence—people at that point are convinced. Having sat through the entire IM several times, I’ve seen that the PRO sessions get the most questions from the participants there. They’re really engaged; they really want to know about it. And I think you really can push the message of making sure that when that paper—dear God—is handed back to us, it’s complete, that we don’t answer questions for them, etc. We can go a long way there. Second point, about the funding sources. I know that a lot of companies actually do differentiate the staff based on whether it’s research or commercial, for taxation reasons, and so you’ll often find resistance to merging those two poles together. So I think one of the simple roadblocks is making sure that you have the right people at the table as often as possible, so that you’re not trying to convince a research person about getting the right tools, the right measurements, in there for going to the HTAs—they just don’t really care about it. That’s one of the things we’ve tried to do over the years. The probing question is—it was asked earlier, but I want to find out a little bit more—what is it, about 85% of the drugs that go through clinical development die in Phase I? And if we’re saying let’s start in Phase I, we’re saying that 85% of our effort is going to die. I realize we’re not that blind; we’re not going to throw it at everything. But how do we do a better job of targeting which programs should get the instruments? Ari, you mentioned whether it’s a label claim or not. Interestingly—and maybe this is just my experience—I find that it depends on the staff working on it and their background with patient-reported outcomes, rather than the actual applicability of that product for a label claim, as to how interested they are in using a PRO, even if it’s not for a label claim.

ARI GNANASAKTHY

Yeah. Thanks, Jason. A lot of comments there. The way I went about it, at Novartis and now with different sponsors, is: don’t start development in the early phase. I’m just cutting to the chase here. Instead, use things that are already there, like the C-Path instruments or the PROMIS items. There are some very good items and measures out there if you want to see whether there is a signal—does the pain improve, does the itching improve. There are a lot of items there. If there is promise, and if the Phase I data is beginning to look promising—and there is usually a gap between Phase I and Phase II—you can take some decisions, commercial decisions, to say, for example, do we really need a label for itching when you have a psoriasis drug? I mean, psoriasis is itching. So do we need one? And it turned out at Novartis, for Cosentyx, the one I shared with you, it was deemed to be important because it’s a very crowded biologic market, and they said, look, it gives us something to talk about. It’s obvious, but it gives us something to talk about. Then it is important, therefore worth the investment. So in the Phase I studies we did not do any PRO development whatsoever. We held back. But we did ask a simple question: has your itching improved? It’s a yes or no answer. And it improved. Okay, so there’s a signal, and the lesions went down. So then we did things properly between Phase I and Phase II. Luckily it passed through Phase II, and then things went north.

[20:27]

At the same time, I know other examples. We did a development for heart failure symptoms. We did everything perfectly, Phase I, Phase II. When it came to implementing the diary into the clinical program, the team said no. And this is one of my arguments for why patient-reported outcomes should not sit in health economics and outcomes research. Because it’s a separate budget, and as long as somebody’s spending it and doing things, they say, yeah, okay, carry on with it—until you come to implement it. Then they will say, wait a minute, so you want to collect this data daily on 6,000 patients for seven years? We’d need a separate server for this. And they say no. But had we known then what we know now, we probably wouldn’t have needed 7,000 patients for this—because of the cost in that example, we’d probably need 500 patients from, you know, Atlanta, and we’d be done. We could have done things slightly differently; we know things better now. So the answer is: in very early studies, there are other instruments and items that exist to get the signal, to see if it’s worth investing. Then you go to the senior manager and say, look, we got a signal here. It’s worth the investment, because the commercial guys are saying that without it they’re going to have a flat market. Then you’re going to have funding. Otherwise you are just shooting in the air.

EMUELLA FLOOD  

I just wanted to make a comment about training, because I think it’s not just with paper. I mean one of the things that comes out of cognitive debriefing and usability testing of electronic measures is not that we need to change the instruments or that there’s any issue in their understanding of the electronic version versus the paper. Oftentimes what you highlight are the things that you need to train the patient on to make sure they understand how to complete the instrument, how to use the device, and so it highlights those things, and it enables you to make sure that you do a proper training at the outset before the trial starts, so that you improve your implementation and your data collection.

AUDIENCE MEMBER:  I agree there completely. The only difference we’ve seen is that ePRO is often invited to the table readily, whereas a PRO scientist presenting on the basics of how to implement the PRO administration—why we do it, what questions are applicable, how we check it, etc.—which kind of blends into what you’re talking about—we often find that the ePRO people are built into the agenda from day one and given, you know, 45 minutes or a breakout session, and the PRO people are told, we’ll get to you if we get to you. Thank you, though.

ARI GNANASAKTHY

No, when Alma and I were working together, we used to get questions like, what happens if the patients don’t bring their glasses? What do we do? Or if they want—I mean, there was a study I was involved in—if they want supportive services, say they become so distressed when they go through the list that they need psychiatric services, what do we do? So there are a whole bunch of things that aren’t in the instructions to the patients or in the ePRO, but you are right, it’s out there. Patient confidentiality—who is going to look at this data—how do you answer that? But it needs to be answered, otherwise you’ll get missing values. So all these things need to go into a training program. Yes.

MICHELLE WHITE

According to Ari’s watch, I think we’re about out of time for this session. Correct me if I’m wrong. We can do one more question? Okay great.

AUDIENCE MEMBER:  Hi, I’m Kathleen Via from Jazz Pharmaceuticals. I just have a couple of process questions, I guess, in terms of the development of PROs and some things that I’m going through right now, as a matter of fact. One of them is around when you’re doing your cognitive debriefing: what do you do if you come up with new concepts? You’ve gone through your whole conceptualization phase, and now in cognitive debriefing something new comes up. Is that sort of like starting over, in the sense that you have to add new items, etc., and then go through the cognitive debriefing again for those?

[25:10]

EMUELLA FLOOD

I think first you have to decide if it’s something that warrants being included in the measure. Just because something new comes up doesn’t necessarily mean it’s something you need to include in your measure, so I think you need to evaluate that. You’d probably also want to make sure, from a clinician’s perspective, that whatever new thing has come up is actually clinically associated with the condition, because sometimes people don’t know why they’re experiencing certain things and whether it might be related to the condition or something else they are experiencing. So I think you’d want to spend the time to evaluate whether it’s something that deserves to be in the measure. Oftentimes, too, with symptom measures, we don’t necessarily include every symptom that is associated with the condition. Typically what we do is come up with a core set of symptoms that are most common and most problematic for patients—so they’re important but also common—and we often don’t try to include every single potential symptom. If you do find that you’ve come up with something unexpected, then you would want to go back and explore whether it’s something that is commonly experienced but for whatever reason didn’t come out in your initial work, and then you would need to develop an item for it and do some additional testing.

ARI GNANASAKTHY

Yeah, I mean, it’s very difficult to add anything more. You said you are going through this process—it maybe sounds like a pregnancy or something. But after your first set of interviews and so on, once you’ve got your draft conceptual model, that should agree with KOL opinion and the literature—if it’s not a rare disease; sometimes that’s not the case. So that’s the first validation before you go back for cognitive interviews, to see if you’ve captured pretty much everything. And if you do come up with something, it’s usually an outlier, and I take your point that you don’t have to include all the items. Which takes me to a slightly different, related point: there is going to be a poster at this year’s ISPOR where Amgen developed a symptom diary for psoriasis and Novartis developed a symptom diary for psoriasis, and they are not the same. Two groups of people can go and look at the same disease and come up with completely different symptoms, though there will be overlaps. And this is why there was a paper by Dennis Revicki, in 2007, after the draft guidance. He said, how can the agency be the arbiter of what a measure should be, when different people can come up with different measures? Because it’s so subjective. So in your organization—I mean, we come from completely different companies—you can go and develop an instrument for some disease, and we can go and do the same exercise, and we could come up with completely different measures. They could have a seven-point response scale, we could have a ten-point response scale. They could have six items, we could have five items. They could be very different. So as long as you have good arguments for why you left that item out—in your mind, as well as arguments that can be convincing to the regulatory agencies—you are okay, I think. But it doesn’t mean that you will be okay. It’s just that you are okay until you get there, and then you see how the discussion goes. But this is not unusual. This PRO world is so good—it guarantees employment for exactly that reason. It’s so subjective; it’s not an exact science.

AUDIENCE MEMBER:  One more question; this has to do with age. I’m interested in the idea that you’re going to have different conceptual models for young children, for adolescents, and for adults. Have you had the experience where you’ve tried to pursue these all in parallel? Like, if you’re talking about a whole range of ages, will you do your whole process of conceptual models, etc., with these various age groups, using different methods, like parent report versus self-report interviews?

ARI GNANASAKTHY

It’s logistically easier. I mean, it’s possible that—if it’s one compound—there are three separate studies or three separate programs staggered, so you don’t have to do that. But it’s logistically easier. You cannot just ask someone to recruit these patients, young or adolescent or adult, whatever, and come up with a conceptual model for each group separately. These are three different programs. You can take mild arthritis, moderate arthritis, and severe arthritis—they are three different programs—so to me it’s the same as that. There’s no difference. And that usually is the case in rare diseases anyway: you have children, adolescents, and adults, and there are like 20 of each in each study.

AUDIENCE MEMBER:  Yeah, we only deal in rare disease so it makes everything a lot more complicated.

ARI GNANASAKTHY

Right.

MICHELLE WHITE

Okay. Thank you.

[END AT 30:55]
