Wearable devices are currently hyped in many clinical research communications. Apple ResearchKit, Apple Watch, Samsung and Pebble devices, Fitbit and many more are omnipresent. So how new are these devices in clinical research, and in which ways do they potentially add value to research goals? What challenges have we seen during early implementations, and what do future solutions look like?
So we’ll start with some spam here and introduce myself. I’m Head of Innovation at ICON. I’ve been in the industry for 17 years now, and with ICON for four. I head up the product innovation team at ICON, and I always say I have an extended team of 12,000 employees worldwide that we work with. As of last week we have a crowdsourcing platform within ICON, so I can reach out to every single employee, post the questions and problems I have, and they can respond and discuss solutions with each other. We piloted that for six months and it works really well. So that’s why I now claim I’m heading up a team of 12,000 rather than 10. And they know a lot more than the initial 10.
So we’re doing a lot of research, with a lot of projects ongoing with CRF and by ourselves as well. We’re looking into BYOD and wearables in clinical research, into data analytics, and into performance outcomes assessments. We are pretty busy on the publishing side: we have authored or co-authored three books in the last 18 months, with one more to come, probably early next year, and about 10 to 12 journal articles, one of which was cited yesterday. And we have plenty of industry presentations where we basically try to drive the agenda on these topics: BYOD, wearables, data analytics, and now, latest, the performance outcomes assessments. So we see our role in innovation as doing the research, getting sponsors involved, getting the regulators involved, and just driving and pushing the envelope. Sometimes we’re more successful than other times, but it’s always fun.
So, wearable devices. I heard this earlier: so what? That’s usually how we approach projects. When somebody comes to us, a sponsor or a colleague, and says this is the next big thing, the first question is: so what? As I saw discussed yesterday in some of our groups, I think that’s a good approach. I learned it a long time ago, in bid defense meetings at another CRO where I worked. When we had slides to present to sponsors, there was one executive vice president, and all he did was look at the slides when we thought we were done and say: so what? So you have to be sure that what’s on the slides makes sense and is meaningful to the audience, in this case usually sponsors. It was a good approach, and I liked it. I think it’s important to ask ourselves regularly: so what? Because if we don’t do it, patients will.
The first wearable, we claim, appeared around 1949. And as you can see, wearables have developed in size and in functionality. There is a company in Boston, MC10, that now has a printable circuit they can apply like a plaster. You can probably read all of this, but there’s some cool stuff in here: there’s an LED here, there’s a wireless power coil here. So they have all the functions you have on your mobile phone in that little plaster. This is only at the prototype stage, but these are the things we have to look at. This is where the industry is going and where things will happen in the future.
I want to focus a little bit on actigraphy, because I only have 20 minutes; otherwise I could be here talking all day about the other stuff. So let’s focus on actigraphy today as one example. It’s not the only thing, but it’s probably the low-hanging fruit right now, because there are validated actigraphy endpoints already out there. I mentioned this before and I’ll say it again later in the presentation: it’s all about the clinically meaningful endpoints. We need to look at the protocols first and then start thinking about how we support a protocol endpoint. In actigraphy, particularly in sleep, most of that work has already been done, and as you can see, it has been done for 20 years. It has the approval of the American Academy of Sleep Medicine. And sleep endpoints, as some research of ours that I’ll show on the next slide indicates, are the most common actigraphy endpoints currently in clinical trials.
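To make the idea of a sleep endpoint derived from actigraphy concrete, here is a minimal sketch. The wake threshold and epoch length are illustrative assumptions for this example only; real studies use validated scoring algorithms (e.g. those accepted by the American Academy of Sleep Medicine), not this simplified rule.

```python
# Illustrative only: derive simple sleep endpoints from per-epoch activity
# counts recorded by a wrist actigraph. The threshold of 40 counts and the
# 1-minute epoch are assumptions, not a validated scoring algorithm.

def sleep_metrics(activity_counts, wake_threshold=40, epoch_minutes=1):
    """Score each epoch as sleep or wake, then summarize the night."""
    scored = ["sleep" if c < wake_threshold else "wake" for c in activity_counts]
    total_sleep = scored.count("sleep") * epoch_minutes
    time_in_bed = len(activity_counts) * epoch_minutes
    efficiency = 100.0 * total_sleep / time_in_bed if time_in_bed else 0.0
    return {"total_sleep_min": total_sleep,
            "sleep_efficiency_pct": round(efficiency, 1)}

if __name__ == "__main__":
    night = [5, 0, 12, 80, 3, 0, 0, 95, 2, 1]  # ten hypothetical 1-minute epochs
    print(sleep_metrics(night))  # {'total_sleep_min': 8, 'sleep_efficiency_pct': 80.0}
```

The point is that the endpoint (total sleep time, sleep efficiency) is a function of the raw counts plus a scoring rule, which is exactly why validation of the algorithm, not just the hardware, matters.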
There are validated endpoints around activity as well. One thing we found when we talked to colleagues and sponsors about actigraphy is that a lot of people have never heard the word actigraphy. It’s somewhat of an artificial term, so we try to talk about clinical endpoints or activity rather than actigraphy. But some activity endpoints have been validated against doubly-labeled water, going through a whole host of studies to prove that activity data is comparable to the gold-standard endpoints we have. So you have total activity and energy expenditure; we see those in protocols.
And then there are changes in activity level; we had that discussion yesterday. Take a device like a Fitbit, and I like picking on Fitbits, so let’s take Fitbit. If it says you did 3,500 steps, and another device says you did 2,500, this is not a question of who’s wrong and who’s right, because both are wrong and both are right. It’s about the next day: when one again shows 3,500 and the other 2,500, both are right, because neither shows a change from one day to the next on the same device. The devices, even the Fitbits, are actually manufactured to such high precision that they measure the right things and are very consistent in their measurements, although they are not medical devices. But you cannot currently compare different devices against each other. And steps is a bad endpoint to start with; I’m not going to get started on that one.
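The point above, that two devices can disagree on absolute counts yet agree on change, can be sketched in a few lines. The step counts here are hypothetical numbers chosen to mirror the 3,500-versus-2,500 example, not real device data.

```python
# Illustrative only: two hypothetical trackers disagree on absolute daily
# step counts but report the same relative day-over-day change, which is
# why within-device change is a more defensible endpoint than raw counts.

def day_over_day_change(readings):
    """Percent change between consecutive daily readings from ONE device."""
    return [round(100.0 * (b - a) / a, 1) for a, b in zip(readings, readings[1:])]

device_a = [3500, 3850, 3150]   # hypothetical daily steps, device A
device_b = [2500, 2750, 2250]   # same wearer, device B

print(day_over_day_change(device_a))  # [10.0, -18.2]
print(day_over_day_change(device_b))  # [10.0, -18.2]
```

Both devices report a 10% increase and then an 18.2% decrease, even though their absolute counts never agree; comparing raw counts across devices would be meaningless, but comparing change within a device is consistent.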
So when you look in the literature you will see validated endpoints published, specifically in sleep and actigraphy, and usually around CNS; a lot of validated pain studies are out there. So the next question was: who is doing that work? We are trying to support it, but we’re not doing it all by ourselves. So we did an analysis of clinicaltrials.gov and the European equivalent, plus two or three of the paid-for databases, I think Trialtrove and others. We found that a lot of companies have been involved, and the bigger the firm, the more studies they have done. You can see Pfizer, GSK, and Novartis leading the pack, with Teva, Sunovion, Takeda, and J&J following suit. So there’s a lot of activity. I don’t have a slide showing it over time, but this covers the last 15 or 20 years, and over the last five years the curve is increasing, so we see more and more companies dabbling with this and doing studies in that space.
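In spirit, the registry analysis described above is a simple aggregation over study records. This sketch uses made-up records and sponsor counts purely to show the shape of such a tally; the actual figures came from clinicaltrials.gov, its European equivalent, and the paid-for databases mentioned in the talk.

```python
# Hypothetical sketch: given study records pulled from a trial registry,
# rank sponsors by the number of actigraphy studies. The records and
# counts below are invented for illustration, not the talk's data.
from collections import Counter

studies = [
    {"sponsor": "Pfizer",   "endpoint": "sleep"},
    {"sponsor": "GSK",      "endpoint": "activity"},
    {"sponsor": "Pfizer",   "endpoint": "activity"},
    {"sponsor": "Novartis", "endpoint": "sleep"},
]

by_sponsor = Counter(s["sponsor"] for s in studies)
print(by_sponsor.most_common())  # [('Pfizer', 2), ('GSK', 1), ('Novartis', 1)]
```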
So then we looked at the therapeutic areas that were listed. Again, what we see is that a lot of these CNS indications are at the forefront. At ICON we have a dedicated CNS therapeutic area group headed by a VP out of Paris, and we have regular meetings with them and our medics to define meaningful endpoints and have that conversation with sponsors for their protocols. CNS again seems to be the low-hanging fruit, because that’s where a lot of the activity currently is and there’s a lot of data around activity in CNS studies.
Then one of the questions that always comes up: is it only an exploratory endpoint? We found that it’s actually not. When you look at the data, almost 90% are primary or secondary endpoints in these protocols. There’s still a chunk of 11% where we couldn’t tell from the protocol what it was, or we didn’t have the protocol. But what we found is that this is not an exploratory playground anymore. Especially in CNS, companies do use these endpoints to support pharmaceutical drugs in clinical research and in submissions to FDA. I remember, probably four or four and a half years ago at the C-Path PRO workshop in Washington DC, Laurie Burke approached me. She wrote the PRO Guidance, so all of you probably know, or at least should know, of her; I had the privilege to meet her a couple of times. She asked what I knew about actigraphy, and I said, well, I did a study ten years ago, not the first one, but why, what do you need to know? And she said: we just had the first submission, and we have no idea what we’re looking at. We need somebody to explain what we are actually looking at and how we have to interpret that data. So we know it’s out there, authorities are looking at it, and companies do submit it.
A lot of novel exploratory endpoints are in development, and I think this is where most of the action will be right now, and where we all probably need to invest some time and effort: developing new endpoints. As I said yesterday in the workshop, I think we should not repeat the mistake we made with PRO, where we tried to copy the paper. When I hear about it, I’m not surprised, because we’ve done the same thing: you get a paper copy of a body diagram and you just scan it back in so that you have it on your device. This doesn’t make any sense at all, but that’s what we did and that’s what we still do. And I think we’re at risk of doing something similar with wearable devices.
So instead of trying to mimic existing endpoints, I think it would be far more productive to take a step back and start thinking about new clinical endpoints that would support a drug submission. Even when we validate activity against doubly-labeled water, which gives it some credibility, I still think we need to step away from that and think very differently about how we support drugs in the future.
So, conclusion. I think we have enough data showing that actigraphy, and wearable devices in that space, can be and are being used reliably, and that the data can be submitted in a regulatorily conformant way. But there are a few things we need to think of. Again we come back to the clinical relevance of the endpoints: we need to determine what the endpoint is, what we want to measure, and how any dedicated wearable device supports our claim and our endpoint. Then we need to look at these devices and at validation. I love and I hate the word validation, because you can kill every effort with it and you can support every effort with it; it really depends how you spin it. I would use it here to help us. So you need to look at the validation of the devices and the algorithms, but in this case it’s more about the validation of the endpoints.
I also think that rather than specifying wrist device A, B, or C in a protocol, we should look at groups, or classes, of devices. They change so quickly that I wouldn’t want to bank on any particular model. And we have done studies; we had one study with a company called BodyMedia, an armband worn here for actigraphy and galvanic skin response. An expensive, early device: if you had to change the battery, the instruction was to do it within 30 seconds or you lose the data. So the sites were standing by with the second battery and it was a quick exercise. But it was a good device, it captured a lot of information, and we did the first studies with it. They had built specific software for us to analyze the raw data for our clinical research needs. Then they got to the next version of the hardware, having sold a million, a million and a half of the old ones into fitness studios in the Pittsburgh area and in the US in general. So we got the new devices, and then they told us: ah, I forgot to tell you, it’s not compatible with the software we built for you to analyze the data. And we had already committed to a study for one sponsor, so we said, okay, that’s a problem. What we did: in Germany we had a distributor with about 30 or 40 universities that would buy devices from him. He had 20 he could contact, and he said, look, this company, meaning us, will give you a new device and a hundred euros if you send us your old one. So they all got a new device and a hundred euros, we got our 200 devices in six weeks, we refurbished them and used them in a clinical study, and everything was good. We were lucky that we could get those 200 devices. So we have to find ways with the manufacturers to make sure we can work even in long-term studies, or we need to find ways to compare different devices, or different software packages, over time.
That’s going to be tricky but I think it’s possible.
Next thing: patient burden. We talk about that a lot. I’m chair of the ePRO Consortium Scientific Subcommittee, and I’ve just submitted a proposal that we look into patient burden. We all know it exists. We all know we need to consider it. But how? How do you measure patient burden? When is too much too much? Are ten questions a day more difficult than 50 a week? Is a handheld easier than a tablet, or a tablet easier than an IVR system? Are ten questions a day on an IVR diary enough, or already too much? We don’t know. We have a lot of hearsay, but there are no real publications out there. So we just submitted that to the ePRO Consortium; Paul, you’ll be part of that. We’re trying to do some research on whether we can somehow quantify patient burden, other than going to patients every time and asking ten of them how they feel about it, because those ten, or 20, may or may not be representative.
And then data transfer and analysis; I could also have written down operationalization. I think it’s always a little underestimated how much effort you have to put into data transmission. Data security was mentioned earlier. Last night in my hotel room I heard something about Marks and Spencer: they have a new loyalty card, and people who registered their card last night could see other people’s data on the website. According to Marks and Spencer it’s not a security breach, it was just a glitch; it wasn’t a hacker attack, just a software issue. So they turned the website off and they’re trying to fix it right now. At no point could anyone see anyone else’s credit card, but what was reported is that people could see what other people had bought on that card. It’s like me going to Tesco, buying a couple of sausages, and then you could see that I bought a couple of sausages. But I was in Marks and Spencer today, and there were queues to get that card. I’m just really surprised: there was a big issue with that card, and yet consumers obviously don’t care.
So sometimes I wonder, and I’m trying to be provocative here, how much of that patient centricity we really need to consider. We have people in my team and in ICON who say, look, and we talked about preference earlier, that patient preference is not necessarily something we need to get too hung up on, because even if patients prefer one thing over another, they will still be able to do the thing they like less for a certain period of time. So again, I’m thinking about that Marks and Spencer example; I’m a cynic to a certain level, and that just amused me, I have to say.
And one thing we’re looking at: I think we need some standardization, whether of clinical endpoints, of data transmission, or of data formats. Right now you could work with five different vendors and five different systems and get 20 different data sets, and nobody knows how to interpret them, and you do that over and over again, for every study and every sponsor. It’s time; I think EDC and other initiatives succeeded when they started to look at collaboration between vendors and industry bodies to standardize certain things. One thing, coming back to you guys, about the body diagram: why can’t we, as the ePRO community and the ePRO Consortium, hire a graphic designer or a UX company to build us a hand, left and right, probably no need for male and female versions but to be determined, and a body, a set of diagrams that all the vendors can use? I’m sure we can talk to the authors, and most of them, some won’t, but most of them, will appreciate it, and then we just get rid of that problem. When it comes to these things I’m not only cynical, I’m also frustrated. We’ve talked about this for so long, and we have all the means to do it. While the vendors all compete, we have an ePRO Consortium where in the past we’ve collaborated very successfully, and I think this is an easy one to just sort out. So that’s another one I’m going to put on the science committee agenda.
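To show what standardizing a data format might look like in practice, here is a hedged sketch of a single shared record shape for actigraphy epochs that any vendor could emit. The field names are hypothetical, invented for this illustration; they are not an existing CDISC, ePRO Consortium, or vendor standard.

```python
# Hypothetical sketch of a vendor-neutral actigraphy record. All field
# names are illustrative assumptions; the point is that one agreed shape
# would replace the "five vendors, 20 data sets" problem described above.
import json
from dataclasses import dataclass, asdict

@dataclass
class ActigraphyEpoch:
    subject_id: str
    device_class: str      # a class like "wrist", not a specific model
    timestamp_utc: str     # ISO 8601 timestamp
    epoch_seconds: int     # epoch length in seconds
    activity_counts: int   # raw movement counts for the epoch

record = ActigraphyEpoch("SUBJ-001", "wrist", "2015-10-21T02:00:00Z", 60, 37)
print(json.dumps(asdict(record)))  # same JSON shape regardless of vendor
```

Specifying a device class rather than a model also echoes the earlier point: protocols should be able to survive a hardware refresh without a new data format.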
Good. That was it for now. Questions later.