
BYOD: Patient Perception Across Screen Sizes

October 26, 2015

Recorded Live at the 2015 TrialMax User Group London Meeting. Presented by Paul O'Donohoe, CRF Health.

Full Transcript 

[00:00]

PAUL O’DONOHOE

Hello everyone. For anyone who doesn’t know, I’m Paul O’Donohoe, Director of Health Outcomes for CRF Health, so I head up our scientific team within the company. And I want to just take a little bit of time to talk to you about BYOD, bring your own device. I think, I imagine, everyone here, anyone who’s had any kind of brush with eCOA in the last year or so will have come across this term, BYOD. It’s a bit of a buzzword in the eCOA industry at the moment. I think it’s eCOA’s equivalent to patient centricity. If you’re having a conference on eCOA, you have to have a session on BYOD, even if you don’t really have anything to talk about.  Hopefully, we have a bit to talk about here. I want to talk about the state of the art at the moment, but also share some work we’ve been doing, exploring the area of bring your own device (BYOD), what that means for patients, what it means for the data we’re capturing. Obviously, we’ve been developing and deploying our own app, so we want to get a better understanding of exactly how that might work in a clinical trial setting.

Just to make sure we are all on the same page, though, to define BYOD, obviously the traditional approach to eCOA studies, the so-called provisioned approach, is where you provide all participants in the study or all sites in the study a stand-alone device. So you typically get a large number of the same device, you hand that out to sites, you hand that out to participants. So participants are carrying an extra device with them, but all participants within the study have the same device. So there’s consistency, they’re using the same system, the same device. BYOD, bring your own device, the clue is in the name, it’s all about taking advantage of the device the participants themselves have. So typically we’re thinking a smartphone, but theoretically, BYOD could extend to tablets, laptop computers, internet-enabled televisions, it could also refer to accessing a website or accessing a web app or a stand-alone native app. But traditionally when we’re talking about BYOD, when you hear people in this space talking about BYOD, they’re really talking about taking advantage of a participant’s smartphone and having that participant download an app onto their smartphone on which they would complete the study questionnaires. So that’s really what we’re focusing on here, particularly with the additional information I’m going to be sharing with you.

Why has BYOD become this big buzzword in our industry at the moment? I think there are two key things driving this, and then one other thing I would add. I think the reduced hardware cost is probably one of the most attractive things, certainly when we’re talking to sponsors. Hardware costs make up a not insignificant part of the budget for an eCOA study. So anything that can be done to reduce that is obviously going to be very attractive, and I think this is something that sponsors maybe latch onto, potentially a bit too much, around the potential cost savings that can be had in a BYOD study, because, at least in the idealized version, you’re pushing that cost onto the participants. The other big thing we hear talked about is the idea of decreased burden on participants, because rather than providing them an additional device, a device they may not be familiar with, and an extra device that they may have to carry with them, you’re now taking advantage of the device that the participant has with them 24/7 anyway. People in the general public interact with their smartphones a huge number of times every day, so if they have that device on them, why not take advantage of what they’re already familiar with and what they already have? So the theory is, there’s a reduced burden on the participants in your study. I think another important thing that’s maybe not considered as much is the possibilities that taking advantage of that participant’s own device opens up, the possibility of additional data and novel data. There’s some really interesting work being done—there’s a group in America called Ginger.io who does some work on predicting when patients with depression are suffering a relapse based purely on the metadata they’re generating from their smartphone: the number of calls they’re making, the number of steps they take, how far they move as calculated by the GPS. There’s some really interesting additional fine-grained data that you might be able to correlate with traditional PRO reporting that can give you a very interesting insight into the participant’s experience. So these are a lot of the reasons that are expressed for why BYOD is becoming this buzzword, as I said, but I think, to be honest, one of the key drivers is really Moore’s Law: technology becomes cheaper, technology becomes better, and technology becomes more ubiquitous as time goes on.

[04:46]

This is some horrible graph from Ericsson, but basically, they are predicting that by 2020 there will be more than 6 billion smartphone subscriptions in the world, and that is with a projected world population of just over 7 and a half billion. And obviously that’s not 6 billion individuals—I’m sure I’m not the only one with two smartphones—but that’s still a hell of a lot of smartphones that are going to be out there, and that’s a large percentage of the people we might ever be interested in getting into a clinical trial owning a very powerful computer. So there’s obviously a great desire to take advantage of that. And certainly within the western world there’s an assumption that we can improve and make things easier in our day-to-day life using technology. Online shopping is just a standard thing we expect to be able to do now; we expect to be able to do everything online. Why not participate in clinical trials using a device that we carry with us all the time? I think this is just really the key driver. Technology is more ubiquitous, technology is cheaper. That’s why we’re seeing the space now opening up for BYOD to be a reality.

However, there are always a few things that stand in the way of BYOD being a mainstream approach in clinical trials at the moment, largely driven from the regulatory perspective, but then also a bit on the logistical side of things, of how we make things work practically. From the regulatory point of view, the real key issue that we run into is this idea of equivalence. Now, anyone who has worked in the more traditional provisioned eCOA space I’m sure has come across this idea and this term, equivalence, really being driven by what was mentioned in the FDA PRO Guidance: when an instrument is modified, we need to demonstrate that instrument is still adequate—adequate, however you want to define that—and the FDA explicitly call out migration from paper to electronic as being a modification of an instrument. In standard wonderful FDA form, they didn’t actually tell you how you go about showing that an instrument is still adequate, and that’s where the ISPOR PRO Task Force came in. They developed a paper specifically focusing on going from paper to electronic instruments and how you might demonstrate this adequacy. They define different levels, and we don’t need to focus on exactly what goes into that, but basically you’re comparing the paper version and your new electronic version, and you’re demonstrating that they’re still the same. Either patients are reporting that they’re interpreting them the same way, or you in fact statistically compare the data you capture on paper and the data you capture on an electronic device, and ideally you’re not seeing a difference there.
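As a concrete illustration of the kind of statistical comparison being described, here is a minimal Python sketch of a two one-sided tests (TOST) equivalence procedure; the margin and data are invented for illustration, and the ISPOR recommendations cover several alternative approaches.

```python
import numpy as np
from scipy import stats

def tost_equivalence(paper, electronic, margin):
    """Two one-sided tests (TOST): is the difference between the paper
    and electronic means inside +/- margin? Equivalence is concluded
    if the returned p-value is below alpha."""
    paper = np.asarray(paper, dtype=float)
    electronic = np.asarray(electronic, dtype=float)
    diff = electronic.mean() - paper.mean()
    se = np.sqrt(paper.var(ddof=1) / len(paper)
                 + electronic.var(ddof=1) / len(electronic))
    df = len(paper) + len(electronic) - 2   # pooled df; Welch df is a common refinement
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    return diff, max(p_lower, p_upper)

# Invented example: 0-10 symptom scores from paper and app arms.
rng = np.random.default_rng(0)
paper_scores = rng.normal(6.0, 1.5, size=40)
app_scores = rng.normal(6.1, 1.5, size=40)
diff, p = tost_equivalence(paper_scores, app_scores, margin=0.5)
print(f"mean difference = {diff:.2f}, TOST p = {p:.3f}")
```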

So this idea of equivalence has been called out as being very important for the traditional provisioned eCOA market, but it’s also an important issue—and the regulators have flagged this—for BYOD. Now, this equivalence piece is something we’ve looked into a lot for the provisioned area, the provisioned device, going from paper to electronic. We’ve done so many studies that there’s in fact been meta-analysis done on all the statistical equivalence studies, which are the most robust studies you might do to demonstrate equivalence. This Gwaltney paper is one of the key papers in eCOA. And they in fact found—this was back in 2008—65 of these really robust statistical studies comparing paper to electronic. They took a look at the correlations between them and found an average weighted correlation of .90. So a very high correlation between how people were reporting on paper and how people were reporting on the electronic version. People were basically reporting the same, so much so that they went so far as to say, “Extensive evidence indicates that paper- and computer-administered PROs are equivalent.”

That was back in 2008, so we worked with some of our friends at ICON to do an updated version of this paper. We found an additional 72 studies, from 2007 up until the end of 2013. And again, we found an extremely high correlation, .88. So we didn’t see any changes from those older studies to the more modern studies; we’re seeing a very high correlation between paper and electronic implementations of these questionnaires. Another thing I found particularly interesting: one of the key criticisms raised against equivalence studies is that the negative ones don’t get published, so that there’s a publication bias. We actually took a look at that in our meta-analysis, and we found that there would have to be 123 additional missing equivalence studies with a very low correlation that have not been published before you would start affecting that very high correlation we found. So basically we didn’t find much evidence for publication bias, which I think was very, very reassuring, because that was one of the key challenges we were hearing against all this evidence we were collecting on equivalence, that basically negative studies don’t get published. And I’m sure that could still be an issue, to a certain extent, but this finding does suggest that it’s not as much of an issue as maybe we thought.
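To illustrate the arithmetic behind those two findings, the sketch below pools study correlations with a sample-size-weighted Fisher z transform (one standard approach; the meta-analysis itself may have weighted differently) and computes an Orwin-style fail-safe N for the publication-bias question. All inputs here are hypothetical.

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Sample-size-weighted mean correlation via Fisher's z transform
    (fixed-effect pooling with the usual n - 3 weights)."""
    rs, ns = np.asarray(rs, dtype=float), np.asarray(ns, dtype=float)
    z = np.arctanh(rs)          # Fisher z for each study's correlation
    w = ns - 3                  # inverse-variance weights for z
    return np.tanh((w * z).sum() / w.sum())

def failsafe_n(k, r_observed, r_criterion, r_missing=0.0):
    """Orwin-style fail-safe N: how many unpublished studies with an
    average correlation of r_missing would it take to drag the pooled
    correlation down to r_criterion?"""
    return k * (r_observed - r_criterion) / (r_criterion - r_missing)

# Hypothetical study-level inputs, loosely echoing the talk's figures.
rs = [0.92, 0.85, 0.88, 0.90]
ns = [60, 45, 80, 120]
print(f"pooled r    = {pooled_correlation(rs, ns):.2f}")
print(f"fail-safe N = {failsafe_n(k=72, r_observed=0.88, r_criterion=0.70):.0f}")
```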

[09:48]

So we found all this quite robust statistical evidence that the paper and electronic implementations are capturing the same kind of data. But the issue we are now facing with BYOD is that, within a single study, you might be facing the possibility of tens if not hundreds of different devices being used to complete the study questionnaires. And we can’t possibly run those kinds of equivalence studies for all those different devices. Something you might be able to do, though, is test a range of devices, from a small device to a large device, and then make some assumptions about what might happen between those extremes, which is exactly something we decided we wanted to start taking a look at. Something I think it’s worth bearing in mind—and it would be interesting to see if anyone disagrees with this—is that going from paper to a handheld device is a much more significant change than going from an iPhone 5s to a Galaxy Note, for example, I would argue. So based on all that really robust statistical evidence that there’s very little difference between paper and electronic implementations, what might we expect when we go from a small screen to a slightly larger screen? As I said, that’s something we wanted to start to explore.

So we ran a pilot study in 20 participants across a very wide range of ages, and more importantly a wide range of self-reported comfort with technology. And we really wanted to explore their experience interacting with an app-based version of a vaccine symptom diary that we developed on TrialMax App, across a range of smartphone devices from what we deemed a small device, which was the iPhone 5s, up to a larger device, which was the Samsung Galaxy Note. And we also wanted participants to use their own device in this study as well, so we could really explore how they got on interacting with the app on the device they were most familiar with.

Something I want to flag up at this point is that this was very much a worst-case scenario approach to this study and to BYOD. Firstly, we didn’t offer any training to participants; we just told participants what we wanted them to do and then observed how they got on following those instructions. There were no patient instructions provided to them, there was no in-depth training provided to them; we simply told them what we wanted them to do and observed them trying to carry out those tasks. If they got stuck, obviously we coached them to try out various things, and if they just couldn’t proceed then we would explicitly tell them what to do. But you would imagine in a clinical trial you’re going to be providing quite explicit training to your participants. So that’s one thing to bear in mind for this study. The second thing is that in a clinical trial you’re really only expecting a participant to be using one device, their own device. In this particular instance we were asking them to use a range of different devices, a number of different devices, including devices they were not familiar with. So that’s another aspect of the worst-case scenario approach we took to the study to bear in mind, because we really wanted to understand the pure usability of the app, but then also the pure experience of a participant coming with no background, no understanding of the app, to really see how they got on using it.

And right away, one of the most interesting things we discovered—at least from my point of view—was that a large number of the participants struggled just to get the app onto their own device in the first place. Nine, almost half of the participants, ran into issues getting the app onto their own device. And this actually was a wonderful reminder for me that I work in a very technologically savvy bubble, because I thought maybe one or two people might struggle—quite meanly, I assumed it would be some of the more elderly participants—but I figured everyone else would know how to get an app onto their device, that it wouldn’t be an issue. Almost half the participants had issues getting the app onto the device. And the reasons for that were very eye-opening. Some forgot the password for the App Store. It’s a really basic thing that we don’t think of, but you have to have a password to get into the App Store, and some participants just didn’t know their password. Some had incompatible software, which I think is always going to be an issue. Network issues, which you’re always going to struggle to overcome. Insufficient memory, so participants didn’t have enough room on their phone to download the app. And one of my favourites: a couple of participants were able to get into the App Store and download the app, but then couldn’t actually find the app on their phone, which was fascinating. Again, it’s not something that had occurred to me as being an issue but was a very real issue for these people participating in the study. So I think—and I want to circle back on this point—I really want to emphasize that this was for me personally very eye-opening. Again, we work in very technologically savvy circles. We work with people who are very comfortable with using smartphones, with using technology. That’s not our participants, that’s not the patients who we are going to be focusing on. And I’ll come back to this a bit more.

[14:55]

[Inaudible audience comment 14:55-15:10]

Yeah, which is a great point, but I think it’s something we can sometimes lose sight of when we get all excited about the fact that we’ll just give them an app to download. They have to actually download it, and they might not know how to do that.

So focusing specifically on the issue of differences between devices: how participants were interpreting, and how they reported they might respond to, the same question on different devices. We gave participants a hypothetical scenario to report: they had to log into the app, report symptoms suffered after a vaccination, rate themselves on different scales, respond to different user elements, and do this across three different devices. Ideally their own device and then two provisioned devices, or three provisioned devices if they weren’t able to get the app on their own device. And then we were really trying to dig into whether they saw any differences between the devices. Would they interpret any of the questions differently across the different devices? Would they respond differently across the different devices because of the different screen sizes? And the very positive news was, we didn’t really see any of those differences. Some very nice quotes: “They all look exactly the same. I’m really comfortable with the smaller phone, but would be happy to use all three and it wouldn’t affect the answers.” “There would be no difference in answers. I could comfortably do it on any device.”

Only three participants explicitly said they might see differences—or maybe not that they themselves would respond differently, but that they could see why people might respond differently. “I would probably answer the same on all devices, but if you were not used to a small phone, you could miss something and answer differently.” Kind of makes sense. “You may concentrate more on a big screen if it was flat on a table, so could possibly give different answers.” Not 100% sure I follow that one, but they obviously felt there was something about the bigger screen space that would help you concentrate more. And one participant felt they might go into more detail on a device that is easier to use, for example if typing were easier on a larger device. Again, that makes a lot of sense. One thing I would point out here, though, is that these concerns really seemed to revolve around issues of familiarity. They’re talking about not being familiar with the device, which is something we hope to overcome with the traditional approach to BYOD, because participants will be using their own device. Again, this highlights the worst-case scenario nature of the study we’ve been doing, as I said: participants were using devices they weren’t familiar with. You would hope that in a real BYOD study they’re going to be using devices they’re familiar with, which would hopefully overcome some of these challenges.

We also saw that the majority of participants reported they would answer the questions the same on the app as on the paper version. The paper version in the study was horrific, basically a stack of A4 pieces of paper, so it wasn’t very surprising to see all participants say they would prefer to use the app over the paper version.

So what does any of this tell us? This was a very small study, a pilot study, very exploratory. We just wanted to start getting a sense of, really, when you sit down with a participant and they’re interacting with an app, what their experience is, first of all, but then also whether there is going to be an issue around answering questions on different-sized devices. I would make the argument, combined with the large amount of data we have from those very robust statistical equivalence studies looking at paper to electronic, that the kinds of things we are hearing from participants in this study suggest that the issue of equivalence with BYOD is maybe less of a concern than some people make it out to be. It’s definitely not a showstopper. I don’t believe we’ll ever get into a situation where participants will be answering so differently on an iPhone 5 versus a Galaxy Note that your data is completely incomparable. I don’t think that’s going to be an issue, and I think it’s less of an issue than it’s occasionally made out to be.

The key challenge, I felt, was technical support: putting this on patients and on sites without offering them any kind of additional support from the study, whether it’s up-front training, a good help desk, or good documentation, is probably not going to work. You’re really going to have to have some kind of robust system in place for supporting the patients and supporting the sites, particularly for getting the patients set up on the app. We can’t assume the sites will know how to get an app onto the vast range of smartphones they might see coming into their study. So that has to be thought about carefully in the actual training that you’re providing. And it’s going to be a challenge, because you’re not going to know all the different devices and all the different scenarios you might run into in your BYOD study. So I think good quality training for that will be a challenge, but it’s certainly something you can develop in a generalized enough way that you will be able to support sites and patients.

[20:16]

But the real key message for me was that participants with a wide range of experience with technology were very comfortable using the app after basically no training. We did see quite a significant learning curve. Some participants really were very slow using it on the first smartphone, but by the time they went through the same scenario for the third time, they flew through it, and everyone expressed comfort with the idea of having to use the app and the smartphone over a long period of time for a clinical trial. Some of the scenarios involved them having to log out and log back in, re-report symptoms, and add data, so it wasn’t just yes-no questions; there were a number of user elements they had to interact with. So hearing that they were comfortable and would prefer to use the app over the paper-based version—although, as I said, I admit the paper-based version was horrific—was reassuring.

So, the key outstanding challenges for BYOD in general. As I said, this was a very small study, just a pilot study, and it doesn’t fully answer this issue of equivalence. I do strongly believe this issue of equivalence is not an issue, inasmuch as it’s not going to be a showstopper for BYOD, but I do think the question still needs to be answered to the satisfaction of the regulators. The regulators have said they do have some concerns about the comparability of data captured across a wide range of devices. We are part of the C-Path ePRO Consortium, which I’m sure some of you have heard of, which is regulators, sponsors, and vendors working together on various PRO development, but also developing best practices for eCOA. And one of the things that group is looking at is the possibility of running almost the gold standard BYOD study to try and answer some of these questions. Whether that will happen, I’m not sure; death by committee seems to be a common occurrence at C-Path, unfortunately, but they’re at least talking about it. A few papers have actually just gone to press that we’re a part of as well, which try to set the scene on BYOD and talk about some of these challenges and how we might address them. I think some of this equivalence data might have to be driven by the sponsors as well, with support from the vendors, obviously. I think someone is going to have to talk to the regulators—and this seems to terrify sponsors as soon as I mention it—and say: we think this is a good study for BYOD, this is why we think it’s a good study, this is the population we’re interested in looking at, this is how we’re going to address those participants who don’t have a suitable device, and we’d like to use BYOD even if it’s in a subset of that study. You can then capture real-world—if you want to call it that—BYOD data and compare it to traditional provisioned-device eCOA data to show that you’re capturing comparable data. I think that would be a powerful answer to this equivalence issue.

[Inaudible audience comment 23:17-23:23]

Whether you can merge and compare data captured across lots of different devices within the study. So if you have 100 participants in your study all answering a questionnaire on different devices, is that data comparable? Can you pool that data and use it to drive your endpoints, for example?

[Inaudible audience comment 23:41-44]

About equivalence, I think their actual biggest concern is the second issue, around logistics: what do you do with participants who don’t have a suitable device? They’ve explicitly said this, in a kind of unofficial capacity: if anyone’s coming to them with a BYOD plan, they want a robust answer for how you’re going to manage participants who don’t have a suitable device, because they have this concern about bias: you don’t want to be excluding patients just because they don’t have a very expensive smartphone, for example. I think it’s quite a straightforward thing to answer: you have provisioned devices for those participants. But then you do run into issues of, well, how many devices do you provision? Just because you have 75% saturation of smartphones in your target patient population, will they all be able to get the app on their phones, for example? It’s definitely not a showstopper, it’s not an unanswerable question. I do feel, though, that it’s probably a question we don’t have our heads around just yet, as in how to get that exact calculation for how many devices you might provision. But thankfully it’s not an unanswerable question.
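As a back-of-the-envelope illustration of that provisioning calculation, with entirely invented rates: a participant can go BYOD only if they own a smartphone, the phone is compatible, and the app install actually succeeds, so a provisioning estimate has to multiply those probabilities together.

```python
def provisioned_devices_needed(n_participants, own_rate,
                               compatible_rate, install_rate,
                               buffer=0.10):
    """Rough estimate of devices to provision in a BYOD study.
    Everyone who cannot complete the full own-device path (ownership,
    compatibility, successful install) gets a provisioned device,
    plus a safety buffer for spares and breakage."""
    byod_ok = own_rate * compatible_rate * install_rate
    base = n_participants * (1 - byod_ok)
    return round(base * (1 + buffer))

# Invented example: 200 participants, 75% smartphone saturation, 90%
# of those phones compatible, and (echoing the pilot, where 9 of 20
# struggled) a 55% unaided install success rate.
print(provisioned_devices_needed(200, 0.75, 0.90, 0.55))  # -> 138
```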

[Audience question 24:44-25:00]

[Did the regulators ever ask what size paper you printed questionnaires on?]

Well, I think this—not to get a bit rant-y—highlights the inherent bias against electronic. We make all these assumptions about the quality of paper; in fact, we don’t even make the assumptions, we just don’t think about it. For example, with the equivalence studies that I presented on earlier: an equivalence study is comparing paper to electronic, and we want to get a high correlation, so we want the same kind of answers on electronic as we get on paper. Why would we want equivalence with an inherently weaker form of data capture? That never made sense to me, but that’s because paper is treated as the gold standard technology.

[Audience comments 25:43-26:10]

[You mentioned that in the pilot study you were focused on user experience. Is there any concern about equivalence between the devices themselves?]

So for example, the operating system used on the smartphone? Yeah, so among the devices we used there was an iOS device—an Apple device—and there was also an Android device. And thankfully, the way you can build the app means the two versions end up looking very, very similar. So beyond some very subtle visual differences, that’s not really an issue.

[Inaudible audience comment 26:33-46]

All the back-end stuff you make pretty much the exact same.
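To make that concrete: one way to keep the back end identical across platforms (a purely hypothetical sketch, not TrialMax’s actual design) is to hold the questionnaire definition in a single shared structure, so the iOS and Android clients only decide how to draw it, never what it says.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """One diary item; clients control rendering only, never the
    wording, response options, or ordering."""
    item_id: str
    text: str
    response_type: str                      # e.g. "single_choice", "nrs"
    options: list = field(default_factory=list)

# A hypothetical two-item vaccine symptom diary definition.
DIARY = [
    Question("SYMPTOM_PRESENT",
             "Did you experience any symptoms today?",
             "single_choice", ["Yes", "No"]),
    Question("SEVERITY",
             "How severe was your worst symptom?",
             "nrs", [str(n) for n in range(11)]),
]

def to_payload(items):
    """Serialise the shared definition for both native clients."""
    return [vars(q) for q in items]

print(to_payload(DIARY))
```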

[Inaudible audience comments 26:52-27:25]

But I really feel that’s almost an educational piece: we need the participants to understand what it is we’re doing, why the data they’re providing is so valuable, and why we have reminders going off every morning. It’s really about empowering them, so they understand that there’s a reason why we’re asking for this data: it’s important data, it’s meaningful to us. And there have been concerns raised that if they’re using their own phone, they’ll also be on Facebook, they’ll also be getting phone calls, they’ll be getting notifications. But I think, again, that goes back to this inherent idea that paper is amazing. You’ve no idea what participants are doing when they’re completing paper. They could be watching television, they could be actively on the phone. So I don’t see that as an argument for why BYOD might be a challenge. I think again—

[Inaudible audience comment 28:05-28:10]

I think in a few years’ time this is going to be in a similar position to where we are with eCOA in general. eCOA has now hit the mainstream, as John was talking about earlier; very few people now think eCOA is not going to work and that we should stick with paper, that’s just a silly idea. Give it a few years and everyone will be saying, yeah, obviously BYOD, and now we have to worry about the Google chip in our brain or whatever the latest technology is going to be. So it’s one of those things where I think we just give it a bit of time, all those questions will be answered, it will be the mainstream, and we’ll have moved on to the next thing. But there are definitely things we can learn from other industries. I can understand why we set a higher bar in the industry we work in; we’re talking about life, basically, and we don’t want to get that wrong, where possible. But there’s certainly learning to be had.

[Inaudible audience comment 28:52-54]

I think there’s an absolutely gigantic amount we can learn from web interface work in general with regard to making things more engaging and more usable for participants. There are decades of work that have gone into making websites engaging, largely driven from a marketing point of view, as a way of driving people’s interaction with them. Someone mentioned Amazon; they’ve done a huge amount of work on how to make their interface usable but also really engaging, so people come back again and again. And that’s something that we as an industry have just completely ignored. We’re bringing in more and more of that; we have a really good UX team in London bringing a lot of that expertise, and we’re trying to integrate it into our systems as well. But I think there’s a huge amount of learning to be had from other technologies on various aspects of BYOD and eCOA in general.

All right. Well thank you very much for a great discussion. And I believe it’s lunchtime.

[END AT 29:58] 


Brought to you by CRF Health and the TrialMax eCOA Solution - visit www.crfhealth.com for more information