Even with the exciting growth in eCOA (electronic Clinical Outcome Assessments) technology adoption, some study teams still hold on to paper copies as a form of backup.
Deploying paper backups in an eCOA study frequently creates more challenges than it mitigates, resulting in annoyed sites, negatively impacted timelines, increased costs, risks to data quality and ultimately an unhappy study team.
The webinar covers:
- Why regulatory requirements for a paper backup system make the method outdated and inefficient.
- Key advances in electronic data capture and its efficacy in collecting trial data.
- And (most importantly) efficient solutions for those seeking a way out from beneath mountains of paper.
Ladies and gentlemen, you are all very welcome to another CRF Health webinar on what is a rather gray day in London town; I hope it’s a bit nicer in your part of the world. Today we are going to be talking about The Shocking Truth about Paper Backups (and How to Get Rid of Them Once and for All). As you can see, our marketing department has had a fun time with the tabloid headline generator. But I think anyone who has any experience with paper backups in eCOA studies probably won’t be too shocked about anything I’m going to present today.
Just to introduce myself, I am Paul O’Donohoe, and I’m the Director of Health Outcomes for CRF Health, based in our London office. I’m responsible for providing and coordinating scientific support within CRF Health, so supporting my colleagues but also, obviously, supporting our clients with any scientific issues that might arise throughout the course of their clinical trial. And when we say scientific issues, 90% of the time that revolves around the questionnaires being used in the study. And one of those issues is occasionally this question of paper backups.
So exactly what I want to share with you today and discuss with you today, we’re going to set the scene with just a bit of an interactive poll. I’m going to be asking you to provide some quick feedback right after this slide. And I’ll quickly cover eCOA in general but then paper backup use specifically. We’ll take a look at some of the current approaches to paper backups, how to include them in a study, an eCOA study. But then also discuss some of the challenges these paper backups can then generate. We’re going to take a quick look at the regulatory concerns and the regulatory attitudes towards paper backups, and the requirements for including paper backups in a study to ensure that they meet potential regulatory questions. I’m also going to cover some potential solutions that overcome the limitations of paper-based systems, so how one can prevent the inclusion of paper backups in your study where possible. And then obviously we’ll be opening it up to Q&A, so there’s a little chat window, you should be able to see it, I think at the bottom left of your screen. If you have any questions, just enter them there, and we’ll do our best to get to them at the end of the presentation. We won’t be keeping you for the whole hour, I imagine. But enter any questions you have and we’ll try and get to them.
So let me see if I can get this poll up. You should be seeing a poll question in front of you right this second, so if you can just click on the screen to indicate whether you have used, or continue to use, electronic clinical outcome assessments to collect outcomes in your clinical trials. Okay, so looks like—we’re just getting the last few there—looks like the vast majority of you have used electronic clinical outcome systems, which makes the following slide hopefully quite relatable for you. But for the small number who haven’t, I’ll give a bit of background as well, which will hopefully help set the scene for you.
So the next question, for those who answered yes to that initial poll, so for those who have used eCOA or continue to use eCOA: have you ever included a paper backup system in those specific studies? So have you included paper backups within your electronic clinical outcome assessment trials? Okay, interesting results coming in on that. I’m just going to leave that for a little bit longer. Okay, interesting. So almost 50/50. Now that does not match my personal experience in regards to the number of studies we see including paper backups. Getting closer to 50/50 now. That’s very interesting. Exactly 50/50, lovely. So there seems to be quite an interesting mix of people including or not including paper backups in their studies, so we might touch on some of the reasons behind that as we work our way through the slides and through the presentation. And if anyone has any interesting experiences or specific reasons why they included paper backups in those clinical trials, I’d be really interested to hear them. If you could enter them into the chat window, why you included paper backups in your study, I’d be interested to see if they match the reasons we’re hearing here in CRF for why people want to include paper backups. And maybe, if we have time at the end, we can talk in a bit more detail about some of those examples.
Okay, well thank you for that bit of interactivity. So just to move forward, I’ll give you a bit of background to exactly what we’re talking about. Obviously the vast majority of people on this call have experience with electronic clinical outcome assessments, but on the small chance that you’re not familiar with them: clinical outcome assessments, to seriously oversimplify, are basically questionnaires used in clinical trials to give us an assessment of the patient experience, how they’re feeling and how they’re functioning. Traditionally they would have been administered on paper, so paper questionnaires, but electronic clinical outcome assessments are where we administer these on, as the name suggests, electronic devices. That could be a handheld device, a smartphone of some description, a tablet computer, or a web-based system. It can also include things like IVRS telephone-based systems, and digital pens as well. But it’s very rapidly becoming the mainstream way of collecting outcome assessment data in clinical trials. This is largely being driven, certainly in my opinion, by the benefits it brings to the data you’re collecting. Ease of use is one key thing: the vast majority of participants are familiar with using touchscreen devices, and they tend to be easier to use than a paper-based system where you’re working with pen and paper. Automatic data checks and reminders tend to drive things like compliance, so you have increased compliance, with more data being provided by patients compared to the traditional paper-based system. And all of this then feeds into improved data quality, which really, in my opinion, underlies the fundamental benefit that eCOA brings: it just gives you better data quality. And that drives all kinds of advantages within the study, like having smaller sample sizes, reaching database lock quicker, etc.
So it is rapidly becoming the mainstream approach to capturing data that supports any kind of important study endpoint. And while the number of clinical trials that include eCOA, capturing the data electronically, is still in the minority, we are predicting quite significant growth over the coming years: 300% growth in the uptake of eCOA in clinical trials in the next three years. And I think that will just continue, largely driven both by sponsors’ desire to take advantage of this technology and by regulatory attitudes that make it clear they will be expecting certainly primary and key secondary endpoint data to be captured electronically in the future. So it’s an ever-increasing trend.
So you’ve decided to capture data on a smartphone or tablet device, for example. And as I mentioned as part of that poll, the vast majority of studies don’t actually include paper backups, at least the ones we see in CRF Health. But very often we do have a discussion with clients where they raise this question of, well, maybe we should include paper backups just in case. They might be a bit wary, it might be their first eCOA study, and they use this term “just in case.” And when you dig into that in a bit more detail, I think you find four main reasons underlying this “just in case” desire. Number one is that they’ve previously had a negative experience with electronic data capture, whether it be broken devices or unreliable devices. And I think we as an industry have to put our hands up and say that we have had situations in the past with devices that were unreliable or likely to break. That’s something we as an industry do have to accept: there have been negative experiences previously with eCOA. We also hear concerns from sponsors about sites being non-compliant, sites saying they’ll simply refuse to use the electronic data capture system. And similarly, sites saying they’re convinced their patients will refuse, or will not be able, to use the home-based electronic system. And I really quite strongly feel these tend to be driven by experiences with old technology. As I said, I think we do have to recognize that some of the old technology that was used was not ideal, certainly not up to the standard of the technology we’re using nowadays. Technology by its very nature tends to improve and advance very rapidly over time. So I can understand where these concerns are coming from.
If a site has an experience where a device continuously broke, they weren’t able to capture the data from participants, I can completely understand why they might be reluctant to use an eCOA system when a sponsor came to them for a new study.
We also occasionally hear this issue of the desire to allow for retrospective data entry, which wasn’t programmed into the system from the very beginning. So for example, we realize sites are maybe not completing the questionnaires when they should be completing them, so we want to extend the data entry window. And so this desire to allow some form of retrospective data entry appears after we’ve already designed the main system. And we’ll talk a bit more in detail about how one might overcome all of these challenges later on in the presentation.
So no matter what the reasoning behind including paper backups in the study is, there are three key concerns, in my experience in the health outcomes team and echoed amongst other people within the company, that these paper backups tend to generate. Number one: how exactly are you going to get that paper data into the electronic database that you have? Because you built your system for an electronic study, how do you now get that data into the database? Number two: issues around missing data from the paper backup system, and I’ll talk in a bit more detail about what that means later on. And then the regulatory piece that I’ve already mentioned, which we’ll be talking about in later slides.
So to touch on some of the current approaches to paper backups. So you’ve decided, for whatever reason, that you do want to include a paper backup system into your clinical trial. How might you do that?
So some of the ways we see within CRF Health, just to give you some context. TrialManager is our database system. So you obviously need to have a system, when you decide to include paper backups, to allow the paper data to be entered into the study database. In a standard electronic clinical outcome assessment study, which doesn’t include paper backups, the data is captured on the device, on the handheld or tablet device, and then it’s automatically sent into what we call our TrialManager system, which acts as the study database for the electronic system, and then it goes from TrialManager into the overall study EDC system. So that’s an automated flow when you’re using electronic data capture. Obviously, when you’re using paper, there’s now a break in that flow: you’re capturing the data on paper, and it’s no longer automatically sent into the system. So there are a few different approaches you can take when you do want to include a paper backup solution. The data can be entered into TrialManager, our database system, using a data clarification form, a DCF. However, this is less than ideal because the system is not designed for data entry; as the name suggests, it’s designed for data clarification, for going in and clarifying exactly what was meant by a specific data point, or, for example, if you’re seeing incorrect or duplicate data points, you might go in and clarify those using the DCF system. So traditionally the DCF system is not really designed for wholesale data entry. There are other, slightly more nuanced approaches, such as entering the data via the devices themselves: you might program a system to allow the data to be entered retrospectively from paper through the handheld or tablet device. Or you might in fact have a whole stand-alone web system.
So alongside the data capture you’re doing on a handheld or tablet device, you might program a whole separate data entry system just for the paper, which would allow sites to enter the data off the paper questionnaires into the web-based system, which would then go into TrialManager and on into the EDC system. Obviously though, this requires the functionality to be implemented up front. You need to know you want to do that, and trying to implement that system in a study that’s already gone live is less than ideal. But all of these solutions really can add to the setup and maintenance cost, and to any resourcing you need for your study. The obvious impact is on the study team members. In a traditional eCOA study the patients are basically acting as their own data entry system, because they’re entering the data into the tablet or handheld device and it’s getting sent automatically to the back-end database system. But when it’s captured on paper, that paper then needs to be transcribed into either the EDC system, the DCF system, as I explained here, or the slate, handheld, or web system, depending on which approach you take. So there needs to be some kind of dedicated resource and dedicated process to allow that data to get off the paper and into your system. That’s going to add time, that’s going to impact resources, and that’s just going to complicate what could have been a relatively clean study. So it’s important to bear these issues in mind when you’re considering using paper within a study.
So what are some of the challenges we see when paper is included in an eCOA study? I think one of the key things we run into again and again is issues around missing data. When you complete the questionnaires on the electronic system, on the tablet or handheld device, the system automatically assigns the site number, the subject number, date and time stamps, which visit it was, and the actual period or phase within the study that the data was captured in. So all this metadata, if you will, is automatically associated with the data entered by the patient or the clinician, and you don’t even have to think about it. It’s automatically pulled in and assigned to the data. Obviously, with paper data this has to be manually assigned: which site it is, which subject it is, the date and time the questionnaires were completed. And it’s very easy for site staff, or whoever you have assigned to capture and enter this data, to forget to include it. There’s a huge amount of data they’re trying to provide on paper-based systems, so some of these metadata pieces can be easily overlooked. And when that happens you’re obviously going to have to go back and query the sites, which adds additional time and burden. Another key strength of eCOA is that it automates any logic or branching within the questionnaire, and it also automates the administration schedule. The system knows what study visit the patient is on and thus what questionnaires they need to be completing, and automatically displays the appropriate questionnaire at the appropriate time. You lose all of that when you’re using a paper-based system. You’re relying on the sites to go back and check: this is visit number 4, they need to complete these two questionnaires but not this questionnaire, in this order.
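To make that difference concrete, here is a minimal sketch, with entirely hypothetical field names rather than any real CRF Health schema: an eCOA record arrives with its metadata stamped automatically, while a transcribed paper record can easily be missing fields that then have to be queried back to the site.

```python
from datetime import datetime

# Hypothetical metadata fields an eCOA system stamps automatically,
# but which a paper transcription has to supply by hand.
REQUIRED_METADATA = ("site_id", "subject_id", "visit", "completed_at")

def missing_metadata(record: dict) -> list:
    """Return the metadata fields absent from a record."""
    return [f for f in REQUIRED_METADATA if not record.get(f)]

# A device capture arrives complete, stamped at entry time...
ecoa_record = {
    "site_id": "101", "subject_id": "101-007", "visit": "V4",
    "completed_at": datetime(2016, 5, 12, 9, 30).isoformat(),
    "responses": {"Q1": 3, "Q2": 1},
}

# ...whereas a paper transcription can easily omit fields,
# each of which becomes a query back to the site.
paper_record = {"site_id": "101", "responses": {"Q1": 3, "Q2": 1}}

print(missing_metadata(ecoa_record))   # []
print(missing_metadata(paper_record))  # ['subject_id', 'visit', 'completed_at']
```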
And obviously this is lost when you’re using a paper backup system, and you then run the risk of either administering a questionnaire you didn’t mean to administer or, maybe worse, not administering a questionnaire you wanted to administer. You also lose all that lovely branching logic you get with electronic systems, where, you know, if you answer yes to Question 4 you skip to Question 8; all of that automation is lost with paper-based systems. So again, you run the risk of either having patients provide data they don’t need to provide or missing questions they should have provided answers to, and so you’re going to have missing data and gaps within your database. Somewhat ironically, one of the most challenging things can in fact be the extra data you capture from patients. One of the challenges we see with paper-based data capture is people providing, first of all, out-of-range answers, so not liking the response options they’re provided and creating their own response option. But then also, for example, including names, initials, and additional information on the paper-based system, which obviously can’t be done on an electronic system. And so you need some kind of process for how you’re going to deal with that additional information. Sites are going to have to strip that data from the system because, obviously, that’s not something the sponsors and vendors want to be seeing. So that tends to be one of the things that get overlooked when we think about paper backups: there’s all this missing data you run the risk of having an issue with, but then sometimes we also have an issue of extra data, which is almost as bad.
So somewhat related to the issue of missing data is the more fundamental issue of how sites get their hands on the paper-based versions of the questionnaires to use as paper backups. If you are going to use paper backups, there has to be a very clear, detailed, and robust way of getting those paper backups in front of the sites when they need them. They have to be well-designed versions of the questionnaires, they have to be the right version of the questionnaire for your study, they have to be clearly and consistently laid out for all sites, and you need to have a way for sites to get their hands on them. Unfortunately, we’ve sometimes seen cases where sites have just taken copies of the questionnaires included in the appendices of a protocol, for example. So they just run off copies from the back of the protocol and administer those. Or, potentially even worse, they’ve just found copies on the internet that they then printed out. Obviously, this raises all sorts of concerns around, number one, copyright issues, but also, number two, the number of different versions of questionnaires that are very often floating around on the internet. First off, there may just be different versions of different questionnaires, whether it be short form versions or disease-specific versions. So it’s not always obvious which version of a questionnaire you might want to use in a study, and if you just jump onto Google and try to quickly find a version of a questionnaire, you might print off the wrong one. Beyond that, there’s obviously the whole quality control issue of what’s on the internet. Depending on the instrument owner, they very often don’t have readily available versions of their questionnaire that one can simply download off the internet, because there’s licensing that has to be completed.
You have to have a license in place to use that questionnaire, after which they will send you the official version. So any questionnaires that are available online can very often have been put up there by people who weren’t aware they were copyrighted. They might have modified them for their own study, they might have changed things they felt needed to be changed, so it’s not an official version of the questionnaire. If that can be printed off by sites and provided to participants to complete, that really does call into question the veracity of the data you’re capturing, because it might be asking different questions, or wording the questions differently, so it really threatens the quality of the data you’re capturing on paper. And then we can run into issues of how those reproductions of the questionnaires are made. If there’s a VAS (visual analogue scale) included on the questionnaire and it’s a bad photocopy that changes the length of the VAS, does that impact the quality of the data you’re capturing? So that’s something that needs to be borne in mind: if you are including paper backups, how are you getting those in front of the sites? Because it can create challenges when they try to create their own paper backups.
Fundamentally, I really feel the takeaway message, and I really feel this is the most important slide of the whole presentation, is that paper and electronic systems differ in the kind of data you’re capturing. So when you try to fit one into the other, you run into all kinds of issues around missing data and different data, and the two just don’t slot easily into each other’s database. And the key challenge we see again and again—and I’ll reiterate this near the end of the presentation—is that very often the sponsor is sure they won’t need to use the paper backup system; it comes back to this “just in case” piece. However, when presented with a choice, we very often see that sites will default to the paper-based system, just because it’s what they’re most familiar with, or again because of that bad experience ten years ago. The PI had a bad experience with an electronic device and just decided they wouldn’t use it for their study. And so when a site is made aware that paper backups are available, they will just default to using them, even if the system is working perfectly, even if the hardware is working perfectly; it will just be the default choice. And once sites have defaulted to a paper backup system it can be extremely challenging to get them back onto the electronic system, because it’s what they’re comfortable with, it’s what they know.
But we’ll see a few more slides about how one can hopefully overcome some of these issues when a site does decide to default. Let’s take a look at some of the regulatory considerations around using paper backups. I think the key takeaway message is that it certainly introduces some regulatory risk, because of this issue of mixing modes and demonstrating equivalence. In the following slides I’ll go into exactly what that means in a bit more detail. But really, we find that deploying paper backups in a study frequently creates more challenges than it mitigates. And I hope that’s the message I will leave you with at the end of this presentation: any benefit it might bring to the study is very often outweighed by the burden it can create within the study. It can annoy sites, it can impact timelines negatively, it can increase costs because of all this, and it can fundamentally risk the data quality, which I think is the key challenge we see with paper backups.
The regulators haven’t explicitly provided any guidance when it comes to integrating paper backups into a clinical trial. From various discussions we’ve had with them, they’re definitely open to talking about it, as long as one can provide a good rationale for why you want to include paper backups within the study, as well as present a clear plan for how this data will be captured and analyzed. If you can provide that rationale and that clear plan, and if the paper data really is only going to make up a small percentage of your study (say there are a couple of specific sites in your global study where it’s absolutely impossible to get an internet connection or a mobile connection), then maybe there’d be scope for discussing paper backups with the regulators.
The key thing is that the data arising from paper backups needs to be tagged as such: you need to highlight the fact that it’s coming from a paper source, which gives you the ability to distinguish the different sources of that data once you’re doing your analysis later on, to really demonstrate that there are no statistical differences between the modes of administration. So you want to demonstrate that there’s no systematic bias in how patients respond on paper versus how patients respond on the electronic system, because ultimately you want to be merging and comparing all this data. If there are systematic biases within that, you obviously want to be very aware, and if these differences do show up, they can create issues with the interpretability of the data within your study. I will say that the more data is captured on paper, and the greater the relative importance of that data—so, you know, Phase III primary endpoint or safety data or risk-based monitoring data—the greater the potential for regulatory scrutiny, and the more you open yourself up to the regulators saying, well, we need to see a really good quality plan in place for how you’re dealing with this data, but more importantly how you’re going to demonstrate that there are no differences between that data and the rest. So the FDA has explicitly mentioned paper-based versus electronic data within the 2009 PRO guidance, which I’m sure the vast majority of you are familiar with.
But this is where the term mixed modes comes from: they’ve said data collection methods can include paper-based, computer-assisted (so eCOA), and telephone-based assessments, and “we intend to review the comparability of data obtained when using multiple data collection methods or administration modes within a single trial to determine whether treatment effect varies by method or mode.” So they’re very explicitly saying here: if you intend to mix modes of administration within a clinical trial, we are likely to take a look at that to make sure there’s no systematic bias within it.
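As a sketch of what that tagging might look like in practice (the field names and the toy scores here are purely illustrative, not a real statistical plan), each record carries a mode flag so paper-sourced scores can be pulled out and set against the electronic ones at analysis time:

```python
import statistics as stats

# Illustrative records: each is tagged with the mode it came from,
# so the analysis can separate paper-sourced from electronic data.
records = (
    [{"mode": "ecoa", "score": s} for s in (4, 5, 3, 4, 5, 4, 3, 4)]
    + [{"mode": "paper", "score": s} for s in (4, 4, 5, 3)]
)

def scores_by_mode(recs, mode):
    """Pull out the scores captured under one administration mode."""
    return [r["score"] for r in recs if r["mode"] == mode]

ecoa_scores = scores_by_mode(records, "ecoa")
paper_scores = scores_by_mode(records, "paper")

# A crude first look at systematic bias: compare the per-mode means.
# A real submission would pre-specify a proper equivalence analysis.
print(stats.mean(ecoa_scores), stats.mean(paper_scores))
```

The point is not the statistics (a pre-specified equivalence analysis belongs in the statistical plan) but that the source tag has to exist in the data from the start, or the comparison can never be made.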
As per the usual FDA approach, they didn’t actually tell you how one could go about demonstrating that there was no systematic bias, but this is where another ISPOR task force paper comes to the fore, this one specifically from the Mixed Modes Task Force. It describes in great detail, but very accessibly, how one could go about mixing modes within a clinical trial, and I strongly recommend anyone interested in this particular topic go and take a look at that paper, because it really outlines how one can potentially mitigate some of these concerns. But to give you a very top-level idea of how one might go about successfully integrating multiple modes within a clinical trial: obviously you start by identifying the appropriate mode for your trial. We at CRF Health obviously believe that should be eCOA, but you might have a discussion around whether you should be using a handheld or a tablet system, for example, to capture that data. Depending on the specifics of the study (where you’re capturing the data; whether patients are going to be providing data at home, just at the site, or some combination thereof), there is a discussion to be had about the appropriate mode for your specific trial. And this is when the discussion around paper backups should be had as well. Are we going to include some form of paper backup? Firstly, why? Is there actually a reason to include paper backups? And if so, how would that be included in the study? You then need to perform a faithful migration, which basically means ensuring that your implementation across all the different modes maintains the fundamental properties of the initial version of the questionnaire. Traditionally these questionnaires were developed on paper, because that has obviously been the most common technology.
So you want to make sure you don't lose any of those fundamental properties when you go from paper to electronic. But most importantly this work needs to be done before you mix within the study so that you have well-established implementation before you go live with these questionnaires.
Then you need to look at the equivalence between the modes being mixed, and I’ll happily skip over the details of this by saying: use the appropriate study design. There are various ways one can compare the equivalence between different modes of data capture, using various crossover designs, for example: having participants use one mode, giving them long enough to forget how they answered the first time, then having them use another mode, and statistically comparing the two. There are a number of different ways one can go about demonstrating that equivalence, which I’m not going to go into in any significant detail in this presentation, but I’m happy to talk to anyone after the webinar in more detail if they’d like to discuss it. And then, assuming you’ve demonstrated that equivalence, you can implement the modes in the clinical trial. So you can go forward and use those mixed modes within the study.
A couple of key points. When it comes to diaries (so completing a questionnaire, for example, every morning and every evening in an unsupervised setting), using a paper-based version is strongly, strongly recommended against, because with a paper-based diary you don’t know when participants actually completed the questionnaire. With eCOA systems you have time stamping; it’s automatically recorded when the participant completed, for example, a morning diary. With paper-based systems we’ve seen again and again that, when participants are given paper diaries they should be completing every morning, those data entry windows are typically not met: participants fill in the next week’s data on a Monday, or fill in all of their previous data just before they go in for a study visit, for example. So when it comes to diaries, it’s strongly recommended not to use any kind of paper-based system.
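As an illustration of why those time stamps matter (the 06:00 to 10:00 window and the dates here are invented for the example), an eCOA system can check each diary entry against its allowed completion window, something that is simply impossible with paper:

```python
from datetime import datetime, time

# Assumed morning-diary completion window for this example.
WINDOW_START, WINDOW_END = time(6, 0), time(10, 0)

def in_morning_window(completed_at: datetime) -> bool:
    """True if the diary entry's time stamp falls inside the window."""
    return WINDOW_START <= completed_at.time() <= WINDOW_END

# Time stamps an eCOA system might have recorded for three entries.
entries = [
    datetime(2016, 5, 9, 7, 15),   # completed on time
    datetime(2016, 5, 10, 8, 40),  # completed on time
    datetime(2016, 5, 16, 21, 5),  # backfilled the evening before a visit
]

print([in_morning_window(e) for e in entries])  # [True, True, False]
```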
As I said, that paper goes into much more detail about those various considerations, and I recommend you take a look at that if it is something you’re interested in.
Assuming again that you have decided to include paper backups within the study, here are some things to bear in mind to try to ensure your data can meet potential future regulatory scrutiny. You really need to be able to identify who is entering the data. Within eCOA systems there are specific accounts created for each user, each associated with a PIN, so you know exactly who is entering the data. With paper you don’t have that level of scrutiny, so there needs to be some way of ensuring and recording who exactly is entering the data, and which subject that data is associated with. Again, this harks back to that slide I showed: with eCOA you’re automatically getting that subject number, but you have to have some way of ensuring you’re capturing it with the paper-based system. You also need to know the original device this was coming from, the device that was used within the study that has now been replaced by the paper-based system. And wherever the data is retrospective, it needs to be very transparent within the system that this isn’t real-time data, this is retrospective data, again opening up the possibility of analyzing any differences between data captured in real time and data provided retrospectively, because there can be systematic biases within that data.
I think one of the key things to remember is that if data is transcribed retrospectively from another source, then the retrospectively entered data is no longer the source data. So when we’re using eCOA studies, when we’re using devices to capture patient data or clinician data, the source data is that data that’s entered into the device. However, when you use a paper-based system, this paper now becomes the source data. Even if you go on to enter that into the device, the paper is still the source data. So when source data verification takes place, whether it’s by the sponsor or CRA or whoever is doing that, the original data needs to be verified, not just that which was put into the eCOA system, and the original data is the paper-based data, and obviously all those systems that traditionally we’ve seen with paper studies need to be in place for storing that data, the site needs to store it in a safe place for a specific period of time.
I think an additional challenge related to this is verifying that the SDV, the source data verification, has taken place. This is typically integrated into the EDC system, but with a paper backup added into an eCOA study, where, to reiterate, the data entered into the devices is normally the source data, the physical paper now becomes the source data. So you need some kind of plan in place for how you’re going to verify that paper data, and you’re going to need a way of tracking whether that has been done as well, which can create an additional challenge for integrating paper backup systems into any electronic study.
So I’ve talked a lot about some of the challenges that paper backups within an eCOA study create. But what are some of the things we can do to hopefully remove the need for a paper backup? As I said at the start, it’s something we see in a minority of studies; there’s a very small number of studies that we run in CRF Health where we see paper backups included. Obviously that differs slightly from the experience of people on the phone, and it was very interesting to see that 50/50 split among people who have run eCOA studies and have also included paper backups. But what can we do, even in that small number we’re seeing in CRF Health, to remove the need to include paper backups in a study?
So as I said at the very beginning, the concerns really seem to be driven by concerns around device breakages, issues focused on site behaviour, and then issues focused on perceived patient behaviour, not necessarily the actual behaviour but the perceived behaviour. And I want to be very clear, I’m not dismissing these concerns, not at all. I think they are all valid concerns that need to be addressed. But we really do find they don’t tend to be borne out in modern eCOA studies. I’ll go into a bit more detail on each of those individually.
So device breakages tend to be one of the biggest reasons underlying the desire to include a paper backup in an eCOA study. And again I would argue this does hearken back to, or at least be largely driven by, experience with older technology. One of the best ways of overcoming concerns around device breakages is simply to ask your eCOA vendor for the typical breakage statistics for the device you’re using in your study. Across all devices we use within CRF Health, we typically see breakages of less than 3% within a study. So it’s a very small number of devices within your entire study which might break. This is going to vary by the length of the study, the specific devices you’re using and the number of sites you have involved, but it’s a very small percentage overall of all the devices you’ll have out in the field that might run into an issue, which should really have one questioning the value of including a paper backup system just for this small number of potentially broken devices. And even if you do have broken devices within a study, we shouldn’t just automatically go to a paper backup system; there are ways to very easily overcome the challenges of broken devices. A key thing is really encouraging the sites to contact the help desk for technical support. As soon as a site realizes there’s an issue with a device, they need to contact the help desk, because very often we find that a site that thinks it has a broken device in fact has a device that can be easily fixed over the phone. Consistently we run into devices that have simply lost their charge, for example, with sites thinking the device is now broken. This is something help desks can walk sites through to get a device back up and running.
So rather than immediately defaulting to a paper backup system they should be encouraged to contact the help desk, who can hopefully address their concerns over the phone and get them back on the electronic system straightaway. If the help desk isn’t able to help, then the sites need to be encouraged to get the non-functioning device back to the vendor as quickly as possible because this then helps us assess whether there is a systematic issue we might want to address throughout the study, not necessarily just with the device, but with what we’re asking sites to do with the device for example. So getting, as I said, sites to send back those devices as quickly as possible is very very important, because it gives a way of assessing the seriousness of the issue and if there is something we need to do to avoid this becoming more widespread throughout the study.
Another very simple way of overcoming challenges with broken devices is providing an additional device at the site. You might have more than one device at any site anyway, depending on the number of patients you’re expecting to come in, the number of data captures, and so on. But as a way of reassuring sites, making an additional device readily available on site can also help calm any concerns they have about potential device breakage. We can also provide reassurances around getting sites new or replacement devices as quickly as possible; we can make guarantees about shipping devices within a certain period of time once a site tells us they have a broken device, for example. So again, there are ways of overcoming the issue of broken devices; we don’t have to automatically default to using a paper-based system. But I would encourage you to talk to your eCOA vendors about typical device breakage statistics if that’s something you are concerned about.
I’m talking a lot about the sites, and I don’t mean to pick on sites. As I said, these are all very valid concerns that do have to be addressed. And the sites really are fundamental to the study, so we obviously want to make sure they’re completely comfortable using the system. The key way of overcoming this is no surprise: really good quality hands-on training goes so far towards reassuring sites and addressing any concerns they might have. The investigator meeting is one of the key places where we can really get sites on board with the eCOA system, because it’s where we can actually get the devices into the hands of the site staff, ensuring they get to see how intuitive, robust and user friendly these devices are. Once we get their buy-in at that point, once they feel comfortable with the system, you still need to provide support throughout the study, particularly as new site staff come on board. But if they have that buy-in already, that makes things so much easier throughout the whole study. So I really want to emphasize that good quality hands-on training is key.
I think another important consideration is not allowing sites to just default to paper, not allowing sites to decide, okay, I can’t turn on this device, I’m just going to print off a version of the questionnaire from the back of the protocol and administer it to this participant. Sites should have to explicitly ask permission to use a paper backup system. This helps you keep a sense of how many sites are defaulting to paper, which can flag up some kind of issue in the field that we can then hopefully mitigate. But it also stops sites from abandoning the electronic system over an issue that could be easily addressed if they, for example, called the help desk, because they now have to actively reach out and say, we are having this issue, we want to use paper. And hopefully that gives us the possibility of addressing that issue early on and preventing it becoming a roadblock for capturing the data electronically.
Related to this, you really need to be actively monitoring paper backup use within the sites. You want to know when sites are using paper backups, for how long, and how much data they’re capturing on paper. And you want to get them back on eCOA as quickly as possible. So having a good way of monitoring the sites, keeping an eye on how much data they are capturing on paper and getting them back onto the electronic system as quickly as possible, is key.
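The monitoring idea above can be sketched very simply: given a log of data captures tagged by mode, count paper entries per site and flag any site whose paper use exceeds a chosen threshold so it can be followed up. The function name, log shape and threshold here are all hypothetical, just to illustrate the tracking logic:

```python
from collections import Counter

# Hypothetical monitoring sketch: capture_log is a list of (site_id, mode)
# tuples, where mode is "ecoa" or "paper". Sites whose paper-entry count
# exceeds the threshold are flagged for follow-up so they can be moved
# back onto the electronic system as quickly as possible.
def flag_paper_heavy_sites(capture_log, threshold=5):
    paper_counts = Counter(site for site, mode in capture_log if mode == "paper")
    return {site: n for site, n in paper_counts.items() if n > threshold}

log = (
    [("site_A", "ecoa")] * 20
    + [("site_A", "paper")] * 2   # occasional paper use: below threshold
    + [("site_B", "paper")] * 8   # heavy paper use: should be flagged
)
print(flag_paper_heavy_sites(log))  # → {'site_B': 8}
```

In practice this kind of report would come from the eCOA vendor’s own tracking, but the principle is the same: paper use is counted per site and surfaced early, rather than discovered at database lock.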
And I mentioned this issue around retrospective data entry, where, for example, a site forgets to administer one of the questionnaires at site visit 2, but we still want to capture that data, perhaps retrospectively. If you foresee that within your study, it should ideally be proactively built into the system. You really want to know about this early on so that you can build a good quality retrospective data entry system into the overall eCOA system, rather than trying to crowbar a paper entry system in after you’ve gone live. It’s when studies have gone live and then try to add paper backups that we see the majority of these issues arise, rather than when this is actively thought about early on. These are things to talk about with your vendors right up front, at the very start of the study.
And then there’s this issue of perceived patient unwillingness that we sometimes hear about, both from sponsors and from sites: you know, this patient population won’t be able to use the system. It’s a conversation I’ve had more than once with sponsors saying, I think eCOA is great and definitely the future, but my patient population isn’t going to be able to use it. And I’ve probably had that conversation about pretty much every patient population in which we’ve successfully deployed eCOA at CRF Health. But again, not to dismiss the concerns: if patients are uncomfortable with the system, they shouldn’t be burdened with using it, and it comes back again to training. It’s not an exciting answer, but it really is so important that good quality training is provided to patients to ensure they’re comfortable using the system. And this feeds back into the sites having good quality training, because they’re the ones traditionally providing the devices to the patient. If the sites don’t feel comfortable with the system, this is going to be transmitted to the patient; the patient is going to be aware that the sites don’t like the system, and they’re going to be immediately biased against it. So if the sites are comfortable with the system, they will in turn make the patients comfortable with the system. It all comes back to this training at the sites, ensuring the sites are able to use the system easily, and then ensuring the patients receive that training from the sites and are sent home feeling very comfortable with the system. Again, you can ask your vendor for statistics on the typical populations they’re deploying eCOA in. And I think compliance statistics can be very eye opening in this regard: if you have very high compliance in a study with a specific population, that should indicate that patients are comfortable using the system and are able to provide that data.
We consistently see very high compliance, 90% plus, across therapeutic areas and across age groups within clinical trials. So we really don’t feel there’s any population that would struggle significantly with the electronic system, at least no more than they would struggle with paper.
And we very often hear that patients, once they’ve had a chance to use the system, would prefer the electronic system over the paper-based system. There are very few gold standard studies out there that have looked into this, but in all the usability studies we’ve done we have explored this particular issue, and we consistently hear either no particular preference, that they would happily use either, or that they prefer the electronic system. And there is a paper from 2006 which looked at this question of preference for paper versus electronic and found that 59% of patients expressed a preference for electronic compared to just 19% for paper. Considering that’s basically ten years old, and technology has become increasingly integrated into society since, I wouldn’t be surprised to see that number being significantly higher today.
So we need to be very mindful of patient concerns around using technology, but these concerns can be addressed by developing an intuitive and user friendly system from the very beginning, which is done by getting input from usability experts and testing the system with patients, for example. But then also by ensuring the sites are comfortable with the system, because as they’re the ones providing the electronic device to the patient, their comfort will translate into patient comfort.
So in case I wasn’t clear from all of those examples I provided, we don’t recommend paper backups. We strongly feel they can add significant complication. And this complication can impact timelines, which obviously then impacts cost. But the key thing, which I feel is sometimes overlooked, is that it can impact the quality of the data you’re actually capturing. You might feel like you’re plugging a hole for potentially missed data. I’d first of all say we don’t feel those issues will arise in the practical running of the study, and we can provide statistics and guidance around that for your specific study. But even if you do still include it, often the paper data you capture is of such low quality that it doesn’t really help anyway. That’s a fundamental thing to bear in mind with paper backups: the quality of the data is just not comparable to the electronic data you might capture.
If you are, though, going to include paper backups within a study, you obviously need to keep in mind the regulatory position: you need to demonstrate that the data is comparable, and you need to build the system proactively in such a way that that comparison can be made, so that at the end of the study you can merge and compare the data across the two different systems, assuming you have two systems within your study, and then go to the regulators and say, we did use paper backups, but the data is comparable and mergeable across all the different modes we used. So that’s the takeaway message. Please don’t do it, but if you do, talk to us early and be very careful about it.
So we’re going to open this up to Q&A, but first of all, here is an anonymous poll; the results of this one won’t be published. We just want to know whether you’d like to be contacted about using eCOA in clinical trials. If you respond yes, we’ll be able to follow up with you after the presentation, so this is just an easy way for you to let us know if you want us to be in touch. I’m going to leave that up on the screen while I take a look at some of the questions that have come in.
[Q&A section starts at 51:05]