Migrating existing paper-based instruments to an electronic platform can seem daunting. Understanding the process, recognizing where to begin, and knowing what to look for in an eCOA vendor are hurdles you may experience when preparing to make the switch.
Avoid regulatory risks, patient noncompliance, and technical issues by ensuring that your migration from paper to eCOA goes smoothly. In this webinar, learn how to conquer the migration process and fully equip yourself with the best practices, tips, and key considerations for an easy migration. We'll also discuss:
- Electronic clinical outcome assessments (eCOA) and how they compare to paper
- How to create high quality electronic versions of existing paper questionnaires
- 6 Considerations for migrating from paper to electronic
Hello and thank you to everyone who’s joining us for today’s webinar, 6 Key Considerations for Complete Paper to eCOA Migration. We are welcoming today presenter Paul O’Donohoe, who is the Director of Health Outcomes here at CRF Health. He’s responsible for developing the company’s internal health outcomes expertise and supporting clients on a range of scientific issues throughout the course of a clinical trial. There’s a little bit of background on Paul up on your screen.
And then today’s webinar agenda. Today we’re going to be discussing electronic clinical assessments and how they compare to paper. We’ll also review how to create high quality electronic versions of existing paper questionnaires, and then we’ll dive into six key considerations for migrating from paper to electronic. So without further ado, Paul, you may begin.
Lovely. Thank you kindly, Jackie, and to echo Jackie's welcome, thank you everyone for taking the time to join us today on what's a rather blustery day in London. I hope it's not too miserable in your part of the world. And as Jackie said, we're going to be focusing today on migration, largely the migration of paper-based questionnaires onto electronic platforms. And I'm hoping most people on the call today don't need to be convinced of the importance of eCOA, the importance of the electronic version of clinical outcome assessments. They're rapidly becoming the preferred and mainstream method of capturing clinical outcome data in clinical trials, preferred by patients but also by study teams and regulators. And this is really driven by a host of benefits. Edit checks eliminate missed, illegible, or out-of-range data. There's increased site and patient compliance through the use of reminders. Data becomes available much more quickly, obviously, because the system tends to be online. And overall, it feeds into improved data quality. That's the fundamental benefit electronic data capture boils down to: this issue of data quality, and the fact that you get better data quality with electronic capture of these questionnaires.
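The edit-check idea mentioned above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the 0-10 pain scale and the messages are hypothetical:

```python
# Minimal sketch of an eCOA-style edit check: a response must be
# present and in range before the entry is accepted, so missing or
# out-of-range values never reach the database. The 0-10 pain scale
# and the messages are hypothetical examples.

def edit_check(value, lo, hi):
    """Validate a single numeric diary entry; return (ok, message)."""
    if value is None:
        return False, "A response is required before you can continue."
    if not (lo <= value <= hi):
        return False, f"Please enter a value between {lo} and {hi}."
    return True, ""

print(edit_check(None, 0, 10))  # missing response is rejected
print(edit_check(12, 0, 10))    # out-of-range response is rejected
print(edit_check(7, 0, 10))     # valid response is accepted
```

A paper form can only catch these problems at source data verification, long after the visit; the electronic version refuses the entry at the moment of capture.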
There’s obviously a host of different modalities. We’re going to be focusing on screen-based modalities today obviously. There are systems such as IVR, interactive voice response systems, that are audio-based systems. I’m really going to be focusing on screen-based systems, so things like obviously PC or laptops, things like tablet computers and hand-held devices. And everything I’m going to talk about will really apply both to traditional provisioned modes of data capture—so provisioned being you’re providing patients or sites within the clinical trial a device on which the software of the questionnaire is loaded, typically locked down, so the only thing that the patient or site are able to interact with is the actual questionnaire. This is versus this buzzphrase, which I’m sure many of you are familiar with, bring your own device, BYOD, which is the idea of taking advantage of the patient’s own hardware by for example providing a web-based system or an app-based version of the questionnaire they would download onto their device and interact in that manner. Everything we’re going to talk about can pretty much apply equally to that traditional provisioned model and the newer BYOD model as well.
So despite this increasing uptake we're seeing in electronic data capture in clinical trials, the technology has really only started to come into its own in the last few years, realistically only in the last ten years. Obviously the traditional incumbent technology of the previous century was paper. And this means that the vast majority of questionnaires that exist, the vast majority of questionnaires that have already been developed and are traditionally used in clinical trials, were developed on paper. That was the technology of the time, and so that is the technology these questionnaires were developed on. And obviously there are a few fundamental differences between paper and screen-based questionnaires.
So the question really is, where should study teams start if they're ready to make the switch from paper to eCOA? If you have paper questionnaires, and you recognize the benefits that electronic data capture brings, what are the considerations you need to make for moving into a more electronic world?
And we've kind of broken this down, slightly artificially, into six considerations. There's a lot of overlap between what I'm going to be talking about here, so some considerations tend to bleed into more than one area. But they roughly fell into six main considerations, those being regulatory, which is obviously a key starting point when one is considering how best to capture data in a clinical trial. But then we're also going to talk a bit about instrument author engagement and considerations for licensing questionnaires, localization, patient usability, and design. We're going to talk briefly about all of these particular aspects, but again just reiterating that there is a lot of overlap between these categories and they will tend to bleed into each other.
So to start with regulatory, which is often one of the main concerns, particularly for sponsors, when I'm talking to them about the regulatory acceptability of eCOA, of electronic data capture. As I said at the beginning, it really is starting to be the preferred mode of data capture for a whole host of stakeholders, including the regulators, despite the fact that they haven't necessarily come out and explicitly said that electronic data capture is their preferred mode of data capture, with the possible exception of a recent FDA guidance on ulcerative colitis, which did reference the fact that using electronic diaries was probably to be preferred. More broadly, their preference and attitude towards eCOA is really driven by the ALCOA standards: the fact that they are looking for data to be attributable, legible, contemporaneous, original, and accurate. That requirement applies to all modes of data capture. But when one digs into it, the mode of data capture that allows the most control over those five categories (attributable, legible, contemporaneous, original, accurate) is really electronic data capture, and that's recognized by the regulators. So they really are fans of electronic data capture. Moving beyond the question of the regulatory acceptability of eCOA, what guidance have the regulators provided when it comes to migrating a paper-based questionnaire onto an electronic platform? Well, as with most things in relation to the regulators and their thoughts on COAs and patient-reported outcomes in general, we look to the 2009 guidance for industry. Within that guidance they do make a brief reference to electronic capture. The guidance is largely modality agnostic, but they do make a brief mention in regards to modifications to an existing instrument.
So if you modify an existing instrument, the regulators suggest you demonstrate that the instrument is still adequate, so still fit for purpose. And they actually explicitly come out and say that modification could include a move from paper to electronic. So you have within this guidance the regulators saying, if you move from paper to electronic you might need to demonstrate that this "new" version of the questionnaire is still adequate. It doesn't necessarily tell one how to go about doing that, but before we dig into a bit more detail about what one might need to consider to demonstrate this adequacy, I wanted to open this up to a poll, to ask when people on the line feel it is required to demonstrate equivalence between paper and electronic versions of questionnaires.
So if you want to click on all that apply on your screen in regards to when you feel it's required to demonstrate equivalence between paper and electronic versions of questionnaires. There are a few different response choices. Do you think it's required when going from paper to electronic versions of questionnaires in general? When an instrument owner requires it? When a questionnaire is being used electronically in a Phase III clinical trial? When you're moving between electronic devices, so you have a validated questionnaire but you're moving to a new electronic device, for example? Or when you're introducing some kind of change to the questionnaire, such as changing the wording from "circle" to "select"? So when do you feel this equivalence testing is required?
Okay, so results coming in there. Let me skip to the results. And anyone who is familiar with this area will probably recognize that that was really an unfair trick question, because in the favourite phrase of outcomes people everywhere who work in this field, it really depends. It really depends on the specifics of the situation. And I personally would say the only situation among those options where it really is required is when an instrument owner demands it. If an instrument owner is insisting that you have to do some kind of equivalence testing, then you don't have much choice. If you actually want to use that questionnaire in your clinical trial, then you need to meet their requirements for using it. All the other options, I would claim, are optional.
And really it comes down to this: from a regulatory point of view, you are only really required to demonstrate equivalence (or at least it's best practice, and it's what we recommend to our sponsors) when questionnaires are supporting a label claim. So when a clinical outcome assessment or patient-reported outcome is capturing data that's going to be submitted to the regulators in support of a label claim. At that point I would be flagging for a sponsor that you really need to be considering this equivalence testing piece. In all those other situations, there's definitely an argument that in certain cases you might want to do the equivalence testing. But in regards to whether it's a regulatory requirement, it really only applies when the questionnaires are supporting label claims. And I think that's something that creates some confusion within the industry, this perception that there's a broad requirement to always demonstrate equivalence when one is using an electronic version of a questionnaire. I just want to really flag that this is not the case.
So I mentioned that the FDA flagged that you might need to demonstrate that an instrument is still adequate. And as per usual they don't actually tell you how to go about doing that. But the ISPOR ePRO Good Research Practice Task Force paper came out shortly after the PRO guidance to try and answer this question of how one demonstrates equivalence between paper and electronic versions of questionnaires if needed. And their key message was that you really want to ensure that the data you capture on the electronic version of a questionnaire is equivalent to, or ideally superior to (so with higher reliability than), the original version of the questionnaire. You obviously don't want to be making an implementation that captures bad quality data, undermining the whole point of what we're trying to do. And so your new version of the questionnaire should, at minimum, be capturing data of equivalent quality. But ideally you want superior quality; as we touched on at the very beginning, eCOA allows you to capture superior quality data because of things like edit checks and time stamps, etc. And depending on the specifics of your study, largely this piece around when the data is being used to support a label claim, you might need to provide additional evidence to demonstrate the adequacy of the different versions of the questionnaire.
This is the famous table from that paper, Table 1, which tries to define the different levels of change that might occur when you go from a paper version of a questionnaire to an electronic version. They define three levels of change that could occur: minor, moderate, and substantial. We largely ignore the bottom level, the substantial level of change: basically, where you're ending up with a whole new questionnaire and have to go back to the beginning again to demonstrate its psychometric properties, because you've made so many changes.
Typically we're really focusing on the first and second levels, the minor and moderate levels of change. And in reality, the vast majority of what we do falls within that minor level of change; 99% of the equivalence studies I've been involved in at CRF Health have fallen within that minor level of change. Sometimes we do see instrument authors requiring that you treat the change as a moderate level of change, where you'll be demonstrating the statistical equivalence piece. And again, if that's a requirement for using a questionnaire, then that's a requirement you're going to have to meet. But the vast majority of the time we're dealing with that minor level of change.
So what does one do when you fall within that minor level of change? Well, the ISPOR task force paper suggests you need to consider doing usability testing and cognitive debriefing. These two tests would typically be carried out at the same time with the same patient, rolled into a single interview, for example. Typically you're looking at five to ten representative patients, that is, patients that represent those who would be using the questionnaire on the device in your actual clinical trial. And what you're really trying to do is, first of all, demonstrate that the patient can actually use the hardware and software in the way that's intended. Can they interact with the hardware and software, can they turn it on, can they log in, can they enter their answers, can they navigate from screen to screen? And the second bit, the more cognitive debriefing focused piece, is really focused on whether they interpret and respond to the questions in the same way on paper as they do on the electronic version. So you're really looking at the conceptual equivalence, to demonstrate that patients aren't interpreting things differently because the questionnaire is being administered on a new modality. And obviously you want to find that patients aren't interpreting it differently, that they will respond in the same way on an electronic platform as they would on the paper-based platform.
So that's just a high-level look at the regulatory considerations when one is going from paper to electronic. So let's move on to consider instrument author engagement. And this is something that I commonly see forgotten about until really the very last minute. In the rush to finalize the protocol and get questionnaires defined within the protocol, it's very often forgotten that engaging with the owners of these questionnaires at an early stage can be of great benefit, particularly because some instrument owners actually have pre-defined requirements for using their questionnaire electronically. Two of the most commonly cited examples are the EQ-5D and the SF-36. So EuroQol and Optum have specific versions of those questionnaires for deployment on different electronic platforms. And so you need to ensure that you get their feedback on exactly how one implements those questionnaires electronically. You can't just take the paper-based version of the questionnaire and implement it on an electronic platform; they have specific versions of the questionnaire to be deployed on electronic platforms. This is the exception, I will say; it's definitely not the rule. And very often instrument owners, to be honest, haven't even considered how best to implement their questionnaires electronically. And sometimes you even need to take the time to work with instrument owners to make them comfortable with what you're looking to do with their questionnaire, to really share with them the benefits that electronic data capture brings, and to reassure them that you're not going to harm the behaviour of their questionnaire: that assuming you're following some of the best practices that we're outlining in this presentation, you're really maintaining the quality of their questionnaire, and you're not jeopardizing the quality of the data you're going to be capturing. The earlier you can start this process the better.
Very often a lot of these instrument owners are academics, so they tend to work on a different timeline to the industry. And so, just bearing this in mind when you’re identifying questionnaires for your clinical trial, that sometimes they have specific electronic versions. And sometimes you do need to provide a certain amount of additional reassurance to instrument owners around how the questionnaire is going to be used and implemented electronically.
Very closely related to this is obviously the topic of licensing, and again this is something you often see left to the very last minute, and sometimes not really even considered, to be honest. A lot of these questionnaires, these validated questionnaires that already exist, have a copyright holder. And as such, you need to obtain permission to actually use the questionnaire in a clinical trial. Sometimes that requires nothing more than dropping someone an email, but very often it involves signing a license and paying a fee of some description. Again, this is something we always advise one start as early as possible. It's also during this licensing process that you might get insight into any requirements the instrument owner has for deployment electronically, whether it's doing some level of equivalence testing or something like a screen review process. EuroQol and Optum, for example, like to review all the translated screens before you go live when you use their questionnaires. That's something you'll discover during the licensing process. So the sooner one can complete this process, the sooner one can begin migrating from paper to electronic, and the sooner one gets insight into any additional requirements.
This licensing process can sometimes be a bit challenging and confusing, so it’s definitely something your vendor should be able to support you either directly or via a third party.
So we’ve talked to our instrument owner, we’ve got our licenses in place, we know that we now have permission to migrate this questionnaire onto an electronic platform. But an additional point for consideration is localization.
We’re typically running very large globalized studies, and for these clinical trials it’s very rare that we’re just doing a study in a single country. And so we need to generate translated versions of the questionnaire as well. And again, there’s a best practice methodology for translating these questionnaires so that you’re not impacting the quality and you’re not losing any of those important wording considerations from the original questionnaire. And so the ISPOR best practice guidelines really focus on ensuring the clarity, the cultural relevance, the comprehensiveness, and really the key point, maintaining the original concepts of interest of your questionnaire when you're translating. And this applies irrespective of modality, that you need to follow this best practice for translating a validated questionnaire. Otherwise you risk your data not being accepted by the regulators.
This can be quite a lengthy process depending on the specifics of the language and the questionnaire being translated. We talk about ballparks of 8 to 20 weeks, depending. Obviously that's a hefty amount of time, and so you want to work closely with your eCOA provider to be sure that's built into the timelines for the study. Typically we stagger our work on specific languages, so we'll focus first on languages that have an earlier IRB submission date, and that allows us to spread out the work of getting screens localized and ensure that we have localized screens in place when they're needed for a specific IRB submission. So again, this is not something that necessarily has to be worried about in regards to the specifics of running the process, because there are vendors out there who specialize in doing this work. I'm really just flagging it up for the potential impact it can have on timelines. Think about this as early as possible: once you identify the questionnaires you want to use in your study, make sure you understand which countries and which languages you're going to be deploying in, so you can start the localization process as soon as possible.
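The staggering described above amounts to scheduling each language's translation against its IRB date. A toy sketch, with hypothetical dates and languages, using the 20-week worst case of the ballpark mentioned above:

```python
# Toy sketch of staggering localization by IRB submission date:
# sort languages by submission date and back-calculate when each
# translation must start. Dates and languages are hypothetical.
from datetime import date, timedelta

LEAD_TIME = timedelta(weeks=20)  # worst case of the 8-20 week ballpark

def translation_start_dates(submissions, lead=LEAD_TIME):
    """Map each language to the latest date its translation can begin,
    ordered by IRB submission date (earliest first)."""
    ordered = sorted(submissions.items(), key=lambda kv: kv[1])
    return {lang: irb - lead for lang, irb in ordered}

submissions = {
    "Japanese": date(2024, 6, 15),
    "French": date(2024, 2, 10),
    "German": date(2024, 3, 1),
}
for lang, start in translation_start_dates(submissions).items():
    print(lang, "translation should start by", start)
```

The ordering is the point: the earliest-submitting countries dictate when localization work has to begin, often well before the study is otherwise ready.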
The next consideration I just wanted to talk about briefly was the patient usability point of view. This, when it comes to electronic data capture, is something that comes up more so than when one is discussing paper versions of questionnaires, which is something I find particularly fascinating to be honest.
Paper completion of questionnaires, from a usability point of view, is not really something people consider. In fact, when people hear about eCOA for the first time, their automatic assumption is that it's going to be challenging for patients, depending on the patient population of course. Elderly patients are often the ones flagged, with real concerns about whether they're going to be able to use the system. And it's interesting, I personally find it very interesting, that we never had that conversation about paper versions of questionnaires, where we're asking people to manipulate a pen or pencil, for example, which for certain populations can be extremely challenging, versus interacting with a touch screen, where one can use one's fingers, or one's knuckles for example, to just lightly touch a screen and provide a response, which depending on the situation could be far easier compared to a paper-and-pencil implementation. But still we need to bear usability in mind. Patients, or end users, are our ultimate customers. If they're not comfortable using the system, it's going to impact their experience, and we want to reduce burden on patients as much as possible. It also risks impacting the quality of our data. With things like diabetes-related vision problems, I think we can all understand why that might have an impact from a usability point of view; we might need to look at increasing the font size of our implementation. And there are also considerations around things like Parkinson's, where patients sometimes have tremors which might make it very tricky for them to hit a very small point on a screen, for example, so you might need to consider making buttons much larger for that particular patient population.
Other considerations that are kind of independent of modality are things like the length of the questionnaire, and the length of time it's going to take to complete. This is really something that needs to be considered at the protocol stage: what is it we're asking patients to do here? Is this a large burden on patients in regards to the amount of time and effort we're asking them to put in? You do very often see protocols which throw anything and everything at the patient in the hope that something will stick, which is really not the preferred approach. You want to be very targeted in the data you're capturing, trying to understand exactly what the question is you're trying to answer and structuring your patient-reported outcome strategy around that, so you can be as focused as possible and really keep to an absolute minimum the amount that you're asking patients to do. Obviously there are also considerations around where the data collection is actually going to happen. It's very different asking a patient to complete a questionnaire every few months at site, when they're surrounded by site staff and clinical nurses for example, versus having a patient at home completing a morning diary every morning for six months, where they mightn't have that support. And so you want to make those kinds of systems as intuitive as possible, so that they guide patients through exactly what they need to do. The key guiding tenet for usability that I hear my usability colleagues talking about is: don't make the patient think in any way whatsoever. They should just flow through the interaction with the device and the questionnaire, so they can really focus on providing information on how they're feeling.
So very closely related obviously to that usability point of view is that design piece, how we design the questionnaires that we take from an A4 piece of paper, for example, and move them onto a hand-held device or a tablet device. And there’s probably three key considerations when one is designing an electronic version of a questionnaire. And that comes down to wording, the wording that you’re using in the questionnaire. The fundamental issue of layout and space, how it’s actually placed on the screen, how the questionnaire is placed upon the screen, and how you take advantage of the space you have. And then this quite specific consideration around scaling of graphics and other elements that you might see in the questionnaire.
And so this wording issue is something that comes up quite commonly and is a hangover from the paper days, when instructional text might use very paper-specific wording. So it's very common to see things like "please circle" or "please tick" or "please make a mark in the box", etc. This is obviously appropriate on a paper version of a questionnaire but doesn't necessarily make sense in a touch-screen world. So it's worth reviewing the questionnaires you're going to be implementing, if there isn't an electronic-specific version (such as the EQ-5D and the SF-36 already referenced) and if it hasn't been designed to be modality agnostic. That's something we're seeing more and more with questionnaires being developed nowadays: they're designed to be as agnostic of modality as possible in regards to their wording. But if it is a questionnaire that uses this specific paper wording, you might need to consider making an update. Typically we suggest updating to "select", which is a nice agnostic word that can be used across a range of different platforms, whether touch-screen or paper, and even IVR systems can use "select". But again, this is a decision we recommend be made as soon as possible, for a number of reasons. It's always important to confirm with the instrument owner any wording changes you're going to make and to ensure they're comfortable with those changes. But it's also worth bearing in mind that it has an impact on translation. If there are existing translations of the paper version of the questionnaire and you make this wording change, your translations are no longer up to date, and you might need to make those minor updates across all your translations. It is a minor update, it's worth highlighting, and it's very straightforward for translation agencies to make that change; it's sometimes referred to as a tick-for-tap update.
Typically it only takes a few days, and you don't have to go through that full linguistic validation process. But it's still an additional impact to your timeline that you need to consider and bear in mind. So it's worth deciding early on whether you're going to make these wording updates. It's worth highlighting that you don't have to make this update. This is a nice-to-have, and I would go as far as to say it's probably preferred. But we've deployed countless studies where we've kept the paper wording, whether because the instrument owner wanted us to keep that wording or because there wasn't enough time to make the updates, etc. And it never causes issues. Patients understand that if it says "please circle" but they're presented with select boxes, they're in fact being asked to interact with the select button. So it's not a disaster to have paper wording in there, but it's worth considering whether you want to make that update as early as possible in your study.
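To illustrate the scale of that wording update, a toy sketch of sweeping a paper-specific instruction across locale strings. The locale data and substitution pairs are hypothetical illustrations, not validated translations; in practice a translation agency makes and verifies these changes:

```python
# Toy sketch of the "circle" -> "select" wording update applied
# across existing locale strings. All strings here are hypothetical
# illustrations, not validated translations.

PAPER_TO_SCREEN = {
    "en": ("circle", "select"),
    "de": ("einkreisen", "auswählen"),  # hypothetical German pair
}

def update_instruction(locale, text):
    """Swap the paper-specific verb for its screen-friendly equivalent."""
    paper, screen = PAPER_TO_SCREEN[locale]
    return text.replace(paper, screen)

print(update_instruction("en", "Please circle one answer."))
# -> Please select one answer.
```

Even for a change this mechanical, the instrument owner should sign off and each language should be re-checked, which is why it still adds days to the timeline.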
Another key consideration is the actual layout on the screen. We're seeing more and more a move towards deploying questionnaires in a one-item-per-screen layout, no matter what screen size of device we're using. Obviously on a hand-held device you're pretty much limited to one item per screen anyway. But we're also seeing this more and more on tablet devices and in PC implementations: instrument owners who are particularly mindful of electronic data capture are requesting, and in fact requiring, a one-item-per-screen implementation. And this is really driven by the desire to reduce variability between devices. Going back to that example of hand-held versus tablet: on the hand-held device you have one item per screen, while on a large tablet device you could in theory replicate paper almost exactly, with multiple items on the screen in a similar way to paper. But that means there is quite a difference visually between your hand-held and your tablet implementations. And that variability can extend between different tablets; obviously there's a range of different tablet sizes out there, which impacts the number of questions you might be able to get on a single screen. This matters particularly as we move more and more into a BYOD world, with patients using their own devices, which opens up the possibility of having a host of different device sizes within your clinical trial. Reducing that variability between devices is something to be desired; within clinical trials we're constantly fighting against variability and attempting to keep it to a minimum as much as possible. And so while it's certainly not enshrined as a gold standard, it's certainly moving towards a best practice that questionnaires should, to a greater or lesser extent, be implemented as one item per screen.
Again, this is something where we see great variability in instrument owners' requirements and in sponsor requirements. It also depends on the length of the questionnaire, for example. You might want to fit more questions on a screen just to try and keep the burden on patients a bit lower, so they don't have to keep moving between screens. But for something like the SF-36, which has 36 questions, one of the owner's requirements is that tablet-based versions of the questionnaire are implemented as one item per screen. And again, that's driven by this idea of reducing variability between devices. So it's very much an open area for discussion at the moment, but I wouldn't be surprised if we see more and more instrument owners requiring it in the future.
Also worth highlighting: localization is key at this particular point. The length of the same question in different languages can vary dramatically; you can double the amount of space you need going from one language to another. And that's something you really need to be considering up front and building into the system from the very beginning. Just because your question fits nicely in English on a screen doesn't mean every translation will. To use the very stylized example you're seeing on the slide: "Did you have chest discomfort today?" That English sentence fits fine on that particular device. But if you tried translating it into Russian, for example, you would very rapidly run out of space and not be able to use that particular layout or that particular font size. So that needs to be considered really early on and built into the layout and design from the very beginning, to give you enough space for localization. Typically we design and build the English version of the questionnaire first, but we need to build it in such a way that it allows for localization, for the implementation of translations, so that you've allowed enough space and it doesn't create an issue later: sentences don't get cut off, and things don't get too cramped. Your eCOA provider should be considering, from the very beginning, the localization that might be needed within the study.
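As a rough illustration of why translation length matters for layout, here is a minimal sketch assuming a fixed average character width. A real layout engine would measure rendered glyph widths for the actual font, and the function name, pixel values, and the German rendering of the slide's example question are all illustrative assumptions:

```python
def estimate_lines(text, screen_width_px, avg_char_width_px, padding_px=16):
    """Estimate how many lines a question needs via greedy word wrap.

    A crude approximation, but it shows why a question that fits in
    English may overflow in a longer translation.
    """
    usable_px = screen_width_px - 2 * padding_px
    chars_per_line = max(1, usable_px // avg_char_width_px)
    lines, current = 1, 0
    for word in text.split():
        needed = len(word) if current == 0 else current + 1 + len(word)
        if needed <= chars_per_line:
            current = needed
        else:
            lines += 1
            current = len(word)
    return lines


english = "Did you have chest discomfort today?"
german = "Hatten Sie heute irgendwelche Beschwerden im Brustbereich?"
print(estimate_lines(english, 320, 10))  # fits in 2 lines on a 320px handheld
print(estimate_lines(german, 320, 10))   # the longer translation needs 3
```

Running the same check over every target language during design, rather than after the English build is frozen, is exactly the "build localization in from the beginning" point above.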
A very specific example that you mightn't see in many questionnaires, but that is also worth bearing in mind, is when there are any kind of graphics, diagrams, or tables, or even more interesting elements like pictures and videos, which we're starting to see a bit more of as people take advantage of the technology to make these questionnaires more engaging, but also easier and more informative for patients to use. These kinds of graphic elements obviously need to be scaled appropriately for the device and the screen size that you're using. And while a lot of that can be done automatically by the system, you also need to be very comfortable that things will scale correctly across the whole host of devices that you might be using in your study, and again I'll use the example of BYOD. For pictures and drawings this might be particularly pertinent, because if you're asking patients to compare something that's happening to them to a drawing, whether that be a rash, or, another example I've seen, bleeding, so the amount that they might have bled, and you're showing a diagram as an example, then the size of that diagram varies between devices and could theoretically impact how they respond. So it's just worth bearing in mind. A lot of this can be done automatically, and an eCOA vendor should be able to advise you on how best to deal with it. But if you are dealing with a questionnaire that has drawings, diagrams, or any of these elements in it, you need to consider how they're best scaled across the different devices.
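The usual mechanics behind such automatic scaling are to fit the image within the target screen while preserving its aspect ratio, so a reference diagram isn't distorted on any device. A minimal sketch, with a hypothetical function name and an assumed cap of half the screen:

```python
def scaled_size(img_w, img_h, screen_w, screen_h, max_fraction=0.5):
    """Scale an image to fit within a fraction of the screen,
    preserving aspect ratio so a reference diagram (e.g. a rash
    illustration) keeps its proportions on every device."""
    box_w = screen_w * max_fraction
    box_h = screen_h * max_fraction
    scale = min(box_w / img_w, box_h / img_h)
    return round(img_w * scale), round(img_h * scale)


# The same 1000x500 diagram on two different device screens:
print(scaled_size(1000, 500, 800, 600))  # tablet: (400, 200)
print(scaled_size(1000, 500, 360, 640))  # phone: (180, 90)
```

Note that aspect-ratio scaling keeps the image undistorted but not the same physical size across devices, which is precisely the residual variability the speaker flags for comparison-type questions.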
So, a summary of the steps to take to successfully migrate from paper to electronic, taking into consideration all those things we've covered, those six key considerations. What are the steps one should take to ensure one's migration is as successful as possible? The key starting point is contacting the instrument owner, first of all to get permission to use the questionnaire, but also to confirm whether they already have any electronic requirements for it. Do they have a set wording, a set layout, and so on for how one is meant to implement the questionnaire? If they already have these specific requirements in mind, that kind of throws a lot of these design questions out the window. So step one is to check with the instrument owner whether they have any specifications for electronic implementation. If not, take a look at the original, normally paper, version of the questionnaire and start thinking about any changes that might need to be made, as we touched on already: minor wording changes to the instructions; how you're going to handle a one-item-per-screen implementation, if that's what you're going to do; or, if you're going to have multiple items on a screen, where the breaks are going to be and how many items you'll have per screen. You need to document those requirements in detail so that you have a good understanding of how the migration is going to happen. Off the back of that, your eCOA vendor can create screenshots and describe the flow of the questionnaire on the device. This is also a point where you can get instrument author input: you can share screenshots with them and get their feedback on the layout, and some instrument authors require this.
At this point, in parallel, I would suggest you also need to be getting your head around localization and linguistic validation requirements: understanding the countries you might be deploying in and initiating that work as early as possible. If needed (if your questionnaire is going to be supporting a label claim, for example, or if it's an instrument author requirement), you need to conduct cognitive debriefing and usability testing on this initial electronic version of the questionnaire. You then need to look at those results and see if there is any feedback you need to address. To be honest, in the huge number of usability and cognitive interviews we've been involved in as a company, we've never heard back from patients that they would respond differently to the electronic version of a questionnaire compared to the paper version. And that makes sense if you think about it: why would I rate my pain differently because I'm being asked on a screen versus on a piece of paper? But some of the feedback you might hear is around usability: the font size could do with being a bit bigger; I wasn't quite sure where I was meant to be clicking on the screen; and so on. These are all things your eCOA vendor should really be getting right the first time. They should know that font sizes need to be appropriately large, and, as I referenced earlier, you don't want to be making patients think, so the eCOA vendor should be designing the system in such a way that you're not getting that kind of feedback from patients. But you need to look at those results, see if there is anything you might want to change, develop any recommendations for change off the back of that feedback, and obviously share that with the instrument owner and discuss it with them if there's anything significant. Then make those changes as needed.
Again, it's rare that there are changes that need to be made, but if you are getting feedback from patients, it needs to be taken quite seriously. And then, obviously, you get to deploy your lovely, usable electronic version of the questionnaire in your clinical trial.
And so one of the key takeaways I hope to leave you with from this presentation is the unique capabilities of eCOA technology. It's something you might have heard me say before, but I do get frustrated that we tend to treat electronic data capture as just expensive paper: to a greater or lesser extent we're replicating what's been done on paper, when there are all these other things we could be doing to take full advantage of the technology, capturing outcomes data in new and innovative ways that make things easier for the patient as well as giving better insights into the patient experience. So really take advantage of that technology as much as possible to create these good quality implementations.
Obviously there are certain limitations in place when you're migrating from paper to electronic, but it does allow you to improve your data quality, and ultimately the study outcomes and the experience of the patient within the clinical trial. And the key takeaway message, which is in no way new and applies to so many things in clinical trials and life in general, is that careful planning makes things so much easier: knowing exactly what you want to do as early as possible and planning that migration process. Particularly, and I will highlight this, if there is instrument owner involvement, getting them on board and understanding their requirements and timelines as soon as possible can save you heartache down the road, because they can have a significant say in what you can and can't do with their questionnaire. So getting that understanding, and ensuring you're clear about what needs to be done before you can deploy, is vitally important. It will save you time and money, but more importantly it ensures the patient is providing good quality data, regardless of the mode of data collection being used.
And thank you kindly. On that note, I’ll hand back to Jackie.
Thank you, Paul. I'd also like to quickly let you know that we do have a resource hub; you can see the link there on the screen. It has links to other webinar recordings we've done, as well as ebooks and short videos. So please feel free to go to the resource hub and check that out. And at this time I'll hand it back over to Paul. It looks like we have a couple of questions that have come in, Paul, so I'll leave it with you.