Centralizing Project Management and User Acceptance Testing (UAT) with eCOA to Reduce Risks - Preparing for ICH E6(R2)

June 14, 2017

FULL TRANSCRIPT

Centralizing Project Management and UAT with eCOA

Speaker: Gauri Nagrani, Chiltern

MODERATOR

I’m going to introduce Gauri. Much travelled, and a delightful dinner companion last night. Grew up in Kuwait and Dubai, is that right? Gauri is Associate Director at Chiltern, she has held increasingly senior roles in clinical data management, and she leads the eClinical practice with particular reference to IRT and eCOA, is that right? And Spotfire, that strange other piece of technology. All right, so Gauri is going to talk to us about centralizing project management and user acceptance testing with eCOA to reduce risks, and also preparing for ICH E6(R2). Gauri.

GAURI NAGRANI

All right. Good morning everyone, can you guys hear me okay in the back? Okay.

So firstly, before I get started, I’d like to thank CRF Health for providing me an opportunity to speak to everyone today. Yesterday we had some fantastic presentations which touched upon a lot of important elements: patient safety, data integrity, and really overall how we can make a meaningful impact on patients’ lives. So today I’m going to bring a lot of those items back to the forefront.

So today I’m going to speak to you about centralizing project management and user acceptance testing, which I’ll refer to as UAT, with eCOA to reduce risks, and about whether we are prepared for the ICH E6(R2) addendum, which was released last year. So besides the yoga and the other good stuff there, our secret sauce within Chiltern has been to have a centralized group, so eCOA SMEs as well as an eCOA UAT team, which again is centralized, which I’ll talk to you about during this presentation.

All right, so some food for thought. I’ll talk to you about the ICH E6(R2) addendum, really at a high level, and the eCOA impact it has as we see it within our organization. We’ll talk through some of the general challenges around process, people, and technology. And then, really looking at the operational level, we have the decentralized versus the centralized model, so we’ll talk through what that means and really compare and contrast some of the benefits and the challenges there. And then lastly I’ll share a use case and some best practices.

So if we take a look at the ICH E6(R2) addendum and the eCOA impact, I think a lot of you are aware of this addendum that was released last year, but if not, no worries. Out of the 26 changes, we can classify those under two buckets. So on the lefthand side the general updates, and then on the righthand side the sponsor responsibilities. And this is how it really impacts eCOA, and again it’s how we view it within Chiltern. So on the lefthand side, the general updates, the first piece is a very hot topic: computer system validation, which I’ll refer to as CSV. So the regulators have told us we need to take a risk-based approach. Some of the factors to consider are subject impact and the reliability of the data. Additionally, we need to look at the system documentation: do we have everything in place, for example the SOPs, and are we doing what we should be doing. From a sponsor responsibility perspective, the biggest new piece there is section 5.0 within ICH E6(R2), the quality management. Again, no surprise, there seems to be a common theme of a risk-based approach, and I think Anders spoke to this yesterday too, the risk-based monitoring piece. And then one additional point is the oversight of vendors. So for example, just making sure as a sponsor you’re overseeing the vendors that you work with during the clinical trials. So just to highlight and summarize here: a common theme is the risk-based aspect.

[05:12]

From a process standpoint, if we look at the risk-based methodology, we have six different sub-steps, so really the end-to-end process. We start off with the risk identification. Under that, there are two considerations. First, the system level: do we have all the necessary SOPs, and what is covered by computer system validation versus user acceptance testing. Then, at the clinical trial level: what does our design look like, whether it’s a SitePad or a LogPad, what’s the design within those devices, and the data collection, where are we collecting that data, are we collecting within eCOA, IXRS, EDC. How can we be smart about the data collection.

The second piece is the risk evaluation, and some of you might be familiar with this; it’s a similar approach to GAMP. So looking at the likelihood, the detectability, and the impact of errors, and how those impact our clinical trials and our processes, and in particular the eCOA system.
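To make that evaluation concrete, here is a minimal sketch in Python of the FMEA-style scoring that GAMP-like approaches use; the 1-to-3 scales, the multiplication into a priority number, and the effort thresholds are illustrative assumptions, not the specific methodology described in the talk.

```python
# Minimal sketch of a GAMP/FMEA-style risk evaluation: score each risk on
# likelihood, detectability, and impact, then multiply into a priority number.
# Scales and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int     # 1 = rare ... 3 = frequent
    detectability: int  # 1 = easily caught before it matters ... 3 = hard to detect
    impact: int         # 1 = cosmetic ... 3 = affects subject safety or data integrity

    def priority(self) -> int:
        # Classic risk priority number: the product of the three factors.
        return self.likelihood * self.detectability * self.impact

def uat_effort(risk: Risk) -> str:
    # Map the priority number onto a testing depth (thresholds are assumptions).
    if risk.priority() >= 18:
        return "high: full scripted UAT including negative scenarios"
    if risk.priority() >= 8:
        return "medium: scripted UAT of the main clinical scenarios"
    return "low: rely on vendor validation plus a smoke test"

eligibility = Risk("eligibility calculation", likelihood=2, detectability=2, impact=3)
print(eligibility.priority(), "->", uat_effort(eligibility))  # 12 -> medium: ...
```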

The third is risk control. Under that, we have risk reduction and risk acceptance. So when we’re thinking about designing our SitePad or LogPad, how do we again be smart about the design, and how much risk are we willing to take. Again, that will depend on your organization’s risk appetite, and on how much UAT we want to do versus validation.

Fourth is the risk communication. One aspect there is change controls. By change control I mean, if there’s a protocol amendment or a system enhancement, how do we make sure that we’ve assessed the risk and are then communicating that risk effectively to all of the front-end users, all the necessary stakeholders. And the other key point there is really assessing the impact: are there impacts to other systems. So if there is a change in the eCOA system, how does that impact EDC, IXRS, and the other data visualization tools there.

The fifth item is post-production issues. I think what’s really important and interesting there is that we can look at any post-production issues that might arise, collect that data, and feed that back into number one. So if we’re seeing an issue with the web reports, the eligibility calculations for example, is that high risk? What’s the impact to our subjects? Should we be looking at designing the device in a smarter, more effective manner? How much UAT needs to be done? So it’s that feedback loop that goes back into number one.

And then last but not least is the risk reporting piece, number six there. What we’ve done is come up with KPIs, key performance indicators, and some other metrics, such as having existing SLAs with some of our vendors. And, to bring back the risk-based monitoring piece too, we could potentially have reports there to assess the risk from that perspective.
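As a rough illustration of what a risk-reporting roll-up like this could look like, here is a small sketch; the study records, field names, and metrics are hypothetical examples, not actual Chiltern KPIs.

```python
# Illustrative KPI roll-up for risk reporting; records and field names are
# assumptions for illustration only.
from datetime import date

uat_cycles = [
    {"study": "STUDY-001", "planned_end": date(2017, 3, 1),
     "actual_end": date(2017, 2, 27), "critical_issues": 0},
    {"study": "STUDY-002", "planned_end": date(2017, 4, 15),
     "actual_end": date(2017, 4, 18), "critical_issues": 1},
]

on_time = sum(c["actual_end"] <= c["planned_end"] for c in uat_cycles)
critical = sum(c["critical_issues"] for c in uat_cycles)
print(f"UAT cycles on time: {on_time}/{len(uat_cycles)} "
      f"({100 * on_time / len(uat_cycles):.0f}%)")
print(f"Critical production issues across studies: {critical}")
```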

So now we have somewhat of a process in place, a risk-based process. However, we talked about the process, the people, and the technology. So checkbox on the process, and now the people and the technology aspect of things. I think many of you in this room have worked with multiple eCOA vendors. And as we see within this dynamic and evolving landscape, we’re managing different vendor capabilities and processes, managing continuous new capabilities from vendors, and we’ve also seen different data collection modes: web, voice, a dedicated provisioned device, BYOD. So those are the sort of challenges I think a number of you, including us, face from a technology standpoint.

[10:10]

On the flip side, from the people standpoint, within your organizations you have many study teams across many different therapeutic areas, and the teams might be internal or through service providers. Again, the teams can vary from one study to another. And I think the most important piece here is that the study teams bring varying degrees of expertise to the table.

So what do we do? We understand the risks, we have the challenges from a people, process, and technology standpoint.

So I’m going to talk to you about two different models. The first is the decentralized model, which, based on some of the conversations that I’ve had, I think exists within many of your organizations. And if it works for you, that’s great. So let’s just look at the pros and cons of this model.

If we just focus in on the righthand side there, the study team typically would comprise the biostatistician, data manager, ClinOps, and potentially clinical supplies, which becomes a little more relevant with the IRT aspect. So some of the benefits there, and we’re going to hone in on the bolded text here: one benefit is the study team training. Because the study team includes all of the front-end users, you provide that training while they’re doing the user acceptance testing, and then if there are any questions post go-live, they’re able to address them because they’re very familiar with the device and the design. From a team perspective, the challenge is that availability may be limited. And I think all of you know this better than I do: data managers, during study startup, you have a lot of activities you’re working on. ClinOps, same thing. So, you know, the availability may be limited there. From a technology standpoint, there’s the technical knowledge: typically we’ve seen that the data managers tend to be more technically savvy than ClinOps. And then, just as an overall point, the ownership and the RACI seem to be unclear or not clearly defined. From a process standpoint, we’ve seen inconsistent requirements documentation, inconsistent testing documentation, and inconsistent reporting of findings. And really, one of the contributing attributes there is that we’re bringing in different study teams with varying levels of expertise.

And then, last but not least, our favourite topic there is the risks. So potentially reduced quality of testing due to corner-cutting, and this is again just a potential: the data managers and ClinOps are working on several different activities and now have to fit the UAT into their schedule. And then just insufficient testing coverage, again, because they might not have all the time that they need.

So at a high level, with the decentralized model, what we’re seeing is a lot more challenges than benefits.

All right, so on the flip side, the centralized model. Again, if I can ask you to focus in on the righthand side there within that model. We have different study teams, one, two, three, four there. Within each study team exist your data manager, ClinOps, all of the study management representatives. Now on top of that you have an overlaying layer: in the center is the centralized PM and the testing team. What they do, and what we found beneficial, is really bridging the gap between what do I want and how do I get there, from a technology standpoint. So that’s really where we found the benefit of that centralized team, having them collaborating and working together towards this common goal of delivering a quality system.

[15:07]

So some of the benefits that we see with this model: improved quality, because again they’re bringing a lot of expertise to the table, they’ve worked with different vendors, and, you know, the most important thing is that they’re able to proactively address issues, troubleshoot, etc. And I guess the good news for the data managers and the ClinOps folks here is that by stepping in, that centralized team is helping reduce some of their burden. But an important point is just to make sure that we are still engaging the front-end users. We’re not saying, hey, data managers, ClinOps, we’re going to just take care of everything and you can be on the side. Because they are ultimately the front-end users, we’re trying to bridge the gap but we still need to make sure that we involve them. They are accountable for the test plan when we create it, they’ve got to sign off that this is what we’re testing, and we really keep them engaged throughout the process.

We’re also seeing, as you can imagine, a dedicated team, so they have a little more time on their hands, improved communication and oversight. We’ve seen improved metric tracking because there’s improved standardization across the board with documentation requirements, findings, etc. One of the biggest advantages is the lessons learned between the vendors, between different sponsors, studies, that that centralized team brings to the table.

And then on the challenges side, one of the challenges we see is potentially longer UAT duration. That could be due to robust testing, which, if we spin it in a positive way, is actually a good thing, to make sure that the quality of the system we’re delivering is high. So this is a model that we have implemented within our organization, and we’ve found that, A, it has been efficient and, B, it has been effective for us.

All right, so this is a use case that I’ll be speaking to you about. If we look at the sample size, the sample size is n=250. The time period is five years. So 250 eCOA studies, this includes initial builds as well as any amendments or change controls to the eCOA system. So if we focus in on the graph here, on the x-axis we have quality from low to high, and then on the y-axis the ROI so really the bang for your buck. Within the decentralized model, what we’re seeing is the quality and the ROI kind of fits in the quadrant of low quality and low ROI. And as we move to the centralized model, we’re seeing that it moves to the upper right quadrant where our quality is improving as well as our ROI, which is a win-win for all of us.

We also looked at some other measures. So I spoke about KPIs; some of the KPIs that we have in place make sure that we’re meeting the UAT timelines without impacting the go-live date. And then lastly, one of the KPIs, which again is a really important KPI, is the critical production issues, which result from insufficient or non-robust testing. So the model we have is a risk-based testing approach, and I’m actually proud to say that on a lot of our studies we have the critical production issues at zero. Again, I just want to make the point there, it is critical production issues, so not medium or minor, and it depends on how you categorize what critical is. But in our definition, critical is basically impacting subject safety and data integrity.
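That definition of critical, anything impacting subject safety or data integrity, can be expressed as a simple triage rule; a hypothetical sketch, where the non-critical severity levels are purely illustrative:

```python
# Hypothetical triage rule following the definition given above: an issue is
# critical only if it touches subject safety or data integrity. The other
# severity levels here are illustrative assumptions.
def severity(affects_subject_safety: bool, affects_data_integrity: bool,
             blocks_site_workflow: bool) -> str:
    if affects_subject_safety or affects_data_integrity:
        return "critical"
    if blocks_site_workflow:
        return "medium"
    return "minor"

# A wrong eligibility calculation is critical; a cosmetic report typo is minor.
print(severity(affects_subject_safety=False, affects_data_integrity=True,
               blocks_site_workflow=False))  # critical
print(severity(False, False, False))         # minor
```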

[19:55]

So just to tie everything in here. From a best practices standpoint, and just to hone in on the testing piece, what we found really beneficial is to invite the test lead and the technical PM to the kickoff meetings; get them in as soon as possible so they’re aware of any protocol nuances, design, etc. Involve them again at the requirements gathering meetings. Those are really important meetings to make sure that if you’re asking for a shiny red car, you’re getting that shiny red car at the end. The third, which we spoke a lot about, is embracing risk management as part of your testing approach. Then promoting testing transparency, so again involving all of the stakeholders, making them aware of what you’re testing, what’s in scope, what’s out of scope, so that everybody’s clear, and if there are any issues that come about post-production, everybody’s signed off on that. And then last but not least, which is pretty obvious, is promoting study teamwork, so working with everybody in a collaborative fashion.

And then lastly, just thought I’d add some humour here. We need to squash the bug, and do it before it grows, because at every stage the bug is going to grow. So make sure that we’re asking questions up front, from a design perspective, requirements, etc., and not leaving that to the end at UAT. An important point there too is getting your team heavily engaged and involved in a prototype meeting, so that they understand this is what they asked for and this is what they’re going to get.

And just to conclude, if any of you have read Jim Collins’s book Good to Great: how can we get to that stage, the great stage? We’re at the good stage, let’s get better. So make sure we’re placing the right people in the right seats and, most importantly, at the right time.

This is again another hot topic: making sure we’re performing user acceptance testing, not validation. Again, it really depends on the terminology and the understanding that you have within your organization, but we need to ensure we’re clear on the distinction. A lot of times the vendors will perform the validation, and when we’re doing user acceptance testing we need to make sure that we’re not re-validating the system.

And then last but not least, for us what’s worked, again, is utilizing a centralized model. It has allowed us to be efficient and effective to reduce the risks and increase the quality regardless of the technology provider that we use.

All right, and that’s it.

MODERATOR

Thank you very much for that. It’s clear that you and your team have put a huge amount of investment into this, to plan for scale and effectively give us vendors an advance warning system. If things are going wrong, you’re going to give us a little bit of a squash. We don’t want to be the bug that gets squashed.

So a real takeaway for me is that these guys are clearly planning for the future, they’re clearly planning for scale, putting a model in place that will accommodate that growth. So really interesting concepts, thank you very very much.

Could I open it out to the audience for some questions, please.

[Q&A section begins at 24:08]

AUDIENCE MEMBER

Hello. We had a discussion yesterday, so you know that this will come. So, performing UAT and not validation, that is something that makes me really, really excited, because in my world I’ve been working with computer system validation for a long time, and I say that ePRO or handheld devices, whatever, are also part of that. Now, what is the definition of validation? Validation is that you make sure that the system or software that you are developing is functioning or performing according to your user requirements and specifications, correct? So that is validation. What is UAT? What is the intention when you are performing your UAT?

[25:05]

GAURI NAGRANI

So the UAT, and again, you know, it goes back to the terminology differences within your organizations, but basically the UAT is doing user acceptance testing to give you that confidence level that what’s been programmed matches your specs.

AUDIENCE MEMBER

Exactly. So the UAT is performed by the customer, by the sponsor, and the developer is performing some kind of testing there. They’re not calling it user acceptance testing, because that is my responsibility. On the other hand, they are doing exactly the same, because they cannot release a system or a software to be tested by the sponsor if they have not tested it themselves. Well, that is what we are expecting; we are not always getting it, but that is what we’re expecting, right? So I’m just thinking, I mean, in my world, UAT is a part of the whole validation process. And that’s why I’m getting so excited. That’s also because I love this, I mean, I’m really burning for this, and it must be right.

GAURI NAGRANI

Yeah, and I think it’s a matter of semantics. I can see where you’re coming from. When we distinguish between validation and UAT: the vendors have done their validation, so the extent and the depth from a technical standpoint has been covered. Now when you’re doing user acceptance testing, the question, or a thought that comes to our mind, is: do we have to do the same level of testing the vendor has already done, or are we looking at covering high-level scenarios that are clinically relevant, that are potentially going to occur in production? Because what we’re trying to do is mimic issues or things that might happen, or the flow, during production. So again, I think it’s a matter of semantics, but UAT to us has been more high level, versus the deep-dive validation that occurs at the vendor level.

AUDIENCE MEMBER

So just one last thing. I can agree on the semantic part, but we should keep in mind that the validation activity or execution is not finished when the customer has performed a UAT, because the validation process ends when I say that I’m accepting the system to be set into production. But then the developer actually needs to push or release the software as such, because I don’t have the power; I can only accept it in writing.

GAURI NAGRANI

Yeah, and there’s a really good article by the ISPOR Task Force on ePRO validation, so I highly encourage everybody to take a look at that article.

AUDIENCE MEMBER

And maybe just to add, from the CRF point of view, as kind of a non-technical person within CRF, the way I envision it is: we validate our platform, and then the user acceptance testing is done on the bits that we build on top of that for a specific study.

AUDIENCE MEMBER

Exactly, I fully agree with what was said here, because we face the same, you know. Validation I would expect from the vendor side, because you have the database, the programs, the mechanism there, which we also clarify with an audit up front, you know, a vendor audit. On the customer side, I would see that we do the user acceptance testing. And we also involve the monitors or the CRA who is at the site. Okay, the UAT is then two hours longer, yeah, but we get nice feedback, because these are the people who really use this. If I do user acceptance testing myself, you know, I’m dealing every day with different systems, so I do not see its acceptability, because I know how to get this, I know how to proceed, things like that. So we really involve the clinical team, even outside; if we get someone from a site, even better, you know. So I agree, and because we are doing it internally, it’s another option. It is a long UAT, but then we can be sure that it is really accepted.

[30:00]

And the second question is, who is writing the test scripts for you? Because you can buy them meanwhile from the vendor, but then you get, I think, their validation manual, like 200 or 500 pages, depending on the program. If we do it ourselves, you know, it’s not my favourite job. So normally you take the user manual, and I usually cover each and every thing; we are not so deep into every single protocol. So who is writing the user acceptance scripts at your place?

GAURI NAGRANI

Yeah, that’s a good question. So actually our centralized UAT testing team write the scripts. They get involved right from the get-go: the test plan, writing the scripts, executing, the findings log, the test summary report, the whole piece right to the end, to say, hey, we’re good to go and we’re good to release this system. And from that perspective we’ve actually, to Paul’s point, looked at it from a risk-based perspective: what’s core, what’s standard, what’s non-standard or customizable. We have standard libraries in place, so we can just pull those libraries and tweak them as necessary. So we actually do have a centralized team that does all of the things that a lot of people don’t like doing.
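To illustrate the library-and-tweak approach described here, a hypothetical sketch; the library contents and tier names are assumptions for illustration only.

```python
# Sketch of pulling UAT scripts from a standard library by component tier
# (core / standard / non-standard). Library contents and tier names are
# illustrative assumptions, not an actual script library.
SCRIPT_LIBRARY = {
    "core": ["login_and_logout", "audit_trail", "date_time_handling"],
    "standard": ["daily_diary_entry", "alarm_and_reminders", "sync_to_web_portal"],
    "non_standard": [],  # custom scenarios are authored per study
}

def build_test_plan(components: dict[str, str]) -> list[str]:
    """Map each study component to library scripts, flagging custom work."""
    plan = []
    for component, tier in components.items():
        scripts = SCRIPT_LIBRARY.get(tier, [])
        if scripts:
            plan.extend(f"{component}: {script}" for script in scripts)
        else:
            plan.append(f"{component}: author a new risk-based script")
    return plan

print(build_test_plan({"eDiary": "standard", "eligibility report": "non_standard"}))
```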

AUDIENCE MEMBER

I have a question more from a user perspective, and I was intrigued by the slide that you showed about quality and ROI. The ROI, how do you define it?

GAURI NAGRANI

Yeah, so the ROI is basically based on head count: having one person working on several different studies. We’re actually getting our bang for the buck with the centralized model, because they’re able to work on different studies and they’re more efficient and effective.

AUDIENCE MEMBER

So there’s real hard data that shows that the centralized model at the end of the day saves money?

GAURI NAGRANI

Right.

AUDIENCE MEMBER

Thank you. From an organizational aspect, which I find really intriguing or interesting as well: I very much follow the centralized model, things like getting the people in the right seat at the right time. One of the benefits of having a centralized group, of course, is that you see a lot of it, you see all of it across these different studies, and I think you mentioned 250 studies for different study teams. So how much discussion or debate is there from your team, with all the experience you’re getting, into the design stage? Because, as you mentioned, the good idea is to bring the test lead into some of the kickoffs, but it seems like the design might be very much set, which means that the requirements are set for your team. And I can kind of foresee you getting their design, getting the requirements, and saying, we’ve seen this and actually we have some good advice and input here. So how does that work organizationally? Is that accepted, or are you just expected to come in later and do the testing? How does that collaboration work?

GAURI NAGRANI

Yeah, so that’s a good question. If we distinguish the two roles, we have the eCOA SME and then we have the eCOA UAT lead, for example. The eCOA SME will be involved right from the get-go. They have the protocol, they’re aware of the protocol, and they get involved right at the first requirements gathering meeting, so that way it’s not too late, and the requirements are not set in stone. And they’ve seen different designs, so they’re able to proactively bring in solutions. The UAT lead will get involved towards the mid stages rather than the initial stages of the requirements gathering, so when it’s at draft or draft-final stage, that’s when the UAT lead gets involved.

AUDIENCE MEMBER

And I think this actually speaks to CRF in particular; our philosophy for building a study is a very collaborative approach. With our design tool, we’re basically almost doing UAT the whole time as we go along with the sponsor, because we’re able to demonstrate the screens, work through the flow, and change things literally there as we’re sitting in the design meeting. So we almost frame the UAT as: there should be zero surprises in the UAT, because you’ve seen all of this already, and it’s just making sure now that everything flows as it’s meant to. Any other questions?

AUDIENCE MEMBER

Just out of interest, how many of your studies used standard questionnaires and how many were created by Chiltern?

[35:00]

GAURI NAGRANI

So we don’t create questionnaires; they’re typically created at the sponsor level and then we get involved in the requirements. But just to give you an idea, we’ve seen a variety of customized questionnaires, though not very many from an assessment or instrument perspective, not as many as we see on diaries.

AUDIENCE MEMBER

Very interesting. Just out of my curiosity, because it seems like you are building up UAT expertise in the group, right? That’s very good. And how are you going to motivate them if they have to do UAT day to day?

GAURI NAGRANI

Yeah, so it’s an interesting question. Kind of given the introduction, you saw that I oversee the IRT group, eCOA, Spotfire, and so how we motivate them is working with different vendors and different systems, so they’re learning the different systems, different domains. That’s how we motivate them. And actually, to my surprise initially a couple of years ago, some people just like testing; they love testing day in and day out. So, you know, two factors there. Hopefully that helps answer your question. We have a lot of people who want to hack the system.

AUDIENCE MEMBER

I’m just curious, seeing as you’ve been involved in this for a relatively significant chunk of as long as eCOA has existed: have you seen things get better over time? Have you seen quality improving? As people across all walks of life within the industry get more familiar with it, have you seen things get easier in regards to UAT?

GAURI NAGRANI

So I think, with eCOA, what we’re finding is, given the complexity of the design, and also the timelines that we’re faced with, and I think all of you are faced with that pressure, I don’t necessarily feel like we’re getting better, just to be honest. If I can distinguish from a device perspective: when we talk about the web perspective, I can see improvements there.

MODERATOR

Any further questions at all? No? Okay. Thank you once again.

[END AT 38:10]

 
