I’m delighted to invite up to the stage now Hayley Jeffery, who is the Helpdesk Manager at CRF Health, and Tunde Nagy, who is the Senior Operations Delivery Manager at Stefanini. They’re going to be talking to us today about something that can kind of be overlooked in the discussions around validation and UAT and hardware failure and all these good things, but is in fact a key aspect of a successful eCOA study, and that’s the Helpdesk. They’re going to discuss some of the learnings they’ve developed over the years about what the important metrics for Helpdesk systems are.
So Hayley has joined the company relatively recently and kind of revolutionized our Helpdesk team, has done some amazing work getting our Helpdesk into a world class situation. And Tunde and Stefanini have been very close partners of ours over the years, so it’s great to have them here presenting with us.
So without further ado, Hayley and Tunde, thank you so much.
Good afternoon, can everyone hear me now? Okay good, that’s a good start. Thanks Paul, thank you for your kind words. Thank you everyone for joining us today. So as Paul said, we wanted to share some of our knowledge around the Helpdesk and some of the key metrics that we are monitoring and recording. So this is a small example. And then Tunde is going to run through some of the key categories and issues that we receive and how that sort of impacts sites and patients. So as I just said, our agenda today is to sort of discuss some of the key metrics, and then our request areas, and most importantly patient support through eCOA.
So we do log a lot of different metrics, but today we’re just going to focus on the top four. The top one being customer satisfaction, so making sure that our sites and patients feel supported throughout the trial and that the Helpdesk is providing that support. First call resolution, so making sure that when we receive calls we’re answering them and handling the issue within that first call. Average speed of answer, so making sure that when the call comes in, we are handling it quickly so that people are not on hold for long periods of time. And lastly, the Tier 1 and Tier 2 efficiency. So we have a Tier 1 that handles the initial call, and anything that’s escalated goes to Tier 2. We also review the metrics around that and make sure that we are handling everything we can at Tier 1 level. Again, that falls back on customer satisfaction. So we’ll just dive deeper into each one.
So under customer satisfaction, we actually utilize a survey for all inbound calls. And that gives the sites and patients the options to leave some feedback on how that call was handled. So it can be done in many ways—net promoter score, email survey, etc. This is the way that we’re currently doing it. And it’s really important to capture that information so that if we do receive any negative feedback, the quality team can review it, check to see whether it actually relates to the agent who handled the call or whether perhaps there’s some negativity or frustration from the site or the patient around the devices they’re using. And so, any calls that are negative can then be followed up with the sites. That’s something that we currently do. Our Tier 2 will reach out to our delivery team to follow up on those. And it has helped. It helps alleviate some of the frustrations that sites have, because at the end of the day, our unhappy customers are our greatest source of learning, and if someone’s unhappy with the Helpdesk, then we need to follow up and make sure that’s resolved.
First call resolution. So as I was saying, resolving the issues within the first call is vital to keep everyone happy. It ensures quicker resolution times, it gives us quality metrics, and it really instills trust and confidence in our sites and patients that the Helpdesk knows what they’re talking about. So we aim for 90% first call resolution for any issues that are in scope. And when we talk about the Tier 1 and Tier 2 efficiency in a minute, you’ll see that we obviously can’t have everything in scope for Tier 1; there are some things that have to be escalated to Tier 2. But for anything that can be handled by Tier 1, we aim for 90%, and for the last six to eight months we have been achieving that. And that’s really important because, as I said, sites and patients want the answer there and then. They don’t want to have to phone back another time, they don’t want to have to wait days for answers. They’re busy people; they need to get the answer, get the resolution, and go. And we also utilize additional tools, so we have an internal web chat between Tier 1 and Tier 2. If Tier 1 can’t provide the answer, they’ve got someone to reach out to who can get them that answer, and that helps us maintain that 90% FCR.
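As an editorial aside, the FCR metric described here is simple enough to sketch in a few lines of Python. This is only an illustration of the arithmetic (in-scope calls resolved on first contact divided by all in-scope calls); the record fields `in_scope` and `resolved_first_call` are hypothetical, not CRF Health's actual ticket schema.

```python
# Hypothetical sketch of the first-call-resolution (FCR) metric described
# above. Field names are illustrative, not an actual Helpdesk schema.

def first_call_resolution(calls):
    """Return FCR as a fraction of in-scope calls (None if none in scope)."""
    in_scope = [c for c in calls if c["in_scope"]]
    if not in_scope:
        return None
    resolved = sum(1 for c in in_scope if c["resolved_first_call"])
    return resolved / len(in_scope)

calls = [
    {"in_scope": True,  "resolved_first_call": True},
    {"in_scope": True,  "resolved_first_call": True},
    {"in_scope": True,  "resolved_first_call": False},  # needed a callback
    {"in_scope": False, "resolved_first_call": False},  # excluded from FCR
]
rate = first_call_resolution(calls)
print(f"FCR: {rate:.0%}")  # 2 of 3 in-scope calls resolved first time
```

Out-of-scope calls (the ones that must go to Tier 2 by design) are excluded from the denominator, which matches the talk's point that the 90% target applies only to issues Tier 1 is allowed to resolve.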
So average speed of answer and abandonment rate are also very important. As I said, we don’t want sites and patients having to wait ages for their call to be answered. So we utilize our Follow the Sun support model to ensure that we have the right number of agents available to handle the calls in the right languages. And we track this to make sure that we are sticking within a certain average speed of answer. We were previously working to 30 seconds, and we’ve revised that to 25 seconds. And again, we are managing to maintain that. It also makes sure, especially if there is a peak hour or a peak language, that we are covering that, so that people aren’t on hold and dropping off.
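The two queue metrics mentioned here can likewise be sketched as a small calculation. This is an illustrative example only, under the common conventions that ASA averages wait time over answered calls and abandonment rate is the share of all queued calls where the caller hung up; the field names are assumptions.

```python
# Hypothetical sketch of average speed of answer (ASA) and abandonment
# rate. Each record holds the seconds a caller waited and whether they
# hung up before reaching an agent; field names are illustrative.

def asa_and_abandonment(queue_events):
    answered = [e for e in queue_events if not e["abandoned"]]
    asa = sum(e["wait_seconds"] for e in answered) / len(answered)
    abandoned = sum(1 for e in queue_events if e["abandoned"])
    return asa, abandoned / len(queue_events)

events = [
    {"wait_seconds": 12, "abandoned": False},
    {"wait_seconds": 28, "abandoned": False},
    {"wait_seconds": 35, "abandoned": False},
    {"wait_seconds": 90, "abandoned": True},  # caller dropped off the queue
]
asa, abandon_rate = asa_and_abandonment(events)
print(f"ASA: {asa:.0f}s (target 25s), abandonment: {abandon_rate:.0%}")
```

Note the design choice: abandoned calls are excluded from ASA (they were never answered) but counted in the abandonment denominator, which is why the two numbers have to be tracked together.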
Same with emails. So email is actually a secondary contact channel for the Helpdesk. We tend to receive more of the issues over the phone, typically because they’re important and need to be resolved quickly. But we do receive some logistics requests and account requests via email. And although they may not seem as important, we still need to treat them as such. So the email response time we aim for is 100% within three hours, and we always meet that. We also aim for 95% within one hour, and we tend to meet that too. That just ensures that the sites, again, are not waiting a long time for emails to be responded to.
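The two-tier email SLA described here (100% within three hours, 95% within one hour) is easy to express as a check over response times. A minimal sketch, with the tier thresholds and the `email_sla` helper being illustrative assumptions rather than the actual reporting tool:

```python
# Hypothetical sketch of the two-tier email SLA described above:
# a stretch goal of 95% within 1 hour, and 100% within 3 hours.

def email_sla(response_hours, tiers=((1.0, 0.95), (3.0, 1.00))):
    """Return {threshold_hours: (achieved_fraction, met_target)} per tier."""
    results = {}
    total = len(response_hours)
    for threshold, target in tiers:
        achieved = sum(1 for h in response_hours if h <= threshold) / total
        results[threshold] = (achieved, achieved >= target)
    return results

# 20 emails: 19 answered within an hour, 1 in two and a half hours.
hours = [0.5] * 19 + [2.5]
report = email_sla(hours)
for threshold, (achieved, met) in report.items():
    print(f"within {threshold}h: {achieved:.0%} ({'met' if met else 'missed'})")
```

In this sample month both tiers pass: 95% of emails land inside the one-hour stretch goal and all of them inside the three-hour commitment.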
And then Tier 1 and Tier 2 efficiency. As I said, we do tend to have a lot of answers handled at the Tier 1 level. However, there are certain things, like logistics requests, account requests, and some issues, that Tier 1 can’t always answer, so they are escalated to our Tier 2. Approximately 75-85% are resolved at Tier 1, and of those that are escalated up to Tier 2, it’s a very small percentage, only around 4%, that actually relates to technical issues that Tier 1 couldn’t resolve. The remaining percentage is requests for logistics and accounts that have to be approved at a study or delivery team level, and as such have to be escalated to Tier 2. Having that 80%, and tracking it, is key, because we don’t want sites and patients to have to wait a long time for answers, and typically when something is escalated to Tier 2 it will take slightly longer. So we do watch that, and it is a key metric that we report back on to show we are within that. The green line at the top shows how many have been escalated to Tier 2, but the red line shows the figure excluding the TrialManager and logistics requests, and that is actually below 5%.
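The "green line versus red line" distinction here, overall escalations versus escalations excluding the requests that must go to Tier 2 by design, can be sketched as two rates over the same ticket set. The ticket structure and category names below are hypothetical, chosen only to mirror the categories named in the talk:

```python
# Hypothetical sketch of the Tier 2 escalation metric described above:
# the overall escalation rate (the "green line") versus the rate
# excluding TrialManager/logistics requests that are escalated by
# design (the "red line", which should stay below 5%).

def escalation_rates(tickets):
    total = len(tickets)
    escalated = [t for t in tickets if t["tier"] == 2]
    technical = [t for t in escalated
                 if t["category"] not in ("trialmanager", "logistics")]
    return len(escalated) / total, len(technical) / total

tickets = (
    [{"tier": 1, "category": "technical"}] * 80       # resolved at Tier 1
    + [{"tier": 2, "category": "trialmanager"}] * 10  # escalated by design
    + [{"tier": 2, "category": "logistics"}] * 6      # escalated by design
    + [{"tier": 2, "category": "technical"}] * 4      # true Tier 1 misses
)
overall, technical_only = escalation_rates(tickets)
print(f"escalated: {overall:.0%}, technical-only: {technical_only:.0%}")
```

With this sample of 100 tickets, the overall escalation rate is 20%, but once the by-design escalations are excluded only 4% reflect issues Tier 1 genuinely could not resolve, the same shape as the figures quoted in the talk.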
I will now hand over to Tunde, regarding our requests and incidents.
Thank you Hayley. Before diving into the upcoming slides, I would like to set some context. The upcoming slides look at different categories, for both incidents and requests. They will help us better understand what kind of requests and incidents we’re receiving, what is within the benchmark for this area and this market, where the areas are where we could look for further improvements, and where we can offer better support and take things to the next level. It’s very important to understand that, by the Helpdesk definition, the support is an integration point between the study teams, the patients, and the upper levels of support. So it’s normal to see different categories in the upcoming slides, like account creations and some of the study-related questions, because everyone is referred to one central number that offers them support in their language, on their preferred channel.
As Paul mentioned in the intro, the last year was indeed a very dynamic one. We have been through a lot of volume increases, ramp-ups, and transformation processes, which helped us not only to define what a usual or normal trend line is, but also prepared us to look more towards the future, define what the forecast for the upcoming period should look like, and learn from some of the processes and study launches that we had. We look at those as something good that happened: there is a lesson we can learn there, and, with Hayley’s support and working closely with the study teams, we can decide what to do better and highlight the risks that might put us in a position where we capture a negative voice from our customers at the end of a call. We try to think and work towards a more proactive approach, overcome some of the concerns in the areas where we are able to, and shorten the timeline of the remediation process. And indeed, I relate to what Paul said at the beginning: the Helpdesk tends to be the last item or worry for anyone. I know that in a ramp-up process we might be running a bit late with the setup and not have everything ready, and in some cases, depending on the complexity and how dynamic the startup is, this might be noticed, or we might have time to recuperate. But there are some risks as well.
So in today’s session, what we see in this slide is two to three main categories, covering 70% of the type of requests and incidents that we are receiving in eCOA support. And I must say that this is within the normal benchmarking: in all cases, and in all markets, we receive TrialManager and account management requests. These are the normal cycle. Here we can see the trend over the last 13 months, and it’s showing a healthy trend line. Of course the percentages can vary from one month to another, because the pattern closely follows the business dynamics: if we have larger studies going up and ramping up, then we might see a higher percentage of TrialManager requests, creating new accounts and so on. That is closely followed by the study-related questions: if not everything was clear for the sites, they may have issues or questions they are not sure about. The technical part comes into play when they encounter challenges in sending data, and in the upcoming slides we will take each category in turn, with some examples, to deep dive into what type of requests we have under the TrialManager questions as well.
In this slide we see the breakdown under TrialManager, where account management is of course the biggest piece, covering 77% of the total volumes. As mentioned earlier, this mainly follows the pattern of business growth and dynamics. The number two category, although at roughly ten times smaller volumes, is discrepancies between the diary and TrialManager. There are isolated cases when this can happen, and usually these incidents are raised by the patients or the study teams. I didn’t mention in the previous slide what the split is between the study teams reaching out to our Helpdesk and the patients: today we are in the range of 60% study teams and 40% patients. The TrialManager requests are definitely something we receive from our study teams; the patients won’t come with new account creation requests. This is one of the categories where, by the process and the current setup, the tickets are escalated to the Tier 2 support team, working closely with Hayley and her team to have them up and running.
The third significant category that we would like to mention here is the how-to questions. These usually come in from the sites, typically from new site staff. Maybe they don’t know where to check whether an account creation request has been approved, or they have how-to questions, and the Helpdesk follows the procedures in place, guiding them through how to understand the data or where to find some of the links, and setting expectations in terms of timelines and process alignment.
Moving to the number two support type, incidents, I would say, because under technical we won’t really have requests. The main category that we see here is under the diary. This captures the biggest volumes, and it mainly concerns frozen devices or a lost application. These are technical challenges that patients might face, where they need to reach out to someone for remote assistance and guidance, either back to the manual or through the troubleshooting steps, step by step, so they can have everything sorted out and move on to completing the diary.
These are usually incidents reported directly by the patients. In some cases they might be at the site and calling from there, but in the majority of cases patients are calling in directly from their homes.
Then the second category is the TrialSlate. This is more on the account management and logistics side, where we are mainly referring to locked accounts or instances where they just need replacements. And then we have the online updates. For the online updates, we have noticed a slight increase in the last few years, and I believe this is something that is going to keep increasing, looking at the trend line and the dynamics we have seen since the beginning of last year. And hearing a bit from the earlier discussions, I believe that with digitalization this might be a new tendency that we aim to follow, so we can offer bigger scalability and support to our patients as well.
An important aspect is that under the technical incidents, on average 92% of the cases are resolved at the first level without escalation. This is mainly because we have a very good and very well documented knowledge base and processes in place. Looking back at the dynamics, this was one of the items we worked on intensively with Hayley: how to look at the larger categories and how to improve the timeliness of the resolution that we offer to our patients. Because, as in the earlier slides where Hayley was showing the customer satisfaction results, we know by definition that the main drivers are the timeliness and the efficiency of the resolution we were able to provide. Customers don’t know, and don’t need to know, who might need to get involved behind the scenes and behind the processes. They are looking only at their request, incident, or problem, how quickly we respond, and how quickly they can resume business as usual on their side.
Moving to the last category under the support types, the study-related questions. This is one of the categories where we see different proportions when we look at the percentages. I wouldn’t say we had challenges here, but this is definitely an area where we had some improvement points during the last year, and where we can still further improve, working together with the study teams and looking at our processes. In many of the cases where they have questions on how to fill in the diaries, these are things that can be covered during the initial training, or maybe we can integrate additional staff during a ramp-up period, in case they need any additional help, and take a proactive approach of reaching out to them.
Here we have included some examples of the type of questions we receive. The protocol visit questions are questions that we mainly receive from the study teams, not necessarily from the patients. Filling in the diary and PIN code requests, on the other hand, are the two categories that remain, in the majority, raised by the patients.
Here we have the top three patient support request and incident types that we covered in the previous slides, along with the four pillars that can help us drive improvement that can be directly measured: taking the customers’ voice, whether positive or constructive, and using their feedback to work towards a more positive approach or close some of the gaps we might have. First is tailored support for the patient needs. Then the patient experience, where we are detailing the current scalability and availability of our Helpdesk: as part of the monthly or quarterly reviews we have with the business, we look into increases or spikes in different countries and regions, so we can meet their needs by scaling up our team, better understand their needs, and proactively close some of the gaps we might have. Then we have data privacy, which is of course an ongoing item. And finally the focus on quality, always looking at how we can add more value in the support we are providing.
And I believe that that would be all.
Thank you so much. Three-hour response time to emails, who’s hitting that metric? I think that was really, like I said at the beginning, a really fascinating insight into an area that we don’t necessarily think about all that much when we’re running our eCOA studies. We’re just looking at the data at the end very often. But it’s a piece that kind of underpins everything we do and really helps ensure we achieve quality studies.
So can I open it up to the floor for any questions people might have?
First of all, just a reference back to yesterday. If anyone remembers anything from my presentation yesterday, the Helpdesk is of course also a very good source of signals and indications that something is a problem somewhere, so I encourage you to use it as a source for that as well. I think this was great, and of course highly relevant and important, the Helpdesk. Can you, for those of us who don’t work with this every day, give us an idea of what you’re working with? You talk a lot about percentages and average times and so forth. How many calls do you handle per month, and how many ongoing studies are you handling in parallel, with different designs and different countries and all that?
Well, today our support is at the scale of almost 400,000 patients actively supported, across a range of 3050 active studies. The dynamics have increased quite a lot during the last years; in the last two years, our volumes simply doubled. In terms of monthly volume, we are speaking about a range of 4,000-5,000 calls, and in total average volumes, including the emails and the different sources and types of tickets, we are in the range of 9,000.
That’s a bit. Thank you. And the second question: those were some very interesting slides on the distribution of the types of calls and issues people are calling about. If I didn’t miss it, how is the split between patients calling, CRAs calling, and sites calling?
The split between the patients and the study teams is 40% patients, 60% study teams. And the split between the CRAs and different site staff can vary; it follows the business dynamics very closely. If we have new study launches, it can vary from one month to another.
Any other questions?
Yeah, thanks. This might follow on a bit from Anders’s question, but all these metrics were, I guess, overall, across 98% of the stuff. So if we’ve got an ongoing study that has, say, been running for six months, do you run this off and compare that study to the general metrics and then provide feedback? For example, if around 4% of patient issues are technical overall and a study has 20% technical issues, I guess that is then fed back to the CRO and the sponsor. Is that the normal kind of process?
That is correct, yes. Due to the very good categorization and scalability we have in the system, we closely monitor not only the top three categories that we have shown today, but we go into a more granular view, comparing one study to another, to understand what phase the study is in, how the processes are working, or how the ramp-up is going. So we can tie back not only the volume variations in the categories, but we are also able to break down the customer satisfaction results per study. That way we can understand what may be missing, which processes are not working, or what we need to do differently, and follow other studies that are going very smoothly to bring the others to that level.
Okay I’ve got a slightly naughty question. Maybe not. Most of the apps we use on our own devices, there aren’t support desks supporting those. So are we doing something wrong in the way that we design our ePRO apps that it requires this sort of level of support, do you think?
I would say, yeah, that’s a good question. As I mentioned in terms of tailoring to the study needs, we don’t necessarily put everyone into one global process. Everyone comes with their own processes, and in some cases we adjust; there are cases where we have included applications in the support that previously were not included. So yes, this is an option, and it depends, I believe, on the study team and their desire. In many cases they want to phase it: while they are ramping up they keep application support in-house, and as they add more and more countries, regions, and sites, we look towards a shift. Definitely, in terms of efficiency and scalability, this is something we can look at, and we are looking forward to discussing it more.
It was really interesting, thank you for the presentation. I have one question. I would like to know how the Helpdesk team is trained on each study. What kind of training and support documentation do you have?
So we actually have study-specific training for each individual study that’s put together and delivered by the delivery teams, the teams who are developing the study, the questionnaires, etc. They’ll actually do the training with the Helpdesk; it’s recorded, uploaded as part of the eLearning, and signed off as part of the agents’ training records. Then, to support that throughout the study, we have knowledge base articles, which again are put together by the delivery teams. They can cover things like diary availability, so when to expect the diary to go off, or who to contact for an additional device. And when the calls come in, there are basically thousands of knowledge base articles available to the agent in the library; they’ll look one up based on the study number and then follow that. So it’s sort of a QRG for them to follow, but it provides support throughout the study.
Any other questions? Yes.
So thank you, that was great, impressive numbers. So how many agents work for this Helpdesk?
As Hayley mentioned, we’re pretty much following the Follow the Sun model. In-house we have 12 languages. We are present in eight delivery centers: Brussels, Paris, Cracow, Romania (where we actually now have three centers in different locations), Southfield, and some local teams in China, Dalian and... I can’t remember the other name. In total, in terms of headcount, we are in the range of 200. For the languages that we don’t have in-house, we use TransPerfect.
Any other questions for the guys? I suppose one thing that just came to my mind, if you had to make one suggestion for a change, realistic or not, to what we’re doing in these eCOA studies to significantly reduce the number of Helpdesk calls you’re seeing, what would that change be?
To have a closer look at the study ramp-up: how much time are we allocating for the site training and the patient training? This has been one of their frustrations, that things were rushed during the ramp-up, and of course the availability of the training and knowledge base before the study goes live.
I imagine that resonates.
That was pretty much what I was going to say, to be honest. For the calls that do come in, as you can see, a lot of them are “how do I do this, how do I activate that.” And at that point, if they’re not trained and don’t know how to do it, having to go out of their way to phone the Helpdesk means they’re frustrated from the beginning. We obviously do the best we can to support them through that, but if they had that support and that training from the very word go, they wouldn’t have to make that call in the first place.
Yeah, I think that’s probably something that resonates with a lot of people in the room. Very often we hear this coming up in quite disparate topics: if only we had more time to train, if only we could do more training. So it’s interesting to hear that you ultimately see that reflected in the Helpdesk.
Okay, so unless someone else has any other questions, thank you so much for a really great presentation.
[END AT 33:25]