
The World's Best eCOA Body Map: How Patients Helped Improve Symptom Location Reporting

February 19, 2017

Capturing information on patients' symptom location can be extremely challenging for study teams and patients alike. The burden of ineffective and inefficient body maps creates risks that can impact data quality. We knew we could come up with something much better - for all stakeholders.

Backed by extensive usability research and, most importantly, the patient's voice, this new body map creates the ultimate user experience and delivers a more complete picture of symptom locations. Watch this on-demand webinar to learn key research findings and see our new body map revealed. We'll examine:

  • A variety of body maps used in the industry (the good, the bad, and the ugly)
  • User experiences with various body maps
  • A new body map that users unanimously agreed was easiest to use

FULL TRANSCRIPT

MODERATOR

Hello, and thank you for joining our CRF Health webinar, The World's Best eCOA Body Map: How Patients Helped Improve Symptom Location Reporting. Today we'll be examining the different types of body maps used in clinical trials, and we'll also walk you through a case study we conducted that led to interesting feedback about body maps and how they can best be leveraged for symptom location reporting.

Today’s presenter is Paul O’Donohoe, who is Director of Health Outcomes at CRF Health. Paul is responsible for developing the company’s internal health outcomes expertise and supporting clients across a range of scientific issues that can arise during the course of a clinical trial. He came to us with a wealth of health outcomes experience, and a passion for developing the field of eCOA, and we’re excited to have him as our presenter.

So without further ado, I will pass this presentation over to you, Paul.

PAUL O’DONOHOE

Wonderful. Thank you kindly, Jackie, and thank you very much, everyone, for joining us today for this webinar. I hope all is well in your part of the world.

So as Jackie indicated, we're going to be talking about some work we did with body maps today. And the topics we're going to cover—first of all, to really set the scene, we'll define exactly what a body map is, to make sure everyone understands what it is that we're talking about, as well as exploring some of the challenges that we see with implementing body maps. Then we're going to dive into the real main part of the presentation, which is sharing some results from some user testing that we did at CRF: developing a new body map solution and testing it with users to really understand their experience of interacting with this kind of electronic solution. We'll be sharing some of the findings from that research, as well as discussing some of the implications, both for making sure you deploy body maps successfully and, more broadly, for eCOA. And then we'll also be talking about some of the dos and don'ts, or the body map musts, for when you want to use a body map in your clinical trial on an electronic platform. And then of course we'll have plenty of time for questions and answers at the end.

So diving straight into it. Just to very briefly define exactly what a body map is, it’s very simple, it’s a visual representation of the body. But it’s a visual representation that a patient can interact with. And it allows them to indicate where they’re experiencing various symptoms. So for example, they can indicate on this graphic, on this visual representation, where they’re having pain, they can indicate where they’re having a reaction, for example a rash. And we typically see two different kinds of body map—one designed for indicating or selecting a specific joint, so for example, you want to indicate that you’re having pain in your elbow. But then we have a second kind, which is more focused on the surface of the body, typically things like pain or, as I mentioned, a rash. You want to select an area on the surface of the body, and we tend to see these two different kinds of body maps, but these are the two main kinds of interaction that you want to be able to build into your body map, having patients be able to select a joint or able to select the surface area of the body map.

What I'm kind of curious about is whether people on the call have any experience with body maps in general, so we just have this quick poll question, which you can interact with on the screen, as to whether you have any experience at all implementing a body map in a clinical trial, whether on paper or electronically. Okay, I'm surprised to see that quite a large number of people actually have experience using a body map: about a third of you have used a body map in a clinical trial previously. I thought it was going to be much less, so that's really cool to see. I hope you'll see some stuff that will resonate with your experience here, as much as for people who maybe don't have any experience.

Some of the body maps we've seen previously as a company at CRF Health have varied widely in how they visually appear and how they're laid out, from very simplistic all the way through to really bizarrely detailed. These are some of the ones we've actually implemented in our system previously. And the challenge for users really comes from a number of different areas. You want the body map to be something that the user can really understand and get into, so to speak, so they can really understand the different areas of the body and easily map an area where they are having a symptom on their own body to the actual visual representation they're seeing on the page, or on the screen. But you also want it to be laid out in such a way that it's easy for them to interact with. And you also want it to be usable across a wide range of settings, whether that be different countries, different disease areas, and whatnot.

[05:03]

So we've seen a huge variation in different body maps, but we felt we could probably do better than that. Really, the key drivers behind this study that we ran were, first of all, wanting to create that image of the body to allow users to interact with it. And we wanted to assess how users found that image: did they identify with it, did they think it was a good representation of the body to interact with. But then we also wanted to implement that body map on a handheld and a tablet device, so we wanted to test it across a range of different device sizes and understand the usability of interacting with that body map. So there were two key elements to what we were testing: the actual visuals of the graphic, but also the usability side of things, the actual interaction with that visual. As well as the flow of how you lay out that symptom report. So as well as getting participants to indicate where on the body they were having symptoms, typically you want participants to indicate exactly what that symptom is. That could be a range of different symptoms they're having. And so we wanted to explore how best to implement that electronically also. We also wanted to get feedback on some of those existing body maps that we'd previously implemented, just to get a better understanding of people's thinking around these body maps.

And so we set up a nice little study, just a small scale pilot usability study. We looked at 12 members of the general public. We wanted the results to be as generalizable as possible because these body maps can be used across a range of different therapy areas. So we decided just to focus on the general public. Got an equal mix of male and female. And we got a reasonable range of ages, 22 being the youngest and 69 being the oldest, which isn’t too bad in regards to representativeness when it comes to ages. But most importantly of all, we managed to get half the group who rated themselves as low-competency users of smartphone and tablet devices, and this is really important because we wanted to really be able to understand the usability from the point of view of people who might not have much experience interacting with screen-based devices. It’s becoming increasingly difficult to actually find users who do rate themselves as low-competency with technology as technology permeates more and more aspects of society, so we were really happy to get half of our users actually rating themselves as low-competency.

Another important aspect from the usability point of view was that we had three participants with moderate arthritis in their hands. This was important, number one, because to really maximize the usability of a system, it has to be usable for people with all kinds of impairments, as well as for people who don't have those impairments. And secondly, arthritis is also an area where we see body maps used, so we figured it was reasonable that in future clinical trials, participants who have arthritis in their hands might be interacting with a body map, so we thought it important to include some of them in the sample as well. We were really happy with our sample, we thought it was nicely representative, and we were eager to see what they had to say about our system. But first of all we wanted to run the existing body maps we'd already implemented by them, to get feedback and see what really mattered to the users and what really jumped out at them when they were asked to interact with them.

Unsurprisingly, this stick-man figure was deemed too abstract, and users couldn't identify with it as a body map. Interestingly, this kind of cartoony figure was deemed too detailed: there was too much body detail shown on the graphic, and people just felt that was a bit strange, to be honest, and didn't particularly like that imagery. This kind of more simplified line drawing was deemed a bit difficult to use because all the limbs were very close in to the body, which made it difficult to interact with the different areas of the body, for example if you wanted to choose the left knee or the right hand. The body map was just a bit too compact and not user-friendly enough for a participant to interact with. And this one was basically just deemed too weird looking; one of the comments we got back from users was that it looks like a plastic Ken doll, and it was deemed as not being a very pleasant visual experience to interact with.

[09:48]

And so based on all that, we knew the body map we developed really needed to hit a few key things. It needed to be relatable, so you needed to be able to project yourself into this body, so that you could easily translate the symptom you were having on your own body into this graphical representation. But there was a kind of balancing act we were trying to strike, because we didn't want it to be too specific to any one group: we wanted it to be usable and applicable across a huge range of geographies and a huge range of different therapy areas. So we were trying to get that balance between something you could easily relate to, and something that a hugely diverse range of people could relate to. We knew we really didn't need it to be hugely detailed in regards to anatomy. We just needed it to look appropriately human, but we didn't need that fine level of detail. And on the flip side, it couldn't be too abstract either; it couldn't just be a kind of stick figure rendering. And we really wanted to make it neutral in skin colour, gender, and the general layout of the body, to make it as generalizable as possible. These were the key drivers behind our creation of the visuals for this new body map.

And so what we came up with was this. At first glance, perhaps a bit unusual looking, but it really ticks all those key drivers that we had in regards to being, obviously, very quickly identifiable as human and relatable. But it doesn't really have anything that makes it very specific to, for example, a specific weight or a specific gender. So it's just a general humanoid figure, nicely laid out, with the limbs quite spread, so you can interact with different aspects of the body easily. We created a whole range of different slices and shots of the body, including up-close images of various aspects of the body; you can see the foot and the hand there, but we also have the head, and we have a back image. And so we had all the various aspects of the body that we wanted to test out in the actual user testing. So really the fundamental driver we were trying to get at was that it had to be visually relatable but not weirdly detailed. We think we hit it with this initial image, but obviously we needed to try that out with actual users.

So that was the first piece, getting the actual image right, and then the second piece was the actual user interface, how you interact with the body map. I mentioned that there were two common things you're often trying to get at with these body maps: the joint and the body area. So we developed those two different ways of interacting. On the left-hand side you have a joint selection body map, and on the right-hand side you have that body area selection body map. It's the exact same image; it's just divided up differently in how you actually interact with it. Just to say, this is the handheld version, and all the screens you're going to see through the rest of this presentation are actually the handheld implementation. As I said, we also did a tablet implementation. It looks very, very similar; it's just much more spread out, because obviously you've got much more screen space. We were in fact most worried about the handheld implementation, because obviously we were concerned it might be a bit more cramped, a bit more difficult for users to interact with. So that's what the focus is really going to be on for the rest of the presentation, but the slate version looks very, very similar, just a bit more spread out on the screen.

And so this was kind of the template user interface that we developed. As I said, on the left, the joint selection user interface and on the right a body area selection user interface. We made it very clear which was the left and right side, just so the patient could easily orientate themselves to the body map. And we provided that little guide in the bottom right, showing them areas that they could select and areas that were not selected, so it was really really clear exactly how they were meant to interact with the body map.

That's only one aspect of it, though. There's also the actual symptom reporting. So as well as getting participants to show you where on the body they're having a symptom, you're very often asking them to tell you what that symptom is. In some studies you know what symptom you're asking them to report on, it might just be a rash, so you might not need these kinds of questions. But in most studies there might be a broad range of symptoms the participants will be having. And so we needed a way to capture that. So we had a few different ways of getting patients to input that data: a quite straightforward screen where they select all the different symptoms that apply, and a slightly more detailed screen on the right-hand side, which shows the area they selected on the body map, a really nice graphical reminder that you selected your right forearm or your right upper arm as the area where you're having symptoms, now please tell us what those symptoms were.

[14:53]

And the reason we wanted those two different screens was because there were two different ways of capturing the information from the patients in regards to where and what symptoms they were having. We could first of all ask patients, what symptoms are you having, indicate all that apply: obviously the example on the top that you see on your screen now. If they have no symptoms, they just select that and skip right to the end, where they exit. If they indicate that they do have symptoms—pain, tension, swelling, for example—they're then brought to the body map, where they actually indicate where on the body they're having that symptom. And the screen updates automatically to say, where on your body are you having swelling. If they have more than one symptom, they'll be brought to another body map screen, which would say, where do you have pain, for example. So that's the first way of interacting with the body map: you say what symptoms you have, and then you show on the body where you have those symptoms. The other approach is the exact opposite, obviously, where you say where on the body you're having the symptoms, and then you're brought to a screen that says, okay, you said you have a symptom on your head, for example, or on your face; what is that symptom that you're having, please select from this list. And so we really wanted to understand what was the best, most intuitive way for participants to interact with and report that symptom detail: was it to ask for the symptoms first and then the location, or the location first and then the symptom.
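The branching logic Paul describes here can be sketched in a few lines of Python. This is purely an illustrative sketch, not CRF Health's actual implementation; all function names are hypothetical stand-ins for the real questionnaire screens. "No symptoms" exits immediately, and otherwise each answer drives one follow-up screen per symptom (or per area):

```python
# Illustrative sketch of the two question-ordering flows described above:
# "symptom first, then location" versus "location first, then symptom".
# ask_symptoms / ask_location / ask_areas / ask_symptoms_at are hypothetical
# callbacks standing in for the real data-entry screens.

def symptom_first_flow(ask_symptoms, ask_location):
    """Ask which symptoms apply, then one body-map screen per symptom."""
    symptoms = ask_symptoms()          # e.g. ["pain", "swelling"] or []
    if not symptoms:                   # "No symptoms" -> skip straight to exit
        return {}
    # One "Where on your body are you having <symptom>?" screen per symptom.
    return {s: ask_location(s) for s in symptoms}

def location_first_flow(ask_areas, ask_symptoms_at):
    """Ask where on the body first, then which symptoms at each area."""
    areas = ask_areas()                # e.g. ["right hip", "left hand"]
    # One "You said you have a symptom on <area>; what is it?" screen per area.
    return {a: ask_symptoms_at(a) for a in areas}

# Tiny demo with canned answers standing in for real screens:
report = symptom_first_flow(
    lambda: ["pain", "swelling"],
    lambda s: "right hip" if s == "pain" else "left knee",
)
print(report)  # {'pain': 'right hip', 'swelling': 'left knee'}
```

As the later findings note, users found the second ordering (location first, then symptom) more intuitive; structurally the two flows are mirror images, which is why both were easy to prototype and test.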

We also implemented a zoom function. So for example, if they chose a specific area of the body, you could actually zoom into it, giving more detail and allowing you to capture even more fine-grained detail. You could think of an eczema study or a psoriasis study, or even a joint study for arthritis of the hands, where you might really want to drill into very specific detail and get down to a specific knuckle on a patient's hand to try and understand exactly where they're having that symptom; that's not something you're going to be able to do on a full body map. So we implemented this zoom feature, where if they selected a certain area, we would then bring them into a more detailed close-up of that area, where they could go into more detail about where they're having that symptom.

And so these were all the things we wanted to test with users. We wanted to understand the visual, the imagery, but we also wanted to understand their interactions with that visual and the flow of reporting symptoms and where on the body they were having them.

And so what did we hear back? Well, in regards to the actual imagery, the feedback was really, really positive. People felt it was quite gender-neutral; we hadn't really assigned any particular gender to the visual. They thought the body was nicely proportioned in regards to being able to interact with all the bits of it, selecting different areas on the body. It was nicely laid out, with the arms and legs spread away from the body. And they were happy with the level of body definition or detail: enough that you can orientate yourself on the body, but not so much as to make it look a bit weird visually. And they felt the skin tone was nicely neutral; not necessarily the most natural skin tone, but it was deemed a very neutral grey-beige. And most importantly of all, we asked all users how they would feel if they had to interact with this visual, this body map, over a number of months, as if they were participating in a clinical trial. All users indicated they'd be very comfortable interacting with it.

In regards to the actual user interface: typically you go into these usability studies with the assumption that your solution is going to be completely torn apart. You think you've thought of everything, but then when you actually get out in front of users, they start picking holes in it. We were very surprised that, in fact, really all of the feedback we got was very, very positive. We didn't get a lot of actionable feedback where we had to go back, make updates, and retest. Really, the feedback was hugely positive, which was surprising but really nice to hear.

Here are some representative quotes we heard from people. "Even a child probably would have been able to understand it." "This is very straightforward." "Delightfully straightforward . . . I don't know how it could possibly be simpler." And this is what we were hearing across the handheld and tablet implementations. We also got users to pick words: they were presented a list of positive and negative words and asked to pick out the top five they thought were representative of the body map they were presented with and their experience using it. And again, that was all really positive: "straightforward," "simplistic," and "understandable" really popping out for the handheld device. Those with good eyesight will spot a "frustration" hidden in there, and I think that's related to the wording of the questions we used, which I'm going to come back to later in the presentation. But overall, really, really positive feedback in regards to the experience of interacting with the handheld device. Even more positive for the tablet device. As I said, that's very similar in visuals to the handheld device, just a bit more spread out, but the body map itself is obviously bigger. And unsurprisingly, we heard from participants that they deemed it "easy to use." With the tablet device, even more positive feedback compared to the handheld device. So this was really, really nice to see; quite surprising, as I said, but really good to get the feedback.

[20:36]

In regards to the symptom reporting flow, the vast majority of users said it was more intuitive to choose the body area first and then go on to report symptoms. So this would be an example for a joint assessment, where you would indicate, for example, that you had a symptom in your right hip, and then you'd be brought to a screen that first of all reminds you that you said you had a symptom in your right hip, with both reminder text and a nice little visual of the area you chose. And then afterwards you say what symptoms you're having in that specific area. This is what patients said was most intuitive to them. They could have reported symptoms first and then where on the body they were having them, but again and again we heard them say it was more intuitive to say, I'm having the symptom here, and then be asked exactly what that symptom is. And that actually applied to the zooming feature as well. It made more sense to them if we asked them to indicate specifically where in that zoomed-in area they were having the symptom before being asked exactly what that symptom was. So in this particular example, they'd choose the left hand on the full body, then be brought into that zoom-in of the left hand, where they'd pick the knuckle, and then be brought to the screen that asks them to say exactly what symptom they were having there.

So overall, really positive feedback on the visual, on the graphic we were using, and really positive feedback on the usability of the system. And again, we go into these usability tests expecting to get deflated by users. We were particularly concerned about the handheld device just because the screen is obviously smaller, and you're asking participants to interact with quite small touch points on the screen. But all participants, including the older participants and those with arthritis in their hands, reported it as really easy to use, which was really great to hear.

But of course there were a few impactful observations, which it was great to hear from participants. One of the most interesting ones, I found, was in regards to the zoom feature we have. This was for the hand: they zoom into the hand. We implemented it as you see on your screen there, because that's the most intuitive given the way the full body is laid out, with the hand pointing down. But a number of users, when they got to this screen, were a bit thrown, because this is obviously a left hand, but it's not how they might see their left hand right in front of them. And in fact we had a number of users who held up their left hand and then turned it upside down to match the visual on the screen before actually reporting where they were having the symptoms. So I think there's a really interesting lesson there: you maybe want to flip that visual the other way around, which would be more natural to how the hand is laid out right in front of the patient's face, so to speak.

Something that's much more generalizable than just body maps, and as a health outcomes guy who worries a lot about questionnaires in general and PROs in general, really interesting feedback we got was about the content on the screen. Five of the users consistently skipped over the question text and dived right into interacting and responding before fully understanding exactly what they were being asked. And this would tend to lead to moments of confusion: they'd immediately begin interacting with the body map before stopping and going, okay, what exactly is it I'm being asked here? I certainly haven't seen any research on how common this is across other PROs, but I would not be at all surprised if it's widespread, and I think we probably all recognize it to a certain extent in our own interactions with smartphones and devices: we very much skim-read a lot. And when it comes to these kinds of clinical trials, particularly where you're asking participants to interact with many, many screens' worth of questions, people skim over the question and jump to the response options, trying—not even consciously—to understand the question based on the response options. I think that's quite a natural reaction, and I found it particularly interesting because we ran into it consistently with these five users, who were constantly skipping over that question text. And it was a pretty representative sample; I don't think there was anything unique about how this body map was laid out that made it particularly easy for users to miss what was at the top of the screen. It's really got me thinking about whether there are things we could do to better call out the content we really want participants to focus on from the very beginning, so they can orientate themselves to exactly what is being asked.

[25:54]

Talking to my usability colleagues here at CRF, they've highlighted that this skimming issue is something we see very commonly with screen-based content. Something we've been exploring, again, is not a body map, but the PRO-CTCAE. We worked closely with the instrument owners to develop the PRO-CTCAE on a range of different devices; this is a tablet implementation you see here. One of the things we did to really try and focus users' minds on exactly what they're being asked is to call out the question at the top of the screen in a different colour, as a way of drawing attention to it as soon as they get to the screen, and to really focus users on exactly what they're being asked to do. We haven't been able to usability test this yet, but it really drew my attention to the fact that there are things we could do. And it's something I've spent quite a lot of time railing against: when we're developing electronic versions of questionnaires, we shouldn't just be copying the paper implementation; we should be taking full advantage of the technology, for example by calling out that content to make it the best user experience possible and to get high-quality data and responses from participants where possible. So this is just one approach to addressing that which we've been exploring at CRF, and I think an area we're looking to explore in more detail in the future.

Another issue we ran into, again going back to things much more specific to the body map, is in regards to trying to pick a very specific point for the symptoms. The majority of users said they would like more fine-grained options than, for example, the way we had it laid out, where you could just pick the entire neck. They were saying, well, what if the symptom is on the left side of the neck? Some users also said their symptoms spread across a few different selection areas. And that's something that's easily addressable. Some users also raised the issue that they would have liked a back image as well. A lot of this is just down to the way we decided to implement this usability test: we simply decided not to test the back image. There was quite a bit of different content we wanted to test with the participants, and we didn't want to keep them there for longer than an hour, because attention tends to wane after an hour, as you can surely appreciate. So we just didn't test the back image, and we didn't provide more fine-grained options; we had just quite gross body map choices. But it highlighted the fact that if you are implementing this in a clinical trial, you do need to think very carefully about the expectations of the users. If it's going to be large, broad, quite distinct areas of the body that they'll be selecting, then you don't need to go into that fine level of detail. Or the area of interest might be very, very specific, and if so, it might be useful to provide that specific selection area. It ties into what you're interested in measuring: do you need to get into that fine level of detail, or can it be much broader? Can they just have pain in their left forearm, for example, or do you need to get right down to the level of the wrist, or the front of the wrist versus the back of it? So it was just an indicator to think carefully about ensuring those selection points match the expected area of interest for your specific study. And again, the selection points that we implemented for this test are all adjustable; we just decided on these quite broad selection points for the purpose of the usability test. But they're all adjustable as needed for a specific study.

[29:56]

Another key piece of feedback: again, we implemented just quite generic questions, just template questions. We weren't testing on a specific patient population or in a specific therapeutic area, so we weren't using very specific questions; we were just asking participants to choose an area of the body, and we gave them hypothetical scenarios to enter. But we did get feedback on wording, in regards to how exactly you phrase what it is you're looking for from the participant. And we had a few participants hit that moment of confusion. That's really what you're looking for in usability testing: you want to see participants pause for a moment because they've hit something that breaks the flow of working through the screens. So we had that a couple of times, where participants just weren't 100% sure what they were being asked to do, particularly in regards to specific joints or specific body areas, where they felt it might in fact cover a number of different joints or body areas, or where what was presented on the body map as the selection area didn't match the specific area they were interested in selecting. So the really simple solution suggested by a number of people was just to add "or near" to the instruction: select the joint at or near the selected location. That really opens it up and allows participants the freedom to not have to struggle with, oh, okay, it's kind of near my left knee, it's not exactly my left knee, is it going to be okay if I say my left knee here, because that seems to be the only choice I have. So if you include something like "or the nearest," that gives them the freedom to go with the left knee and not get hung up on having to get it exactly spot on. Again, it's very dependent on the study and what you're looking for in the clinical trial; you might need that really specific level of location detail, or you might not. But just bear in mind that the wording is important, and that it can give people that flexibility in answering the question.

So the key takeaways from this work: again, we were delighted with the feedback we got. It was pleasantly surprising hearing how positive people found the visuals and the user interface. But the images should really be simple. Not too simple, you don't want to get into that kind of abstract stick-figure level, but you need to keep images as simple as possible while still being relatable. The ability to zoom for easier selection was very important and was deemed a really good implementation by users; particularly if you're getting to the level of where on your face, or where on the torso, or where on the hand, you need to be able to zoom in rather than having them try to pick out a very small area on a much larger body map. And be really careful when defining the areas of selection and the questions to be asked of users, getting the balance right between the level of detail you need to answer the clinical question versus giving participants the freedom to select multiple different areas. In theory you could have participants select basically any area on the body map, but that's not necessarily going to give you the best data. You still need to define whether that selection relates to a hip, or to a right elbow. So be careful defining those areas, to understand what level of detail to get into to answer the clinical question, while also giving enough freedom that patients aren't running into that moment of hesitation because they can't select the third knuckle on their left hand. And testing: again, highlighting the importance of testing. Like I said, typically you would get a lot of actionable items back from usability testing to go away and make updates. We didn't get so much actionable stuff from this; we just got confirmation that patients found it usable and user-friendly. But it really can tell you a huge amount. You think you've answered all the questions, you think you've done everything in a very intuitive way, but because you've been staring at it for six months, you're missing the obvious, and that's the stuff you get back from users when you go out and test it.

MODERATOR

Thank you so much, Paul, that was a really great presentation and really great information. So you’ll see here, we have our resource hub listed here. You can go to resources.crfhealth.com and you can find presentations like this one, webinars, ebooks, white papers, infographics, and case studies, with a bunch of information about eCOA. If you’re just getting started or if you’re more advanced in the field and looking for information, we’ve got a list of resources there. So at this time I’ll pass it back to Paul, and we’ll start the Q&A.

[Q&A section starts at 34:55]
