Presented by Paul O'Donohoe, CRF Health
We wanted to present some work we’d done that just wrapped up in the last few weeks, exploring a very specific aspect, a very specific user element you could almost call it, of patient-reported outcomes. We don’t implement a huge number of them, but when you do, you certainly see a lot of interesting variability, and that’s specifically what I refer to as body maps. A body map is a visual representation of the body upon which patients can indicate where they’re having pain or where they’re suffering a rash, for example. Basically, it’s where they can highlight on the body: I am having a symptom here. And it can be a joint, for example, or it can be a surface area of the body.
And there’s a huge variety of these graphics out there. This is just a selection of some of the ones that we’ve implemented before as a company. They go from the bizarrely abstract all the way through to the frankly terrifying. But as you can see, there’s a huge amount of variability in what’s used to try and capture this information from patients. So we’re asking patients to interact with these graphics to say, I am having pain here, for example. And we felt that there could be a better way of doing this: a better way of visually representing the body, first of all, as well as a better way of presenting this information to patients.

So the approach we took was to develop a standard visual for a body map, having learned from all the work we’d done previously. And this was the graphic we came up with. We decided that it shouldn’t be abstract, that it could be a bit more identifiable as human, but also that it shouldn’t swing wildly the other way and be hugely detailed. So we developed quite a neutral graphic: neutral in regards to skin colour, in regards to gender, and in regards to the general layout of the body. And we generated these graphics of both the overall body and specific aspects of the body.

We also designed a solution for capturing data using those body maps, and there were two key types of data that we identified that we wanted to be able to capture. The first was joint pain (which joint are you having issues with?), which is the option on the left, where you can see the selectable joints highlighted. The second was body surface area, for example in a psoriasis study or a general pain study (I’m having pain in my forearm, for example), and that’s the option on the right, where patients can touch a specific body area.
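As a rough illustration of those two capture types, here is a minimal sketch of how a single body-map selection might be recorded. The region names and the record shape are assumptions made for illustration, not CRF Health’s actual data model.

```python
from dataclasses import dataclass

# Hypothetical sets of selectable regions; a real study would configure
# these per protocol.
JOINTS = {"left_elbow", "right_elbow", "left_knee", "right_knee",
          "left_shoulder", "right_shoulder", "left_hip", "right_hip"}
SURFACE_AREAS = {"left_forearm", "right_forearm", "torso",
                 "left_thigh", "right_thigh"}

@dataclass
class BodyMapSelection:
    """One patient touch on the body map: either a joint or a surface area."""
    capture_type: str  # "joint" or "surface"
    region: str        # the selectable region the patient touched

    def __post_init__(self):
        valid = JOINTS if self.capture_type == "joint" else SURFACE_AREAS
        if self.region not in valid:
            raise ValueError(
                f"{self.region!r} is not a selectable {self.capture_type} region")

# A joint-pain selection (the left-hand option) and a surface-area selection
# (the right-hand option) share the same record shape:
joint_sel = BodyMapSelection("joint", "left_elbow")
surface_sel = BodyMapSelection("surface", "right_forearm")
```

The point of the shared shape is that the same storage and reporting pipeline can serve both a joint-focused study (arthritis, say) and a surface-area study (psoriasis), with only the set of selectable regions swapped out.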
We also wanted to explore the best way of capturing the actual symptoms that patients were then reporting. So if they said, I have a pain in my left elbow joint, for example, we wanted to understand how they would report exactly which symptoms they were having in those specific areas. These were neat, very straightforward solutions we came up with, along with some nice user interface elements for highlighting the area that the patient reported, so that we could remind them exactly what it is that they’re talking about.
So obviously we developed these graphics and this flow of capturing data from patients ourselves, and we wanted to see if it actually made sense. So we decided to do some user testing (of course we’re very patient-centric within CRF Health) because we wanted to get some feedback on the actual image itself. I was particularly interested in this considering the vast range of images we were seeing. Exactly what will patients think of the image we’re presenting them? Will they want a super-detailed image, for example? Maybe they want to be able to get a gendered image, maybe they’ll want an image that reflects their own skin colour, or maybe they’ll want something really, really abstract, almost a stick figure.
We also wanted to explore the basic usability, so we wanted to see how patients got on interacting with these images and reporting symptoms on a handheld and on a tablet device. Obviously, this is a key interest for us within CRF Health: how patients get on actually interacting with our systems. Is it a low-burden, intuitive solution? But this is particularly the case with these solutions because they can be quite challenging when you see them implemented. We’re often asking patients to interact with quite small areas on the screen, so we wanted to make sure that wasn’t excessively challenging as well.
And another key aspect that we wanted to explore was the flow of symptom reporting. Did it make more sense for patients to say, this is the area of my body where I’m having a symptom, and this is what the symptom is? Or would they prefer to say, I am having this symptom, and this is where on my body I’m having it? So quite a simple study, but quite a unique one as well, not something we’ve seen before. And it’s a very neat, specific use case.
So for the actual testing, we decided to focus on members of the general public, because we wanted this to be as generalizable as possible. We had an equal mix of male and female participants, and we got a really good range of ages, from 22 to 69. Most interestingly, from my point of view, half were self-reported low-confidence users of handheld and tablet devices. It’s very challenging getting that kind of user in this day and age, particularly in the UK, and I imagine it’s the same in America: finding people who actually say they’re low-confidence using technology. But we managed to get half the sample, six of the users, to be low confidence. And we also targeted three participants who had arthritis in their hands. Obviously one of the use cases for the body map is an arthritis study, particularly in regards to joints, so we wanted to make sure that the solution was usable for an arthritis patient as well. If they’re going to be using the solution, we need to test it out on that population.
When we explored the images used, we presented, as well as our own image, the images that I showed you at the very beginning of this presentation, because these are ones we’ve used before. And generally, the feedback we got on these images was negative. Patients said, for example, that the one on the left was too abstract. This one looked like a plastic Ken doll (I’m not sure, does that translate to American, Barbie and Ken? Yeah). This one had too much body detail, which is a point I can appreciate. And interestingly, this one had the arms too close to the body, making it too difficult for patients to identify specific points, and I’ll come back to that specifically in a moment.
In regards to our body map, thankfully the feedback was really, really positive. They liked the fact that it was gender neutral; no one really said it would be an issue for them to use a body map that wasn’t specific to their own gender. They really liked the fact that the arms were spread away from the body; they felt it was much easier to interact with the body map because it was spread out quite well, so it was easier to get at those bits of the arm, for example, or bits of the torso. They were very happy with the fact that it only showed basic body definition and detail; they didn’t want that really in-depth level of detail that we’ve seen with some body maps. So it’s a pretty blank body map. There’s enough in there that you can orientate on it, but nothing too disturbing. And they were very comfortable with the neutral skin tone as well. Again, we asked whether they would prefer something more representative of their own skin tone, and while some users said it would be okay to be able to choose their own skin tone, they were perfectly comfortable using this very neutral one as well. And they made it clear that they would be comfortable interacting with this body map in a clinical trial across a period of time. So all really positive stuff about the actual image itself.
In regards to the actual user experience, so interacting with the body map and with the symptom reporting, again very positive stuff. “Even a child could have probably been able to understand it,” was good to hear. “Delightfully straightforward, I don’t think it could possibly be simpler.” That’s some of the most positive user feedback we’ve ever received, which was very good.
We did a perception experience task: we presented pairs of contrasting words (simple/complicated, easy/not easy) and asked participants to pick out the words that were most representative of their experience of using the device. This is the word map for the handheld device, which is the one we were most concerned about. For the handheld device, think of an iPhone 5 type size, a relatively small screen for fitting an image like that onto it. But we got really positive feedback from the patients. Straightforward came out as a very prominent descriptor, along with simplistic, understandable, easy to use, responsive, and clear. All really positive feedback on the handheld implementation, which was the one we were most worried about. For the TrialMax Slate, the tablet implementation, we were less surprised that the feedback was very, very positive, easy to use being the overwhelming response from patients.
So I was talking about that flow: how patients report their symptoms and where on their body they’re having them. There were two different flows that we presented. One was: describe your symptoms and then say where on the body you’re having them. The alternative was: say where on the body you’re having your symptoms, and then say what those symptoms are. Again, we had this reminder here: you chose your shoulder, please tell us what symptoms you have on your shoulder. And the feedback we got was that the most intuitive option for users was in fact to report on the body first. This one surprised me, it wasn’t the choice I probably would have made, but this is what we heard consistently from users. They want to say, I have an issue with my shoulder, or I have an issue with my hip, and then be brought to a screen that says, you reported that you have a symptom on your right hip, what is that symptom? And just to be clear, these symptoms will obviously change on a study-by-study basis. These are just very common symptoms we included within the test version, simply to have something for the patients to use; we wanted them to think of hypothetical situations in which they might want to report these symptoms. And they consistently said they want to report where on the body they’re having their symptoms first, and then exactly what that symptom is. A really interesting and useful finding.
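To make the two orderings concrete, here is a small sketch of the screen sequences being compared. The screen wording and the symptom list are placeholders for illustration, not the actual study text.

```python
# Placeholder symptom list; as noted above, this changes study by study.
SYMPTOMS = ["pain", "stiffness", "swelling"]

def location_first(region: str) -> list[str]:
    """The flow users preferred: pick the body area, then name the symptom.

    Returns the sequence of screens the patient would see; the second
    screen includes the reminder of the chosen area."""
    return [
        "Select the area of your body where you are having a symptom.",
        f"You reported that you have a symptom on your {region}. "
        "What is that symptom?",
    ]

def symptom_first(symptom: str) -> list[str]:
    """The alternative ordering: name the symptom, then locate it."""
    return [
        "Select the symptom you are having.",
        f"Where on your body are you having {symptom}?",
    ]

screens = location_first("right hip")
```

Note that in the location-first flow, the second screen echoes the patient’s earlier choice back to them, which is the reminder element described above.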
Another thing we wanted to test was a zoom-in function. Obviously, on this particular body map, the level of detail that you can go into is not that great. You can pick a whole foot or a whole section of a leg or a whole hand, but you can’t necessarily drill down into the details of which knuckle on your hand is having symptoms. So we also wanted to explore the possibility of going into more detail: zooming into a specific area of the body that could then be interacted with in a similar way to the full body map. So on this particular version of the hand, you could interact with each of these areas. Again, you can adjust that on a study-by-study basis. We wanted feedback on what made more sense to the patients for responding to that, and they said that in this particular instance they prefer to choose the detail of what symptoms they’re having within that specific area first, and then say where on the hand they’re actually having that symptom. But the most interesting feedback, I think one of the most interesting findings from the entire study, was that they actually felt the hand was upside-down. Basically we had just tried to replicate what was happening here in the full-body visual, literally envisioning it as zooming in. But when patients were using the image, we actually had them twisting their hands around to line them up with the image on the screen. Again, a really nice example of the kind of feedback that you only get when you sit down with patients and ask them to interact with something you’ve designed. We thought it made perfect sense that we’d just be blowing up a section of the body map. But in fact, from a usability point of view, it made much more sense to rotate the hand around, so that it matched what patients were seeing in front of them and they could more easily identify the area of the body.
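One practical consequence of rotating the zoomed-in hand is that touch coordinates then have to be mapped back into the un-rotated region map before hit-testing. A minimal sketch, with made-up image dimensions and a toy two-region hand map (this is an illustration of the idea, not the actual implementation):

```python
def rotate_touch_180(x: int, y: int, width: int, height: int) -> tuple[int, int]:
    """Map a touch on a 180-degree-rotated image back to the original
    (canonical) image coordinates."""
    return (width - 1 - x, height - 1 - y)

# Toy region map on a 100x200 canonical hand image: in the canonical
# orientation, the top half is the fingers and the bottom half the palm.
def hit_test(x: int, y: int) -> str:
    return "fingers" if y < 100 else "palm"

# If the display shows the hand rotated 180 degrees (fingers pointing up,
# matching the patient's own view), a tap near the top of the screen must
# land near the bottom of the canonical image:
cx, cy = rotate_touch_180(10, 10, 100, 200)
```

The design point is that the selectable-region definitions stay in one canonical orientation, and only the displayed image plus the touch transform change, so the same region map works for both the full-body view and any rotated zoom view.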
So that’s the general picture of the image, the usability, and the flow that patients preferred. But we also got some very specific feedback worth mentioning. To give this context: we deliberately used quite generic questions and symptoms because we wanted the study to be generalizable, but this feedback is worth bearing in mind when you’re designing studies to capture this kind of data with these kinds of solutions. Some users ran into moments of confusion when trying to pick a very specific point for their symptoms on the body map.
So for example, with the hand, we had some users, while trying to pinpoint their knuckle, humming and hawing: well, it’s kind of about there, or should I choose above or below it, or should I choose both? And some users felt their symptoms spread across multiple areas of the body, so rather than one spot, they would have liked to highlight the entire arm, for example. As I said, within this particular study we didn’t want to get hung up on those kinds of things, but I think it really highlights the importance, when you’re designing these kinds of solutions for a specific trial, of having a really good understanding of exactly which areas patients might raise as being important for them. If they’re just going to have something on their arm, and that’s all you’re interested in, then fine, let them choose the entire arm. But if you really want to drill down to very specific areas of the arm, then you might need to divide up the body map in a different way.
The wording of the questions is also important, I think, in making clear to patients that they maybe don’t have to get hung up on being super accurate. The question as we phrased it in the test was, select the joints at the location you are having the symptoms. As I said, some patients got a bit hung up on that because they were really trying to identify the very specific point, whereas if we’d worded it as at or nearest the location you are having the symptom, it might have freed them up a bit cognitively to go, okay, I’m having symptoms around about here. Assuming, for your particular study, that it’s okay to have that slight ambiguity about the precise point of the pain, I think that’s a neat way to ensure that patients are much more comfortable, have an easier time, and don’t have to think too hard when they’re using the solution.
And I think this issue of thinking too hard is actually a really fundamental point that was highlighted within this particular study but applies to everything we’re trying to do with eCOA systems and capturing data from patients within eCOA. Five of the users consistently ignored the actual question stem and went straight to interacting with the symptoms and the body map. And in talking with our UX colleagues, who are very used to designing solutions for websites and mobile phones, they say this is a standard finding within usability work: people ignore what they’re being told to do and go straight to interacting with whatever they see as interactable. So we had patients who would start playing around with picking parts of the body, and then go, actually, what am I specifically being asked to do, and go back up and read the question. Now it’s not a big deal; it doesn’t stop them responding to the questions accurately. But it does make them pause and think, and one of the key tenets of good-quality usability is: don’t make the users think. Have a really intuitive flow so they provide you the data without having to stop and start all the time. And this really got us thinking beyond this specific case, because this question on the left, without the image, could be one of the PROMIS questions that Michael showed us earlier; it could be any of the questionnaires that we commonly administer on these tablets or handheld devices. And I don’t for a second think this finding is unique to this question: people skip the question, go straight to the response options, and then maybe go back to the question to remind themselves exactly what they’re meant to be responding to.
I think there’s some interesting work to be done on exactly how we can call out the really important information in these questions in a more intuitive way. There’s a lot we could explore around highlighting with different colours, or putting a block around the question so it really catches the eye as soon as patients get to the screen, so it’s the first thing they interact with visually before they move on to responding, for example.
And I think this applies to all the questionnaires we administer electronically. It goes back to my rant earlier about just reproducing paper versions of questionnaires on electronic platforms: paper questionnaires are typically literally black and white, with not much going on from a visual design point of view. I think there’s potentially a lot we can learn from web accessibility and web usability, where these questions have very often been answered: how do we best get information off the screen and into patients’ brains, and how do we then capture data back from the patients? They’re doing it in a very different context, but when you break it down to the fundamentals of what they’re trying to achieve, it’s very, very similar to what we’re trying to do. We’re trying to get information into the patient’s brain and pull information out again. So how can we do that in the best, most efficient, most intuitive way? We don’t have an answer to that, or I certainly don’t at the moment, but I raise it as a point for consideration within the community as a whole: there are probably far more interesting things we could be doing from a visual design point of view than the quite stale black and white questionnaire layouts we’re using for the time being.
So body maps, and in particular this body map we designed within CRF, are a really neat and intuitive way of capturing symptom location data. We’re really happy with the feedback. When you run these usability studies you typically get hammered with negative feedback, and that’s kind of what you’re looking for, because then you have something to work on. But we were really happy with the very positive reaction we got to this body map we developed, particularly on the handheld device: as I said, we had some concerns about usability on a handheld, but it was all extremely positive.
I think the takeaway message on the visual side is that it should be simple, but not too simple: not completely abstract, still recognizably human, but pretty simple, gender neutral, ethnically neutral, and without vast amounts of physiological detail. Care needs to be taken when defining the areas to be picked for the questions asked. You need to think very carefully, for your specific study, about exactly which areas patients are going to be having symptoms on or at, and ensure that those areas are selectable within your specific body map. All the selectable areas within our body map are adjustable on a case-by-case basis, so that’s pretty straightforward, but you need to know which areas you’re going to have to make accessible to patients. And the same goes for the questions: can you allow a bit of leniency in the exact positioning of those symptoms? If so, make sure the questions make it clear to participants that we’re not looking for the millimetre-accurate point of where they’re having those symptoms, just a general idea of where on the body. And as I said, there’s definitely further work we can do to capture the best quality data from patients in the most low-burden and intuitive way. That feedback isn’t specific to this particular solution we’ve developed; it applies broadly across all our electronic solutions. There’s still more work to do, and we’re still learning the best way of capturing this data from patients. It’s not just a case of replicating the question on the screen; there’s a whole visual design piece in there as well that we’re not really taking best advantage of.
And that’s it. That’s our body maps solution.