There are many things to consider when developing a questionnaire – the concepts of interest, the wording of questions, the recall period, the response options, whether to use a Likert scale, a visual analog scale, or a numeric rating scale, and so on. An element that is often overlooked, however, is the actual layout of the questionnaire, i.e., how the questions and responses that the respondent will see and interact with will actually look on the screen or page. Great importance is given (or should be given) to ensuring that the questionnaire is easy on the eye and intuitive to follow; but how do these layout decisions affect the respondent’s perception?
A paper from Arizona State University published earlier this year (abstract and summary) highlights the power of layout over how users respond to questions. Researchers presented participants with two lists of symptoms: one list was made up of real symptoms associated with a type of cancer, while the second listed symptoms for a fictional type of thyroid cancer. All participants were presented with the same symptoms; the lists differed only in how the items were arranged. Some participants were presented with lists in which “common” symptoms (i.e., those we all suffer from, such as fatigue, difficulty concentrating, etc.) were clumped together, while other participants were presented with lists in which these common symptoms were interspersed with more disease-specific symptoms such as “lump in neck”.
When participants were presented with the “clumped” lists, they were more likely to diagnose themselves with the disease being described. The researchers surmised that this was because participants got a “run” of positive hits, which appeared more meaningful than a mix of positive and negative hits: “[I]dentifying symptoms in ‘streaks’ – sequences of consecutive items on a list that are either general or specific – prompted people to perceive higher disease risk than symptoms that were not identified in an uninterrupted series.”
The researchers also found that the length of the symptom list had an impact: participants were less likely to self-diagnose with a disease when the positive hits were part of a longer list. They concluded that the effect was “diluted” when positive hits were mixed in with a longer run of negatives.
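The clumping effect suggests a simple mitigation when the order of a symptom checklist is under your control: spread high base-rate items out rather than grouping them, so no long streak of likely positive hits appears. The sketch below is purely illustrative – the item labels and the `interleave` helper are assumptions for the sake of the example, not taken from the study or from any real questionnaire tool.

```python
import random

# Hypothetical item pools -- illustrative labels only, not from the study.
COMMON = ["fatigue", "difficulty concentrating", "trouble sleeping", "weight change"]
SPECIFIC = ["lump in neck", "hoarseness", "difficulty swallowing"]

def interleave(common, specific, seed=None):
    """Return a symptom list in which common (high base-rate) items are
    spread out rather than clumped, so they never appear as a 'streak'
    of consecutive likely hits."""
    rng = random.Random(seed)
    common, specific = common[:], specific[:]
    rng.shuffle(common)
    rng.shuffle(specific)
    out = []
    # Alternate between the two pools while both have items left...
    while common and specific:
        out.append(common.pop())
        out.append(specific.pop())
    # ...then append whatever remains from the longer pool.
    out.extend(common or specific)
    return out

items = interleave(COMMON, SPECIFIC, seed=1)
```

With four common and three specific items the result strictly alternates, so no two common symptoms are ever adjacent; a production implementation would of course also need to consider randomization across respondents.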
That the layout of a questionnaire can have a measurable impact on the data captured is not a newly recognized phenomenon. Two papers, Christian & Dillman (2004) and Tourangeau et al. (2004), neatly summarise some of the effects that can be produced simply by altering how questions look. Highlights of their findings include the following:
- Linear (horizontal) response choices affect respondent behavior. Nonlinear layouts such as double and triple banking, seen in Example A below, are often used to save space; however, respondents are more likely to choose answers from the top row of the banked options than when the same options are laid out linearly, as in Example B below.
- Nominal scale questions should have evenly spaced response options, as the visual midpoint plays a role in how participants answer. For example, participants are more likely to choose “Possible” and “Unlikely” in Example A than in Example B below.
- “Non-substantive” options can also affect the visual midpoint of a question. When a divider line or extra space separates so-called non-substantive options from the substantive responses, the visual midpoint of the scale falls at the conceptual midpoint, as in Example A (“About the right amount”). However, when participants are presented with non-substantive options simply as additional radio buttons (Example B), they are more likely to choose “Too little”, because the visual midpoint of the scale has been skewed.
- Consistency of responses to the same questions differs depending on whether multiple items are shown on a single screen, divided across several screens, or presented one at a time. Participants use a “near means related” heuristic, so items on the same screen correlate more closely than those spread across multiple screens.
- “Left and top mean first”. The leftmost or top item in a list is taken to be “first” in some conceptual sense, with the remaining items following from left to right or from top to bottom in a logical progression. Participants take longer to respond to questions that depart from this heuristic, and they may also use it as a “cognitive shortcut” to answer questions.
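The visual-midpoint effect described above can be shown with a toy calculation. Assuming a single evenly spaced column of radio buttons (and non-substantive labels that are my own illustrative choices, not drawn from the cited papers), the option sitting at the visual centre shifts from the conceptual midpoint to “Too little” once non-substantive options are appended as ordinary response buttons:

```python
# Substantive labels from the example in the text; the non-substantive
# labels ("Don't know", "No opinion") are assumptions for illustration.
SUBSTANTIVE = ["Too much", "About the right amount", "Too little"]
NON_SUBSTANTIVE = ["Don't know", "No opinion"]

def visual_midpoint(options):
    """Option at the visual centre of a single evenly spaced column
    (middle index for an odd-length list)."""
    return options[(len(options) - 1) // 2]

# Example A: non-substantive options set apart; only the substantive
# scale occupies the visual column.
conceptual = visual_midpoint(SUBSTANTIVE)

# Example B: non-substantive options appended as extra radio buttons,
# dragging the visual midpoint below the conceptual one.
skewed = visual_midpoint(SUBSTANTIVE + NON_SUBSTANTIVE)
```

Here `conceptual` is “About the right amount” while `skewed` is “Too little”, mirroring the shift in responses the papers report.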
We might feel we are being completely rational and answering solely on the basis of our subjective truths when responding to questionnaires. However, we are all vulnerable to the ironically invisible effect of visual layout: how things look can affect us in ways we don’t even realize. In the world of outcomes research, the vague, subjective nature of the concepts we are trying to capture requires us to reduce the impact of external influences as far as possible, and the layout of the questionnaire is one influence that should not be forgotten.
This potential impact should be considered during the instrument development phase, and any effects should be investigated when measuring the instrument’s psychometric properties using appropriate quantitative research methodologies. It is also a key consideration when migrating instruments from one modality to another (such as from paper to electronic), where potential changes to the questionnaire’s layout on the new modality should be carefully assessed. CRF Health has migrated more than 130 eCOA instruments from paper to electronic and conducts ongoing research on these issues, ideally placing us to support and advise our customers in these matters.
Please feel free to share your comments or contact me via email: email@example.com
About the Author
Manager Health Outcomes, CRF Health