[SPECIAL FEATURE]
HOW FLAWED RECALL & MEMORY BIAS POLLUTE MARKET RESEARCH AND WHAT CAN BE DONE ABOUT IT
Dialsmith is publishing this series of articles in partnership with ESOMAR’s Research World Connect. This series is part of a broader program, developed and sponsored by Dialsmith, centered on exposing the challenges around recall and memory bias in market research. The program consists of a series of discussions with experts from the market research and academic communities and features a live panel session at IIeX North America and a webinar later this year.
PART IV | CAN WE FIX THIS?
Until now, our series has focused on why flawed recall and memory bias are an issue for market researchers and the impact they can have on outcomes and the industry as a whole. In this fourth installment in our series, we shift to what we can do to address these issues and mitigate the impact of flawed recall and memory bias on market research.
First off, I have to admit we’ve been pretty hard on memory. Memory is not the culprit here. The culprit is our false expectations of memory and what it can deliver for us. Imprudently, we’ve become dependent on memory to relay accurate and detailed accounts of how people were thinking or feeling during some previous experience or earlier event. In our search for answers, we need to be clear that we are not talking about fixing memory. We’re talking about fixing market research.
This distinction is important because not all memory is the same. And contrary to many of the experiences we’ve discussed up to this point, some memory can, indeed, be accurate. As our expert Andrew Jeavons notes, “Memory is not monolithic. We have semantic memory and we have episodic memory. And in the case of semantic memory, if you were to ask me to tell you what Febreze is, for example, I would likely tell you the same answer a month from now as I would tell you today. So, I think there are things you can ask people about certain brands or products that tap into a different knowledge base, which is likely to be more accurate.” Our academic researcher and memory manipulation expert Dr. Elizabeth Loftus agrees. “Yes, when dealing with semantic memory, it is more accurate as you’re dealing with far fewer influences or interferences.” So where memory is concerned, we can’t simply throw out the proverbial baby with the bathwater. What we can do is conclude that not all memory-based questions are bad. If we want accurate accounts from people, we, as market researchers, need to ask the right questions—those that tap into respondents’ semantic memory.
While we’re giving memory a bit of a reprieve, what about memory bias? Is it always a bad thing for market research? Does fixing this issue mean that we need to do away with all memory-based methodologies? I would argue no. Memory bias and external influences are present in the real world and play a role in influencing consumer and viewer opinions and decisions. We need to account for this as well as balance it with people’s untainted, visceral reactions. Both are required to get a clear and true picture of a person’s experience. There are a number of approaches that allow for this—dial testing (my expertise) being one of them. Dial testing continues to be viable because it allows you to capture what each person is thinking, individually, in the moment and then move into a group discussion. Decisions can then be based on what individuals quantitatively told us they thought in the moment and on what the group discussion unveils about what might then happen in the real world. In our work, it’s the combination of those two that’s so powerful.
Our research expert Elizabeth Merrick, Head of Customer Insights at Nest, has the most real-world experience of anyone on our panel in exploring alternative methods and tools that mitigate the risk of memory bias and flawed recall. She notes that, “People are bad at relaying what they did, what they are going to do, and even why they are doing what they are doing at the moment. So, it’s on us as researchers to stop directly asking a lot of these questions and start designing well-controlled experiments to observe these answers.”
As far as methods and approaches to do this, Merrick offers, “I’m a big fan of simulations—give people tasks and see what they do. I do it through conjoint, saturation/deprivation, eye-tracking, dial testing and a lot of predictive modeling. When done right, these approaches can reveal latent patterns that respondents either can’t articulate or don’t even notice. It requires some statistical rigor, which I think, at times, has scared off a lot of researchers who have relied upon traditional qualitative methods to answer these questions in the past. The good news is that recent technology is allowing us to do more with qualitative simulations, so I don’t want to suggest that all research needs to be a quantitative robust study. One example would be virtual reality headsets that allow shoppers to browse the aisles of a concept store—where we get to observe what grabs their attention or what they miss. I’ve seen this done with simple ‘spy glasses’ in a real store too.” Merrick also mentioned that she’s seen good results in using secondary data to validate points whenever possible.
Dr. Loftus, who in the course of her memory research actively introduces external factors and interference to her study participants to gauge their susceptibility to false or inaccurate memories, suggests that providing some sort of “heads up” to participants prior to a session can help them retain a more accurate accounting of what they did or what they saw. “This is something we’ve used on occasion in our studies where we’ve tested telling participants to, ‘Be on alert,’ or, ‘You may want to watch out for so and so.’ And we’ve seen that have a beneficial impact on their recall abilities.”
As we’ve heard here, there’s no lack of technology or tried-and-true methods to help us combat issues related to memory bias and flawed recall. We’ve got new and even well-established tools at our disposal. But the real fix to this issue involves a drastic change, and one that may be harder to implement than developing some new technology. It’s a fix that requires an awareness and an urgency to uproot a long-established mindset. As Jeavons puts it, “We’re going after a central dogma of market research here. Market researchers need to abandon the belief that they can simply ask people questions and get answers they can trust. It’s a hard fight but there’s too much at stake not to fight it.”
In our next and final post in this series, we’ll give a wrap-up from our team’s live panel session at IIeX North America. I’m anticipating some fireworks when we get our team of experts together to face off in person on this topic, so stay tuned.
PART V | “MEMORABLE” HIGHLIGHTS FROM OUR IIeX PANEL