Trackers and the balancing act between flexibility and consistency
Tracking can be tricky. Our clients rarely put it this way, but it’s sometimes obvious that they see the tracking of KPIs as a necessary evil. Maybe the outputs become stale quickly, maybe it’s hard to think of anything new to say to key stakeholders, maybe the metrics haven’t changed at all, or worse, they have changed in a way that wasn’t expected or can’t be explained. All of these happen in the lifecycle of every research tracker at some point, but there are ways to mitigate this and ensure that tracking can be fun – yes, fun – and that comes down to flexibility.
The really important stuff
Flexibility and tracking are rarely mentioned in the same sentence, mainly because consistency is the name of the game when it comes to managing a tracker. Generally, if a tracker is set up properly, the metrics won’t move by more than 10 percentage points in any given wave – a shift that big with a nationally representative sample is usually a sign that something is wrong with data collection or questionnaire design.
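To make that sanity check concrete, here’s a minimal sketch in Python of the kind of test an analyst might run on a wave-on-wave movement: a pooled two-proportion z-test that flags shifts larger than sampling error alone would explain. The function name and the n=1,000 sample sizes are our own illustrative assumptions, not part of any particular tracker toolkit.

```python
import math

def wave_shift_flag(p1: float, n1: int, p2: float, n2: int, z: float = 1.96) -> bool:
    """Return True if the shift in a KPI between two waves (proportions
    p1 -> p2) is larger than sampling error alone would explain at ~95%
    confidence. Uses a pooled two-proportion z-test; assumes the waves
    are independent samples."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p2 - p1) > z * se

# Example: a consideration score moving from 42% to 52% between two
# waves of n=1,000 each. A 10-point swing is well outside the roughly
# ±4.4 percentage points that sampling noise allows here, so it should
# be investigated before it's reported as a genuine change.
print(wave_shift_flag(0.42, 1000, 0.52, 1000))  # True -> flag for review
```

A flag from a check like this doesn’t tell you what went wrong – only that the movement is too big to wave through as noise, which is exactly when question order, wording and sampling deserve a second look.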
Two things we always take into account when setting up the questionnaire for a tracker across multiple waves:
Question order: The order in which questions are asked can have a striking effect on the scores. Participant priming is a real thing, and how a participant answers a question is heavily informed by their experience of the questions before it. We (and our client) learned this the hard way when we were asked to include some usually tracked metrics on an omnibus study instead of within the usual context of the tracker survey: the scores dropped by about 20 percentage points. For this reason, we will never change the order of questions, remove questions or ask them out of context unless we have an extremely good reason.
Consistency of wording, scales and brand lists: Even changing a few words in a question can change how it’s read and how the participant fills in their answer. “To what extent would you be likely to consider buying…” and “To what extent would you be likely to buy…” read effectively the same here, but place the participant at very different stages of the buying process. Likewise, changing a brand list can affect comparability – for example, evaluating H&M against Gucci and Ralph Lauren is likely to see H&M perform worse on certain measures than comparing it against other high street brands like Zara and Mango. And the ‘simple’ addition of a “don’t know” option to a scale – or even changing that scale so that it reads down rather than across – can change scores significantly. We need a very good reason to make any of these changes, and they need to be well documented.
And that’s just the questionnaire – we won’t get into the nitty gritty of ensuring consistency of quotas and sampling here!
Tracking doesn’t need to be dull – the benefits of a flexible approach
There’s a time and a place for bending the rules and making trackers more flexible…
During the pandemic, we found that a couple of our trackers primarily designed to track consumer sentiment did something strange – they started to fluctuate wildly from wave to wave… which is perhaps unsurprising, given that our lives and the news cycle had started swinging just as wildly!
One of our clients in particular, in the tourism industry, recognised the immediate need to capture and track public sentiment: to keep a close handle on how consumers’ likelihood to travel in the coming weeks and months might shift, and to judge when they could start to move the dial by showing more travel, indoor experiences and unmasked people in their comms. Our questions on how people viewed the current Covid situation, and how likely they were to travel in the next 3–6 months, fluctuated significantly across quarterly waves.
We stretched some of the typical rules of tracking to manage this. For starters, we deprioritised certain sections entirely, or removed and re-added them whenever it was called for, and we regularly added follow-ups to specific questions. We also switched to conducting research waves in the aftermath of significant moments in the Covid cycle. All of this meant that our client was armed with a wealth of information and recommendations at key moments, helping them decide how best to communicate with consumers.
We’re unlikely to have to conduct research in quite such a volatile situation again. But brands and businesses are constantly having to react to changing trends in the market, new ways of communicating with their customers, and new tech generally. For example, there will always be “hot topics” that you want to learn more about but that perhaps don’t merit a survey (or investment) of their own. There will also be questions that are only important to track annually rather than quarterly or monthly – the business doesn’t need to know everything, every wave. The way we build trackers is set up to accommodate all of this – we know that a business is not a monolith, and we design our research to ensure that you can get the most out of your tracker. There should always be room to change things up (within reason!).
Getting the balance right
Combining flexibility and consistent tracking in research projects can be challenging, but it’s crucial for delivering valuable insights without compromising data integrity – a tracker is still a tracker. We need to ensure that consistency is maintained to keep the tracking of KPIs as robust as possible. In many ways we are walking a tightrope between adaptability and reliability.
One danger is that the “flexible” section – being the juicy bit that tends to lead the discussion when the findings are presented – ends up overshadowing the tracking of KPIs to the point that the tracker becomes a series of repeated ad hoc studies. While this makes things more “fun”, it leads to baggy, unfocused questionnaires and debriefs – we end up drifting away from the original objective of the tracker. The all-important metrics go on the back burner, and we might even miss a downward trend or issue that could be important to address. For this reason, we suggest keeping the tracked metrics front and centre and allotting only, say, 3–4 minutes of the questionnaire to flexible questions. Asking too many ad hoc questions also risks a loss of data fidelity – if hot topics take over the structure of the questionnaire to the extent that questions end up shifting around, or hot-topic questions are inserted early, we are very likely to see the effects of participant priming in action.
In essence, a successful research tracker should be a dynamic tool that adapts to the evolving needs of the business while preserving the integrity of the tracked metrics. It’s about finding that delicate balance between flexibility and consistency, ensuring that the all-important KPIs remain at the forefront while still allowing room for exploration and adaptation as circumstances demand.
If you’ve got any questions about setting up a tracker or aren’t happy with the way you’re tracking now, get in touch at hello@sparkmr.com.