When do user insights expire? Use the ACE method to find out
At Qualdesk, we’re building a user insight repository, and so we think a lot about what makes an insight an insight, or what makes an insight ‘valid’.
One concern is whether this validity might change over time. Is there a time in an insight’s life when it’s no longer relevant? Or what if it’s actually misleading or wrong? How do we spot when insights get to that point, and how do we decide what to do about it?
This blog post attempts to answer those questions, and sets out a framework you can use to determine when an insight has expired.
To make consistent decisions about insight validity and expiry, it’s important to establish some criteria to be shared across your team, and preferably the whole organization. With these criteria in place, you reduce the risk that insights will be ‘expired’ prematurely by one person, making them unavailable to others.
We’ve defined three criteria that we incorporate into what we call the ACE method for determining whether or not an insight has expired:
- Actors: the people whose views the insight is based on
- Context: the questions, prototypes and materials used in the original research
- External factors: the wider world in which the insight was developed
Let’s look at each of these in turn:
Actors
Think about the research subjects involved in developing the insight. This will almost certainly include customers, but might also include other stakeholders (for example, an advocacy group in a sector your business operates in).
Then ask the question: Are these actors still representative of the stakeholder group your organization currently needs to engage with?
Assign the insight a score from 1-5 against this attribute, with 1 indicating that they’re largely irrelevant and 5 indicating that they’re highly representative.
For example, if:
- your company now has a different target audience to the one it was addressing when the insight was originally developed
- there’s a new and important political movement with views you need to take into account, that didn’t exist previously
you should assign a lower score.
Context
Now, turn your attention to how you uncovered the insight, and think about the context in which you did so. What questions did you ask, and/or what did you show to your research subjects to help explore their views?
Then ask the question: Is this context still relevant now? If we were carrying out the same research today, would we ask the same questions? Would we show people the same prototype or mockup?
Again, assign the insight a score from 1-5 against this attribute, with 1 indicating that the context has changed significantly and 5 indicating that it’s largely the same.
For example, if:
- you work in a tech company and the prototype you were using when you originally developed the insight bears no resemblance to your software today
- you’ve learned more through other research that means that you wouldn’t have used the same interview questions
you should assign a lower score.
External factors
Third, think about the outside world – the business or competitor landscape, the political sphere, shifts in culture and so on.
Then ask the question: What’s changed?
Again, assign the insight a score from 1-5 against this attribute, with 1 indicating that the external landscape has changed significantly and 5 indicating that it’s largely the same.
This criterion is likely to be largely time-dependent – insights you developed a long time ago are more likely to score lower here – but, for example, if:
- a big new competitor has entered the market and changed the dynamics of customer behaviour or altered preferences substantially
- a legislative change means that your organization now has to reconsider how it works
you should assign a lower score, even if the insight you’re looking at isn’t particularly old.
The ACE score
Add up the scores for the three criteria to get a total between 3 and 15, then use the following scale:
3-5: Insight has expired
6-11: Insight should be treated with caution
12-15: Insight is usable
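As a minimal sketch of the scoring above (the function name, labels and error handling are illustrative, not part of any Qualdesk product):

```python
def ace_score(actors: int, context: int, external: int) -> str:
    """Classify an insight from its three ACE scores (each 1-5)."""
    for score in (actors, context, external):
        if not 1 <= score <= 5:
            raise ValueError("each ACE score must be between 1 and 5")
    total = actors + context + external
    if total <= 5:
        return "expired"
    if total <= 11:
        return "treat with caution"
    return "usable"

# An insight whose actors are still representative (4) but whose
# context (2) and external landscape (2) have shifted scores 8 in total:
print(ace_score(4, 2, 2))  # → treat with caution
```

Note that because each criterion scores at least 1, the lowest possible total is 3 – the "expired" band effectively covers totals of 3 to 5.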
Applying the ACE score in practice
The next obvious question is when to perform this test. There are two possible approaches here:
- Reactive: when browsing or searching insights to answer a specific question, weed out those that are no longer useful as you find them
- Proactive: put in place a system to review insights on a regular basis, and to expire those which are no longer useful
The first approach reflects the way that most organizations work today, and you’ve probably done this before.
For example, when starting a new project, you might look back over previous research findings to see whether you’d uncovered anything in a previous study that relates to the work you’re about to start. While doing this, you’ve probably spotted things that are obviously out of date, and perhaps also identified areas of caution – insights that need further testing to make sure they’re still valid.
But what about the ad-hoc questions that crop up outside of the research project cycle? Those inbound requests from other teams that come in the form of “What do we know about X?” They often demand fast turnaround times, where you don’t necessarily have the opportunity to review insights and check validity.
It’s in this sort of situation that a more proactive approach could help. If you’re reasonably confident that any insight in your user insight repository is still valid, pulling together content to respond to one of those requests should be a straightforward job from a validity perspective.
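A proactive review could be as simple as a periodic sweep over the repository that bands every insight by its ACE total. This is a hypothetical sketch – the record shape and example titles are illustrative only:

```python
# Hypothetical sketch of a proactive review sweep over a small insight
# repository; each record carries its three ACE scores
# (actors, context, external), each from 1 to 5.
insights = [
    {"title": "Users want exports", "scores": (5, 4, 4)},
    {"title": "Pricing page confuses trial users", "scores": (3, 2, 1)},
    {"title": "Old onboarding flow feedback", "scores": (1, 1, 2)},
]

statuses = {}
for insight in insights:
    total = sum(insight["scores"])  # ACE total: actors + context + external
    if total <= 5:
        statuses[insight["title"]] = "expired"
    elif total <= 11:
        statuses[insight["title"]] = "treat with caution"
    else:
        statuses[insight["title"]] = "usable"

for title, status in statuses.items():
    print(f"{title}: {status}")
```

Run on a schedule, a sweep like this surfaces the "expired" and "treat with caution" insights for archiving or re-testing, so that anything left untouched in the repository can be trusted when a fast-turnaround request comes in.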
Regardless of which approach you choose, what’s most important is that you have a consistent way of determining insight validity across your organization, and that’s what the ACE method can help you do.