Design thinking #5: evaluation. Getting innovation roughly right before taking on unnecessary expense.
In our mini-series about the design thinking process, we’ve debunked a couple of myths and stretched a long-established method. While there’s no doubt the system works, we always think it’s worth calling out some details.
The evaluation of ideas is as important as every other stage, because it overlaps with both selection and testing. It matters, too, because concepts can now be selected using a structure we apply every day, making initial selection more precise than ever before.
Here are the criteria:
- Commercial factors: will it sell?
- Regulatory control: are you allowed to sell it?
- Technical feasibility: is it possible – and if not, what’s the solution?
- Timing: when will the market be ready for it?
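One common way to apply criteria like these is a simple weighted scoring matrix. The sketch below is purely illustrative, not a tool from the article: the weights, concept names and 1–5 ratings are hypothetical placeholders you would replace with your own insight.

```python
# Minimal sketch of a weighted scoring matrix for the four criteria.
# Weights, concepts and ratings below are hypothetical examples.

CRITERIA_WEIGHTS = {
    "commercial": 0.35,   # will it sell?
    "regulatory": 0.25,   # are you allowed to sell it?
    "technical": 0.25,    # is it possible?
    "timing": 0.15,       # is the market ready for it?
}

def score_concept(ratings: dict) -> float:
    """Weighted average of 1-5 ratings across the four criteria."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Two hypothetical concepts rated 1-5 against each criterion.
concepts = {
    "concept_a": {"commercial": 4, "regulatory": 5, "technical": 3, "timing": 4},
    "concept_b": {"commercial": 5, "regulatory": 2, "technical": 4, "timing": 3},
}

# Rank concepts by weighted score, highest first.
ranked = sorted(concepts, key=lambda name: score_concept(concepts[name]), reverse=True)
print(ranked)
```

The point of the structure isn't the arithmetic; it's that every concept is judged against the same criteria before any money is spent on external testing.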
However, boardroom and peer group opinion is only part of the answer. What we must also do is remove all emotional and functional bias from the decision and ask our consumers what they think. Established and practised market research (MR) companies can do this in their sleep. But MR can adopt a formulaic approach, often influenced by budget. Ask a market researcher to do a basic product test and that’s what you’ll get. Ask them to get creative and the price goes up. It’s not surprising – as consultants, their expertise is valuable.
Reaching the point of selection by applying a prescribed set of criteria based on insight gets you closer to a viable solution before committing to the cost of external testing. For smaller enterprises, this is both important and prudent. For larger organisations accustomed to commissioning outside support, there's no need to keep doing the same thing. In fact, we recommend you don't.
In our e-book, ‘See your data differently’ (you can download it here), we discuss how to release value from the investment you’ve already made in market research. Looking back at consumer reaction to earlier ideas may sound counter-intuitive, but it’s about as low-cost as you can get.
Testing or evaluating a concept is not a single act, either. At every step, refer back to the data – test, test and test again. And then, when you've developed the idea a bit more, do it over again. Whether you call this structured selection or evaluation and testing, it's all part of getting to a place that's better than roughly right, and decidedly better than totally wrong.