Testing business model innovation
Article two in a two-article series about design thinking and business model innovation with new technology.
In our work, we’ve often seen that our clients and partners felt the result of modelling the Value Proposition Canvas and Business Model Canvas was a depiction of “reality” or, a bit better but still not quite right, a description of the “plan” behind a product or service.
It’s important, though, to realize that what you have is a model: a somewhat tangible depiction of a concept or a set of business ideas. How well that model maps to “what could be” is quite possibly very uncertain.
So, how do you take the next step? How do you find the best way forward? Many of us are unconsciously biased towards what we think is either the easiest or the most interesting. We’ve seen that many entrepreneurs with an engineering background focus intently on the technical aspects and technical feasibility of a product or service. The purely business-oriented teams take a different, and perhaps somewhat more successful, approach: aspects like financial viability and the market are among the first to be explored. Lastly, the design-oriented teams work extensively with the value proposition, testing and re-testing the usability and applicability of a solution against real and meaningful problems. You can still get lucky and find all the right answers, but the risk of missing important aspects at the earliest and best possible time is big.

One approach to overcoming this, offered and explored in Strategyzer’s new “Testing Business Ideas”, matched our own experience. It is now integrated in our own “From Idea to Business” methodology and is described here, in this second part of the small “Design Thinking and Business Model Innovation” series.
Identifying your assumptions reveals your blind spots
What most of us bring with us, also when designing new products or services, are assumptions about how the “real world” functions. We might have great and well-founded theories – for example about physics, materials, prices, logistics and supply chains. But innovation is not about doing something that has already been done and proven. There will be assumptions and hypotheses supporting your idea. Designers know this, and this (as described in part one of this article series) forms the basis of design thinking: being able to identify what needs to be tested, figuring out how to test it, and then approaching the results of the tests critically and with a learning mindset. This approach has not been widely adopted in other fields, though, and especially not as a formalized and acknowledged way of modelling and testing business models. We have seen small steps in this direction: for instance, the “Lean Startup” approach (Eric Ries) contains something very similar with its “build, measure, learn” credo. What we gain with the approach described here is a way to connect our models of our value proposition and business model to a prioritized list of important hypotheses, which must then in turn be tested and validated. With a business concept split into 9 parts for the business model and 6 parts for the value proposition model, you have a great way to look critically at your thinking and assumptions in all parts – also the ones outside your comfort zone.
Looking at your models, now is the time to pose the question to yourself and your team: what needs to be true for this to work? Not only at a macro level, but for each part of the value proposition and the business model. It’s difficult work, but important, as it gives your innovation team the best possible chance to identify potential problems at the earliest possible time. Figure 6 below illustrates how you can traverse your business model and value proposition canvas and identify assumptions.
In our experience, some of the basic assumptions that influence technology-driven innovation are assumptions like:
- This can make us money – i.e. we have sufficient insight into the total cost of our operations and supply chain with this new technology.
- We can build this – at scale and with sufficient quality.
- The added cost or complexity involved still offers value to our users and customers.
Mind you, the above assumptions are at a meta level – what the assumption-exploration process allows you to do is dive deeper and detail them with respect to your concrete concept and model. We’ll dive a bit deeper into these in the following paragraphs, and end with a couple of examples from the real world.
Find a path to validation through testable hypotheses
We all tend to be biased by our existing world view. The assumptions with which we function and navigate our everyday lives pervade everything we do – both as individuals and as part of a company culture. Of course, the example above, with the engineering team focusing on specs, the business team on the market, and the design team on the users, is an over-simplification – but not too far from the everyday truth of most companies. Therefore, this step – finding your assumptions and formulating hypotheses around them – is both very difficult and very important. Hidden in your assumptions you might find both the success and the spectacular failure of your new product or service.
The first thing to do after your assumption mapping, then, is to look through each assumption and formulate it as a testable, precise and discrete hypothesis.
- A testable hypothesis can be validated – i.e., either supported (validated) or refuted (invalidated) by some evidence. Do not mistake “supported” for “proven” in any scientific sense, by the way; that is, for most, basically unobtainable. But you can have your assumptions either supported or refuted with varying degrees of certainty – and knowing the strength of your evidence, pointing in either direction, is an important part of moving forward towards implementing (or dropping) your concept.
- A precise hypothesis specifies, in sufficient detail to be valuable, the subject, domain, amount and/or timing of the hypothesis. If the details of the hypothesis are too broad, you will not get really useful or actionable data out of it.
- A discrete hypothesis doesn’t try to validate more than one thing at a time. It is concrete and directed towards a meaningful, distinct aspect of your assumption. In this way, the results are less prone to discussion and interpretation. Sometimes, this means that you need to split an assumption into several supporting hypotheses.
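As an illustration only (this structure and all names in it are ours, not part of the Strategyzer method), the three criteria can be captured in a small data structure that forces each hypothesis to name a single metric, a concrete threshold, and a time window:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One testable, precise, discrete hypothesis derived from an assumption."""
    assumption: str     # the broader assumption this hypothesis supports
    statement: str      # what we believe to be true
    metric: str         # ONE measurable quantity (discrete: one thing at a time)
    threshold: float    # the value that counts as support (precise: amount)
    deadline_days: int  # window for collecting evidence (precise: timing)

    def is_supported(self, observed: float) -> bool:
        # Testable: observed evidence either supports or refutes the statement.
        return observed >= self.threshold

# Hypothetical example: one discrete hypothesis split out of a vague assumption.
h = Hypothesis(
    assumption="Customers will pay a premium for the sensor-equipped version",
    statement="At least 20% of trial users choose the premium tier",
    metric="premium_conversion_rate_percent",
    threshold=20.0,
    deadline_days=30,
)
print(h.is_supported(27.5))
```

The point of the sketch is the constraint, not the code: if you cannot fill in a single metric and a threshold, the hypothesis is not yet precise or discrete enough to test.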
From our cases, here is an example:
Hopefully, you can see the difference here – and the value. First finding your assumptions and then formulating hypotheses around them gives you a unique chance to speed up some of the learnings you might otherwise only find after you’ve introduced your product or service into the market. We all know by now that “pivoting” is almost expected by investors and startups as something that will happen in every startup’s early life. Pivoting happens when your assumptions about the market, your product or your users turn out to be wrong. This method gives you a chance to pivot cheaper, earlier, and faster.
Mapping and prioritising what is important and unknown
After the hypothesis formulation process, the next phase of your innovation journey needs careful planning; now you need to prioritise your assumptions. What is important, and what is less important? What do you already have evidence supporting? What is completely unknown? You can map this out by taking your assumptions and placing them in a two-by-two grid, with importance along one axis and evidence strength along the other. This exercise will leave you with a pretty clear idea of what needs to be investigated – and what is less important.
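A minimal sketch of that two-by-two grid, using hypothetical 0–1 scores that a team would agree on together (the scoring scale and quadrant labels are ours, for illustration):

```python
def quadrant(importance: float, evidence: float, cutoff: float = 0.5) -> str:
    """Place an assumption in the two-by-two grid.

    importance, evidence: team-agreed scores in [0, 1].
    The important-but-weak-evidence quadrant is what needs testing first.
    """
    if importance >= cutoff:
        return "test first" if evidence < cutoff else "keep an eye on"
    return "park for now" if evidence < cutoff else "ignore"

# Hypothetical assumptions, scored as (importance, evidence strength).
assumptions = {
    "We can build this at scale": (0.9, 0.2),
    "Users value the added complexity": (0.8, 0.7),
    "Packaging colour matters": (0.2, 0.1),
}

for name, (imp, ev) in assumptions.items():
    print(f"{name}: {quadrant(imp, ev)}")
```

The hard part in practice is not the bucketing but honest scoring – which is why the next paragraph warns against burying assumptions behind inflated evidence estimates.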
Be careful with this part of the process, and don’t let your biases influence your judgment about whether you have evidence for a hypothesis or not. Also, let everyone on the team chip in on which assumptions are important. Preferably, have somebody outside your team help and challenge you on this. While the above map is an important tool for getting your priorities sorted before embarking on the next steps, it is also a convenient place to “bury” assumptions behind false estimations of either supporting evidence or importance. In the Strategyzer book “Testing Business Ideas”, you can find a lot of good tips on how to verify the strength of available evidence – and what strength you should be looking for, depending on which phase of product development you are in. What is important might even change during your product development, so make sure to revisit the mapping exercise often.
Create a test plan – and iterate!
After your hypothesis forming and prioritization, it is time to get some answers! In what we could call “the clueless corner” of the two-by-two map of assumptions – the corner with important but un-validated assumptions – we have a prioritization of the assumptions and can start building evidence for (or against) them. After 10-15 years of agile startups, “pretotyping” and design-thinking work in the technology domain, there is an abundance of methods and tools for testing out ideas – within the feasibility, viability and desirability domains alike. The Strategyzer book lists a lot of them, categorized by product type, evidence strength and domain (feasibility, viability or desirability). In IdemoLab at FORCE Technology, we have worked with many of the same methodologies throughout the last 10 years. Typically, feasibility test methods revolve around different tools for prototyping, viability tests around pretotyping or crowdfunding (mock-selling), and desirability tests around different kinds of user tests. Here, it’s important to be critical of the cost-to-evidence ratio; don’t spend too much money on tests that will give you little evidence – or a lot of evidence about something that is not important. Also, consider the different phases of your product development journey; at the start, it might be fine to focus on desirability and feasibility, and then start looking at viability as soon as you have some tangible evidence that you can build something meaningful.
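The cost-to-evidence trade-off can be sketched as a simple ranking. The scoring scales and example tests below are hypothetical (our own, not from the book): each candidate test gets an estimated cost, an expected evidence strength, and the importance of the hypothesis it addresses, and we prefer tests that buy the most relevant evidence per unit of cost:

```python
from typing import NamedTuple

class CandidateTest(NamedTuple):
    name: str
    cost: float               # estimated cost, e.g. in person-days
    evidence_strength: float  # expected strength of the evidence, 0..1
    importance: float         # importance of the hypothesis tested, 0..1

def rank_tests(tests: list[CandidateTest]) -> list[CandidateTest]:
    """Cheapest strong evidence about important hypotheses first."""
    return sorted(
        tests,
        key=lambda t: (t.evidence_strength * t.importance) / t.cost,
        reverse=True,
    )

# Hypothetical candidate tests for the same concept.
candidates = [
    CandidateTest("Landing-page mock-sell", cost=3, evidence_strength=0.6, importance=0.9),
    CandidateTest("Full working prototype", cost=40, evidence_strength=0.9, importance=0.9),
    CandidateTest("Survey on packaging", cost=2, evidence_strength=0.3, importance=0.2),
]

for t in rank_tests(candidates):
    print(t.name)
```

Note how the full prototype ranks last despite producing the strongest evidence: early on, several cheap tests usually beat one expensive one, which matches the advice to defer viability testing until desirability and feasibility look plausible.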