Lean Research Series Part 2: Experiment Criteria

        Last month, we shared two powerful mindset shifts we’re seeing in the research arena. This month, we’re tackling questions our clients often ask us: “What makes a good experiment?” and “How do we know we’re doing this experiment-driven research right?”

        Coming from the traditional research world, where specific methods are qualified for testing specific objectives, this experiment-driven research world can feel like a bit of a “wild west” where anything goes. Early on, everything in Lean Startup revolved around developing an MVP, or minimum viable product. That was a pretty linear solve: go make a low-fidelity version of the product that a consumer can interact with, insert that prototype into their life, and see what they do with it. While this is a great experiment type, it’s important to note that it’s only one potential experiment type. Things felt off for us at The Garage Group pretty early on in MVP thinking: sometimes building an MVP didn’t actually give data on the riskiest assumption, and/or building an MVP meant over-building and taking too much time or money when another experiment type could have been leaner and still high-rigor.

        Here is a core set of principles that provides guardrails when thinking about what sort of experiment to construct to test your riskiest assumption (or the biggest risk to your business model’s desirability, viability, or feasibility – check out this post for more on identifying your riskiest assumption).

        1. In-the-Wild or Recruited? The reality of BigCo front-end innovation is that it might not make sense right out of the gate to test your riskiest assumption in a public arena. The great news is that there are a variety of experiments that can be run in a recruited context. Ultimately, it’s about understanding the benefits and drawbacks of In-the-Wild vs. Recruited research contexts and choosing the right scenario for the assumption being tested and where your team is on its trajectory toward launch. Check out the table below for some of the ways we think about comparing the two options.

        2. Rigor, Cost, Time, & Context. Any given assumption will have a variety of ways that an experiment could be run. We’ve seen our BigCo clients’ experiments be less helpful to the ultimate decision-making process when the experiment type was too rogue for the growth board, leadership team, or other decision-making entity. We first heard the high-rigor, low-cost, low-time idea from Strategyzer and David Bland. Giff Constable, author of Testing with Humans, teaches teams to brainstorm at least six different potential experiment types. Then, stepping back, teams should ask themselves, “Which of these is the highest-rigor, lowest-cost, lowest-time experiment that will yield data our organization will accept?” This is the context piece. Taking into consideration what those decision-makers are willing to accept can save the team from headaches later on.

        3. Building a Body of Evidence. As we mentioned in the first post of this series, the beautiful thing about experiment-driven research is that the pressure is not on one singular test to be the end-all, be-all decision-making tool. Instead, it’s about building a body of evidence that helps inform decision-making along the way, as the product or service is iteratively de-risked. This is another big mindset shift for traditional market researchers, because we’re used to thinking of a singular test to validate the product or service, and we may even base forecasting off of that test. A massive benefit of this “building a body of evidence” approach is that uncertainty is reduced (quickly!) and based on the iterative arc of the product or service, not the version that was “validated” several months (or years!) prior to launch.

        “You are looking for clues that help confirm or deny your assumptions….your goal is not to compile statistically significant answers. Instead you want to look for patterns that will help you make better decisions.” – Giff Constable

        We’ll be back next month with another installment in this Lean Research series: examples of types of experiments that we’ve run with clients over the past several years.

        We don’t want to leave you hanging, though! If you’re on a BigCo team and are working to implement this experiment-driven Lean Research but are hitting hang-ups, we’re offering three free 1-hour coaching consultations with our Senior Director of Lean Research in September. Email renee@thegaragegroup.com with the high-level challenge you’re running into or the goal you’d like to achieve, and we’ll use the coaching time to help unlock fresh thinking.

        Types of things we’ve coached Insights and Innovation function clients through:
