Presentation of the Universal Hypothesis Framework

"What is well conceived is clearly stated."
- Nicolas Boileau

At a time when every decision can influence a company's trajectory, a methodical, data-driven approach is essential. At Henkan & Partners, we have designed a Universal Hypothesis Framework that optimizes every stage of the testing process. Our unique methodology incorporates elements that are often overlooked, ensuring that each test is aligned with strategic objectives, supported by evidence and designed to generate tangible results.

This framework is designed for all those involved in digital experimentation, such as product owners, designers, analysts, researchers and many other key roles. It aims to provide a universal language across all teams, enabling better prioritization, understanding and impact of initiatives.

Why we launched this framework

We launched this framework to address the challenges of inconsistent testing methods and unclear objectives. By standardizing our approach, we ensure that every experiment is based on strategic objectives, supported by solid evidence and capable of delivering actionable insights.

The most comprehensive framework on the market

Our Hypothesis Framework sets itself apart by integrating key components that are often overlooked. It starts by aligning each hypothesis with the company's objectives or OKRs, ensuring that each test is intentional and impactful. We emphasize the importance of behavioral predictions and supporting research to make our hypotheses robust and credible. What's more, our framework includes precise metrics to measure success and a clear plan for post-test actions, making it a comprehensive tool for continuous improvement.

The Hypothesis Framework Template

The Henkan & Partners Hypothesis Framework

Framework overview

Goal Alignment

We start by aligning each hypothesis with our corporate objectives or OKRs. This ensures that every test is directly linked to our strategic priorities, solving the problem of unfocused experimentation and scattered efforts.

Hypothesis Formation

Formulating hypotheses based on specific changes helps to clarify what is being tested. This solves the problem of vague testing and ensures that proposed changes are well-defined and measurable.

Behavioral Prediction

We predict the expected behavior of users following the changes. This step is crucial to understanding potential impacts and setting clear expectations, thus avoiding surprises when analyzing results.

Supporting Evidence

Our hypotheses are always supported by relevant research and data. This reinforces the credibility of the tests and helps solve the problem of decisions made on the basis of unverified intuitions.

Metric Definition

We define clear metrics to measure success. This ensures that we can objectively assess the impact of changes, solving the problem of subjective interpretation of results.

Experimental Design

We choose the most appropriate testing method, whether A/B tests, multivariate tests or other experimental approaches. This flexibility enables us to adapt our method to the specific needs of each hypothesis.
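When sizing an A/B test like the ones in this article, the standard two-proportion z-test approximation gives a quick estimate of how many users each group needs. The sketch below is illustrative only: the baseline conversion rate, significance level and power are assumed defaults, not values prescribed by the framework itself.

```python
import math
from statistics import NormalDist  # Python 3.8+

def sample_size_per_group(p_base: float, rel_lift: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per group to detect a relative lift
    in a conversion rate with a two-sided two-proportion z-test."""
    p_new = p_base * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    n = (z_alpha + z_power) ** 2 * variance / (p_new - p_base) ** 2
    return math.ceil(n)

# Assumed 5% baseline conversion rate, detecting a 15% relative lift:
print(sample_size_per_group(0.05, 0.15))  # roughly 14,000 users per group
```

A calculation like this is how group sizes such as "10,000 users per group" are justified before a test starts, rather than chosen arbitrarily.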

Results and Actions

Finally, we describe actions based on the results of the experiment. This enables us to make informed decisions and iterate on the process, thus solving the problem of post-experimental inaction.
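The seven components above can be captured as a simple template that teams fill in before running a test. The following Python sketch is our own illustration (the field names and `Hypothesis` class are hypothetical, not part of any official tooling), using the checkout example from later in this article:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One experiment expressed with the framework's seven components."""
    goal: str                # Goal Alignment: the objective or OKR this test serves
    change: str              # Hypothesis Formation: the specific change being tested
    predicted_behavior: str  # Behavioral Prediction: expected user behavior
    evidence: str            # Supporting Evidence: research or data backing the test
    metrics: str             # Metric Definition: how success will be measured
    design: str              # Experimental Design: test type, duration, group size
    next_actions: str        # Results and Actions: what happens after the test

    def as_statement(self) -> str:
        """Render the hypothesis as a single readable sentence."""
        return (f"To achieve {self.goal}, we hypothesize that {self.change} "
                f"will lead to {self.predicted_behavior}. "
                f"Evidence: {self.evidence}. Metrics: {self.metrics}. "
                f"Design: {self.design}. Next: {self.next_actions}.")

checkout = Hypothesis(
    goal="increasing conversion rates by 15% this quarter",
    change="simplifying checkout from 4 steps to 2",
    predicted_behavior="more completed purchases due to reduced friction",
    evidence="studies and user feedback linking fewer steps to higher conversion",
    metrics="conversion rate (primary), average order value (secondary)",
    design="4-week A/B test, 10,000 users per group",
    next_actions="roll out if conclusive; otherwise analyze and iterate",
)
print(checkout.as_statement())
```

Forcing every field to be filled in is what makes the framework act as a shared language: a hypothesis with an empty `evidence` or `next_actions` field is visibly incomplete before the test ever runs.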

Examples

  • To achieve our goal of increasing conversion rates by 15% by the end of the quarter, we hypothesize that simplifying the checkout process from 4 steps to 2 will increase conversion rates. We believe this change will reduce friction at checkout, leading to more purchase completions. This is supported by studies and user feedback indicating that fewer steps increase conversions. We will measure conversion rate as a primary metric, and average order value as a secondary metric. The experiment will be conducted as a 4-week A/B test with 10,000 users per group. If the experiment is conclusive, we will implement the new checkout process for all users; if not, we will analyze the data to understand the shortcomings and iterate on the process for further testing.

  • To achieve our goal of improving the user experience on our website, we hypothesize that introducing an improved search bar will increase user engagement. We believe that this change will make it easier for users to find what they're looking for, thereby increasing the time spent on the site. This is supported by studies showing that effective search tools increase engagement. We will measure the average time spent on the site and user satisfaction. The experiment will be conducted in the form of user test sessions, where participants will perform specific tasks on the site while we observe their interactions. If the experiment is conclusive, we will implement the new search bar for all users; if not, we will gather further feedback to refine the search bar and carry out further tests.

  • To achieve our goal of increasing user engagement by 20% by the end of the quarter, we hypothesize that personalizing the home page according to user preferences will increase engagement. We believe this change will make content more relevant to each user, increasing time spent on the site and interactions. This is supported by studies showing that personalization improves user engagement. We will measure the average time spent on the site and the number of interactions as key metrics. The experiment will be conducted as a 6-week A/B test with 8,000 users per group. If the experiment is conclusive, we will deploy personalization for all users; if not, we will analyze the data to refine our approach and test again.

  • To achieve our goal of increasing organic traffic by 25% over the next six months, we hypothesize that optimizing the content of main pages with relevant keywords will increase our visibility on search engines. We believe this change will improve our SEO ranking, attracting more organic visitors. This is supported by research into SEO best practice and case studies showing significant increases in traffic through keyword optimization. We will specifically target the keywords "online shopping", "fast delivery", and "quality products". We will measure organic traffic and ranking for these keywords as key metrics. The experiment will be conducted over a 6-month period, with monthly analysis of results. If the experiment is conclusive, we will apply these optimizations to other pages; if not, we will adjust our SEO strategy and test new approaches.

  • To achieve our goal of increasing user satisfaction by 30% by the end of the year, we hypothesize that adding an AI-based virtual assistant to help users navigate and find products will increase their satisfaction. We believe this feature will make navigation more intuitive and reduce the time needed to find specific products. This is supported by studies showing that virtual assistants improve the user experience. We will measure user satisfaction and average product search time as key metrics. The experiment will be conducted as an A/B test over 8 weeks, with 5,000 users per group. If the experiment is conclusive, we will deploy the virtual assistant for all users; if not, we will gather feedback to improve the AI and carry out new tests.

Alexandre Suon

Alexandre is the Managing Partner at Henkan & Partners, with a decade of experience working with digital executives from 70+ companies, including 12 Fortune 500, 4 FTSE 100, and 5 CAC 40.
