
As A/B testing tools gain more and more traction, many of you often wonder what exactly can be tested on your site. You know that testing will help you find the best ways to convert your visitors into customers, but when you open your A/B testing tool, you find yourself asking: “What should I test today?” So, you browse around your site looking for a few potential things to improve, or maybe you read a few articles about A/B testing to glean ideas that might apply to your site. But no luck – you’re lacking inspiration.

To avoid running fruitless tests, I recommend my OOH (Objectives, Opportunities, Hypothesis) method. I’ve developed this method to help you identify the best tests for your site in particular – meaning the tests most likely to significantly boost your site’s performance.

To help you better understand this method, let’s take an example that will serve as a common thread throughout this article.

Let’s say you are Conversion Optimisation Manager for a website selling customisable wallpaper.

On this site, users can design wallpaper to their hearts’ desires, choosing dimensions, texture, thickness, colour, patterns and more. They can also request a cost estimate for custom wallpaper whose options are not included in the online designer tool.

 

1. Objectives: Clearly define them from the start

One of the major pitfalls I’ve often noticed with A/B testing is over-optimisation.

Over-optimisation is when you aim to achieve great performance results on elements that have very little influence on the company’s overall performance.

In other words, it’s like dusting the nooks and crannies of a house that’s on fire… Sure, dusting is useful – but it does not help put out the fire, which is a decidedly more pressing problem.

Over-optimisation happens when we run tests arbitrarily, without really knowing why we’re testing one element and not another.

The first step is therefore to define the test’s objective: What’s the purpose of my test? What do I want to change? What do I want to improve?

But this alone is not enough. The objective should be defined according to your company’s digital strategy.

With data from your digital analytics tool, you’ll have established a dashboard enabling you to monitor and check the performance of your digital activities. The dashboard shows different strategic and operational objectives, with KPI value(s) that tell you if these objectives have been reached. Whenever an objective has not been reached, you know there’s a need to test corrective actions.
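To make this concrete, here’s a minimal sketch of such a dashboard check in Python. The KPI names, values and targets are invented for illustration – in practice they would come from your digital analytics tool’s exports or API.

```python
# A minimal sketch of the dashboard check described above.
# All KPI names, values and targets here are hypothetical.

kpis = {
    # KPI name: (current value, target value)
    "estimate_request_rate": (0.012, 0.020),  # estimate requests / visits
    "order_conversion_rate": (0.031, 0.030),  # orders / visits
}

def unmet_objectives(kpis):
    """Return the KPIs below target, i.e. the objectives that call for corrective tests."""
    return {name: (value, target)
            for name, (value, target) in kpis.items()
            if value < target}

for name, (value, target) in unmet_objectives(kpis).items():
    print(f"{name}: {value:.3f} below target {target:.3f} -> focus your next tests here")
```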

This is how Netflix practices A/B testing, according to Todd Yellin, VP of Product Innovation at Netflix: “We only do incremental A/B testing on subjects having an impact on our business: Our company strategy guides our experimentation.”

Tying your tests back to your company’s digital strategy allows you to base your efforts on what is truly important for your company. This link to your strategy will give your tests true meaning, your work will be taken seriously by management, and you won’t fall into the trap of over-optimising.

Let’s go back to our example: The latest dashboard for the custom wallpaper website shows that the strategic objective of “increasing the number of requests for a cost estimate” is showing unsatisfactory results: The number of requests for online cost estimates is not rising as quickly as the number of orders. As a result, you decide to focus your next tests on improving this strategic objective.


2. Opportunities: Identify the best ones

A clearly defined objective will help you identify the subject of your next tests.

Now, it’s time to analyse audience data in your digital analytics tool in order to find possible causes of poor performance:

  • Navigation and pathway data shows where users get stuck on your site, or go back and forth between two pages – both signs of confusion,
  • Performance evolution data indicates when the poor performance started, and whether it’s part of a downward trend,
  • Year-on-year comparison data tells you if it’s a result of seasonality,
  • Marketing source data helps you identify ineffective marketing campaigns, or any contradictions between what your ad promises and what your site offers,
  • Segmentation data lets you isolate the user groups most affected by the problem,
  • E-commerce data sheds light on the product or product category where the problem shows up.

All these analyses and reports allow you to focus your tests on the source of the problem in question. By analysing your audience data, you can identify the best opportunities for resolving this problem.

Finally (and in the same vein of testing things that truly matter), with these analyses and reports, you can verify that a sufficiently large portion of users are affected by this problem.

Back to our example: You analyse audience data for the wallpaper site. Navigation data tells you that requests for an estimate are mostly made from the homepage, or product category homepages, but very rarely from individual product pages. But on product pages, users might be interested in requesting an estimate after they’ve started creating their wallpaper, if they haven’t found their desired options in the default list. This presents a real opportunity to grow requests for estimates from product pages. So, now you know where to run tests: on product pages.
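As a rough illustration, here’s what that navigation analysis might look like once the data is exported from your analytics tool – the figures, column names and page types below are invented:

```python
# A hypothetical sketch of the navigation analysis above, using pandas.
# The visit data and figures are invented for illustration.
import pandas as pd

visits = pd.DataFrame({
    "page_type": ["homepage", "category", "product",
                  "product", "homepage", "product"],
    "requested_estimate": [1, 1, 0, 0, 0, 0],  # 1 = estimate requested
})

# Estimate-request rate per page type: where do requests actually come from?
rate_by_page_type = visits.groupby("page_type")["requested_estimate"].mean()
print(rate_by_page_type)
# A near-zero rate on product pages, despite their traffic, points to the
# opportunity identified above: grow estimate requests from product pages.
```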

 

3. Hypothesis: Determine which one to test first

Once you’ve identified the element to work on, the last step involves finding out why the element in question is not convincing enough for users. What’s happening? Why are users not continuing their visits? What’s stopping them? Why aren’t they clicking?

This is the trickiest step. But once again, audience data is there to guide us. Data gives us clues on what’s happening in users’ minds – their motivations, the context, their expectations…

A few examples of reports and analyses that can be useful here:

  • Page display rate (ScrollView), to identify which elements are unseen by users,
  • On-page clicks (ClickZone), to know which links users favour most,
  • Next pages viewed (after the page in question), to understand what’s happening in users’ minds,
  • Search terms entered into the internal search engine, to get an idea of user intentions and expectations,
  • Time spent on page, to uncover any problems due to confusion or incomprehension.

By exposing the possible reasons behind user behaviour, it is easier to determine which hypothesis to test, and how to best treat the page in question.
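For instance, here’s a minimal sketch of the “next pages viewed” report – the visit paths and page names are invented for illustration:

```python
# A minimal, hypothetical sketch of the "next pages viewed" report:
# count where users go right after the product page.
from collections import Counter

visit_paths = [
    ["home", "product", "faq"],
    ["home", "product", "contact"],
    ["category", "product", "faq"],
    ["home", "product", "order"],
]

next_pages = Counter()
for path in visit_paths:
    for current_page, next_page in zip(path, path[1:]):
        if current_page == "product":
            next_pages[next_page] += 1

print(next_pages.most_common())
# Frequent exits towards "faq" or "contact" would suggest users are
# looking for information the product page fails to provide.
```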

In our wallpaper website example, we’ve observed that requests for cost estimates from product pages are not high enough. What are the possible causes? Perhaps users do not realise it’s possible to request an estimate, or perhaps they think it will be too complicated, long, or expensive.

There is no “request an estimate” button on the product page. This was a conscious choice, made to avoid reducing the number of orders: the site manager is concerned that a “request an estimate” button near the wallpaper customisation tool might disconcert users, making them hesitate and take no action at all – neither place an order, nor request an estimate. So, you suggest testing the implementation of a “request an estimate” button on the product pages.

Going back through the different steps:

  • Observe the problem: There are not enough requests for an estimate from product pages.

  • Possible cause: Users don’t realise it’s possible to request an estimate, or they think it will be too complicated, long, or expensive.

  • Hypothesis to test: Implementing a “request an estimate” button, accompanied by a few reasons why they should request one, will increase the number of estimates requested, without decreasing the number of orders placed.

  • Changes to be made to the test page: Implement a button marked “Get a quick estimate”, and insert text near it: “We’ll respond within 24 hours”, “Adapted to all budgets”.
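Once the test has run, you’ll want to check that the lift in estimate requests is statistically significant – and run the same check on orders to confirm they haven’t dropped. As a rough sketch (all counts below are invented), a two-proportion z-test in Python could look like this:

```python
# A sketch of reading out the test: a two-sided z-test on the difference
# between two proportions. All visit and request counts are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Control (no button) vs variation (with the "Get a quick estimate" button)
z, p = two_proportion_z_test(success_a=40, n_a=5000,   # estimate requests, control
                             success_b=85, n_b=5000)   # estimate requests, variation
print(f"estimate requests: z = {z:.2f}, p = {p:.4f}")
```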

Typically, many of us start our tests at this third step, tempted to jump straight to what’s not working and what must be tested. But by following the OOH method outlined here, we take a more effective approach to A/B testing and avoid trying variations without really knowing which problem they address, or which hypothesis they’re built on.

 


 

You’ve seen it throughout this article: Relying on a digital analytics tool is a constant when taking a professional approach to A/B testing. Testing cannot be carried out without understanding and practising data analysis – it’s essential for determining the objectives, placement and content of your tests. The next time you ask yourself what you’re going to test, I invite you to try the OOH method – it will help you advance quickly in your work and have a real influence on your company’s results.

Author

Head of Client Success – Generaleads. Benoit has a master’s degree in Economics from the University of Bordeaux and 10 years of web analytics experience developed while at AT Internet. In early 2015, Benoit joined the Google AdWords specialist agency GENERALEADS as Head of Client Success. In parallel, he’s working on the start-up GetLandy, the first landing page creation tool designed for traffic managers.
