A/B testing is a commonly used method to increase the effectiveness of your website, email or app. The technique: Show a percentage of your visitors something different from your other visitors and see how these groups differ in behavior. But how do you set up an A/B testing process? Who and what do you need for A/B testing? Here are 5 tips to get off to a strong start!
Tip 1: Clear measurable KPIs
Make it clear when you and your team are successful, and ensure that success is measurable. When will you and your team celebrate a great result? Is it at a certain number of online orders, newsletter sign-ups, or completed online self-service actions? Ensure that your objectives are supported by the entire team and are easy to measure in your analytics tool. Does your team also focus on customer-experience metrics? Set a target for each metric: that way, you know what to aim for and how well you are doing.
Tip 2: Record your optimisation strategy
Optimisation can be done using many different techniques. Ask yourself: why am I choosing A/B testing? Perhaps it complements UX or questionnaire research that is already running. What do you want to get out of A/B testing that you don't get from other research?
Also consider what costs you are willing to incur with A/B testing. Only 1 in 8 A/B tests (source) is successful. It is useful to record how much time (and therefore money) each test costs and how much a successful test yields once it has been rolled out to production.
That way, you can set an ROI target for your A/B testing programme. Be aware, however, that a strict ROI focus can make test ideas overly cautious. Is it bad if an expensive, experimental test idea doesn't make money but does yield valuable insights? The answer differs per company.
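To make this concrete, the expected ROI of a test programme can be sketched with a back-of-the-envelope calculation. All numbers below (cost per test, yield of a winner, number of tests) are made-up assumptions for illustration; only the 1-in-8 win rate comes from the article.

```python
# Rough ROI sketch for an A/B testing programme (illustrative numbers only).
# Assumptions: each test costs a fixed amount of team time, roughly 1 in 8
# tests wins, and a winning test yields an uplift once rolled out to production.

cost_per_test = 2_500        # assumed: ~5 person-days of team time per test
win_rate = 1 / 8             # from the article: only 1 in 8 tests is successful
yield_per_win = 30_000       # assumed: annual revenue uplift of a winning test

tests_per_year = 40
total_cost = tests_per_year * cost_per_test
expected_wins = tests_per_year * win_rate
expected_yield = expected_wins * yield_per_win

roi = (expected_yield - total_cost) / total_cost
print(f"Expected wins per year: {expected_wins:.0f}")
print(f"Expected programme ROI: {roi:.0%}")
```

With these assumed numbers, 40 tests produce about 5 winners, and the programme returns 50% on its cost; plug in your own team's figures to set a realistic target.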
Tip 3: Form a multidisciplinary team
To achieve maximum speed, form a dedicated, full-time optimisation team. Ideally, you have access to a CRO specialist, web analyst, content specialist, marketer, UX researcher, UX designer and front-end developer.
The CRO specialist leads the optimisation process, builds A/B tests with minor changes and safeguards the team's speed. Together with the UX researcher, the web analyst identifies user problems and checks the quality of the online data needed for A/B testing. The marketer, content specialist and UX designer help devise and develop improvements. The front-end developer builds tests that involve complex programming.
The combination of these specialisms ensures that user problems and opportunities for improvement are approached from multiple angles. Moreover, it is immediately clear which tests do and do not fit within the content, design and development options.
Tip 4: Get the right tools
You need at least a testing tool such as Google Optimize, Optimizely, VWO or AB Tasty to do A/B testing. These make it technically possible to build tests and assign visitors to an A or B group. Important: before you start testing, have the web analyst and web developer check whether the tool has been implemented properly!
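Under the hood, assigning visitors to an A or B group typically works by hashing a stable visitor ID, so the same visitor always sees the same variant. The sketch below is a simplified, hypothetical illustration of that idea, not the actual implementation of any of the tools named above; the function name and IDs are made up.

```python
# Minimal sketch of how a testing tool might assign visitors to an A or B
# group: hash a stable visitor ID together with the experiment name, so each
# visitor always lands in the same group, and split traffic at a set percentage.
import hashlib

def assign_group(visitor_id: str, experiment: str, percent_b: int = 50) -> str:
    """Deterministically assign a visitor to group 'A' or 'B'."""
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # stable bucket in the range 0-99
    return "B" if bucket < percent_b else "A"

# The same visitor always gets the same group for the same experiment:
print(assign_group("visitor-42", "homepage-hero"))
```

Because the assignment is deterministic, a returning visitor never flips between variants, and across many visitors the split approaches the configured percentage.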
It is also useful to link your tool with your web analytics tool: this way, you can segment how the A and B groups behave differently across the entire website.
A tool like JIRA, Trello or Teams is useful for structuring your progress and collaboration. In addition, Excel is useful for logging completed tests: that way, you can see at a glance how many tests you have run and what the results were.
Tip 5: Follow a solid process
A/B testing is a tricky process: there are all kinds of pages with all kinds of test ideas and many enthusiastic stakeholders. It helps to follow set steps to generate test ideas:
- Collect data (such as web analytics, surveys, user interviews, expert reviews or customer service logs)
- Define the problems
- Formulate and prioritise hypotheses
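The prioritisation step can be done with a simple scoring model; one common choice (not prescribed by this article) is ICE, where each hypothesis gets an Impact, Confidence and Ease rating from 1 to 10. The hypotheses and scores below are made up for illustration.

```python
# Prioritising test hypotheses with an ICE score (Impact, Confidence, Ease,
# each rated 1-10, averaged). All ideas and ratings here are fictional.

hypotheses = [
    {"idea": "Shorter checkout form",        "impact": 8, "confidence": 6, "ease": 4},
    {"idea": "Social proof on product page", "impact": 6, "confidence": 7, "ease": 8},
    {"idea": "Redesigned navigation",        "impact": 9, "confidence": 4, "ease": 2},
]

# Compute the average of the three ratings per hypothesis.
for h in hypotheses:
    h["ice"] = (h["impact"] + h["confidence"] + h["ease"]) / 3

# Highest score first: this becomes the order of your hypothesis backlog.
for h in sorted(hypotheses, key=lambda h: h["ice"], reverse=True):
    print(f"{h['ice']:.1f}  {h['idea']}")
```

The exact formula matters less than scoring every idea the same way, so that "enthusiastic stakeholders" and the team rank ideas on shared criteria rather than gut feeling.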
A useful format for performing tests is:
- Hypothesis backlog
- Final check
Visualising these steps in JIRA/Trello/Teams, and putting each test idea under a column, already brings a lot of structure.
With these tips, you can get started with A/B testing. Of course, plenty of challenges remain, such as how large the difference between groups must be to call a test successful, which statistical test to choose, or how to determine where on your site you have enough visitors to test.
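For the "which statistical test" question, one common option for comparing conversion rates is a two-sided two-proportion z-test. The sketch below uses only the Python standard library; the conversion counts are made-up example numbers, not data from the article.

```python
# Two-sided two-proportion z-test on conversion rates (illustrative numbers).
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, p_value) for H0: both groups convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                   # two-sided p-value
    return z, p_value

# Example: 200/5000 conversions in group A vs 250/5000 in group B.
z, p = two_proportion_z_test(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # significant at the usual 5% level if p < 0.05
```

With these example numbers the difference is significant at the 5% level; with smaller samples the same 4% vs 5% gap would not be, which is exactly the "enough visitors" challenge mentioned above.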
This is an article by Maks Keppel, Web Analyst at Elsevier
Former colleague Maks helped organisations improve their products and services through data-driven optimisation during his time at Digital Power. He is now a Web Analyst working for Elsevier Life Science to optimise digital products.