In: Computer Science
You want to test two different web page layouts to see which one performs better (this is known as an A/B test).
What is A/B testing?:
It is a method of comparing two versions of a webpage or app against each other to determine which one performs better. A/B testing is essentially an experiment in which two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.
Conversion:
You could be looking at how many visitors subscribe, link to you through social media, buy your products or services, or move deeper into your site.
If they perform the action you want, it's called a conversion: they go from being a casual visitor to a more committed user.
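As a minimal illustration (the function name and the numbers below are invented for the example), a conversion rate is simply conversions divided by visitors:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who completed the desired action."""
    if visitors == 0:
        return 0.0  # avoid division by zero for pages with no traffic yet
    return conversions / visitors

# e.g. 40 newsletter signups out of 1,000 visitors
rate = conversion_rate(40, 1000)  # 0.04, i.e. a 4% conversion rate
```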
Factors to determine which one is better (Metrics of Success):
Font, color, and navigation all shape how your pages perform, and one of the best ways to keep your website pleasing and flexible is quality metric tracking while you're A/B testing.
There are 3 important metrics:
1. Bounce rate
2. Exit rate
3. Engagement Metrics
Bounce Rate:
The rate at which people land on your landing page and leave without any further activity. Improved conversions combined with a high bounce rate mean there is still work ahead, and the improvement to be made is in the landing page itself.
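A sketch of how a bounce rate might be computed, assuming session logs are available as ordered lists of pages viewed (a session with a single page view counts as a bounce; the data shape is an assumption for this example):

```python
def bounce_rate(sessions: list[list[str]]) -> float:
    """Share of sessions that viewed only one page and left.

    Each session is the ordered list of pages a visitor viewed.
    """
    if not sessions:
        return 0.0
    bounces = sum(1 for pages in sessions if len(pages) == 1)
    return bounces / len(sessions)

sessions = [["/landing"], ["/landing", "/pricing"], ["/landing"]]
bounce_rate(sessions)  # 2 of 3 sessions bounced
```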
Exit rate:
This is similar to the bounce rate because it still measures departing visitors, but the bounce rate only counts those who never get past the landing page. The exit rate of a page measures visitors who leave your site from that page after exploring further. Getting past the landing page means you gained their interest and they wanted to read more; but if you notice an unusual number of visitors leaving from a certain page, that page may be turning them off somehow.
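Under the same assumed session-log shape, a per-page exit rate can be sketched as the number of sessions that ended on a page divided by that page's total views:

```python
from collections import Counter

def exit_rates(sessions: list[list[str]]) -> dict[str, float]:
    """Per-page exit rate: sessions ending on a page / total views of it."""
    views = Counter()
    exits = Counter()
    for pages in sessions:
        views.update(pages)       # every page view counts toward the denominator
        if pages:
            exits[pages[-1]] += 1  # the last page of the session is the exit page
    return {page: exits[page] / views[page] for page in views}

sessions = [["/landing", "/pricing"], ["/landing", "/pricing", "/signup"]]
exit_rates(sessions)
# "/pricing" was viewed twice and was the last page once -> exit rate 0.5
```

A page whose exit rate stands out from its neighbors is a natural candidate for the next A/B test.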
Engagement metrics:
These are simply averages. You're looking at the average time people spend on the site and the average number of people who visit a page. If you are not seeing the averages you want, the solution is intuitive: rework the pages and relaunch the A/B test.
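A minimal sketch of those two averages, assuming you have per-visitor time-on-site figures (in seconds) and daily visitor counts for a page; all names and numbers here are illustrative:

```python
from statistics import mean

def engagement_summary(durations_sec: list[float], daily_visits: list[int]) -> dict:
    """Average time on site and average daily visitors for one page."""
    return {
        "avg_time_on_site": mean(durations_sec),
        "avg_visitors_per_day": mean(daily_visits),
    }

engagement_summary([30, 120, 90], [500, 700, 600])
# {'avg_time_on_site': 80, 'avg_visitors_per_day': 600}
```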
How to run an A/B test?:
A/B Testing Process
Collect Data: Your analytics will often provide insight into where you can begin optimizing. It helps to begin with high traffic areas of your site or app, as that will allow you to gather data faster. Look for pages with low conversion rates or high drop-off rates that can be improved.
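One way to sketch this step, assuming you can export per-page conversion counts from your analytics (the data shape and page names are invented): rank pages by conversion rate, lowest first, to pick test candidates.

```python
def lowest_converting(pages: dict[str, tuple[int, int]], n: int = 3) -> list[str]:
    """Return the n pages with the lowest conversion rate.

    pages: mapping of URL -> (conversions, visitors).
    """
    rates = {
        url: (conv / vis if vis else 0.0)
        for url, (conv, vis) in pages.items()
    }
    return sorted(rates, key=rates.get)[:n]

pages = {"/pricing": (12, 800), "/home": (90, 3000), "/signup": (40, 500)}
lowest_converting(pages, n=2)  # ['/pricing', '/home']
```

In practice you would also weight by traffic, since high-traffic pages let you gather data faster, as the step above notes.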
Identify Goals: Your conversion goals are the metrics that you are using to determine whether or not the variation is more successful than the original version. Goals can be anything from clicking a button or link to product purchases and e-mail signups.
Generate Hypothesis: Once you've identified a goal you can begin generating A/B testing ideas and hypotheses for why you think they will be better than the current version. Once you have a list of ideas, prioritize them in terms of expected impact and difficulty of implementation.
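A toy sketch of that prioritization, assuming each idea has been given rough impact and difficulty scores on some consistent scale such as 1-10 (the scoring scheme and idea names are hypothetical):

```python
def prioritize(ideas: list[tuple[str, int, int]]) -> list[tuple[str, int, int]]:
    """Order test ideas by expected impact relative to implementation difficulty.

    ideas: list of (name, impact, difficulty); higher impact and lower
    difficulty rank first.
    """
    return sorted(ideas, key=lambda idea: idea[1] / idea[2], reverse=True)

ideas = [
    ("bigger CTA button", 6, 2),
    ("redesign checkout", 9, 8),
    ("shorter signup form", 7, 3),
]
[name for name, *_ in prioritize(ideas)]
# ['bigger CTA button', 'shorter signup form', 'redesign checkout']
```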
Create Variations: Using your A/B testing software, make the desired changes to an element of your website or mobile app experience. This might be changing the color of a button, swapping the order of elements on the page, hiding navigation elements, or something entirely custom. Many leading A/B testing tools have a visual editor that will make these changes easy. Make sure to QA your experiment to make sure it works as expected.
Run Experiment: Kick off your experiment and wait for visitors to participate! At this point, visitors to your site or app will be randomly assigned to either the control or variation of your experience. Their interaction with each experience is measured, counted, and compared to determine how each performs.
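Random assignment is often implemented by hashing a stable visitor ID, so a returning visitor keeps seeing the same experience. A sketch of that idea, with invented user IDs and experiment names:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple[str, ...] = ("control", "variation")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing (experiment, user) means the split is effectively random
    across users, but stable for any one user across visits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

assign_variant("user-42", "homepage-cta")  # same answer on every call
```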
Analyze Results: Once your experiment is complete, it's time to analyze the results. Your A/B testing software will present the data from the experiment and show you the difference between how the two versions of your page performed, and whether there is a statistically significant difference.
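The significance check a testing tool performs can be approximated with a two-proportion z-test; a standard-library sketch, with invented traffic numbers:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates.

    Returns (z, p_value); p below 0.05 is the usual significance cutoff.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# control converts at 2.0%, variation at 2.6%, 10,000 visitors each
z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
```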
If your variation is a winner, you can roll it out. See if you can apply learnings from the experiment to other pages of your site, and continue iterating on the experiment to improve your results. If your experiment generates a negative result or no result, don't fret: use it as a learning experience and generate a new hypothesis that you can test.