A/B testing or split testing can yield desired results but only when it is done correctly. It can save time, effort, and money in finding the best combination of elements to optimize the user experience and increase conversions.

That’s why we have listed some common mistakes to avoid when A/B testing so you can get the most out of your tests. With a little vigilance, you can avoid these pitfalls and produce a winning hypothesis from your A/B testing.

1.) Not Reaching the Required Statistical Significance

Confidence level, or statistical significance, establishes the degree of accuracy of your A/B testing results. It helps you find the winning combination by minimizing the margin of error. It depends on the traffic volume, the number of conversions per page, and the number of variations.

The higher the confidence level, the better the odds that the results are reliable.

That’s why it is essential to allow the test to reach the required statistical significance. The standard confidence level is set between 95% and 99%.
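As a rough illustration of what that check involves, here is a minimal sketch in Python (standard library only) of a two-proportion z-test, which is one common way calculators and testing tools decide whether a result clears a 95% confidence level. The visitor and conversion counts are made-up placeholder numbers.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts for illustration: visitors and conversions per variation.
visitors_a, conversions_a = 10_000, 520   # control
visitors_b, conversions_b = 10_000, 585   # variation

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled conversion rate under the null hypothesis (no real difference).
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

# Two-proportion z-test: how many standard errors apart are the two rates?
z = (rate_b - rate_a) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

confidence_level = 0.95
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")
if p_value < 1 - confidence_level:
    print("Statistically significant at 95% confidence.")
else:
    print("Not significant yet -- keep the test running.")
```

In practice, your testing tool runs an equivalent check for you; the point is that significance comes from the math, not from eyeballing the two conversion rates.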

2.) Not Running the Test for Sufficient Time

One of the biggest A/B testing mistakes is concluding the test prematurely. It will produce unreliable results that can do more harm than good. Just like the confidence level, the length of the test can also impact the results.

As mentioned before, you can estimate the test duration using a required sample size calculation or by gauging the number of conversions for each page.

You can also check duration and sample size using an A/B Test Calculator like this one.
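If you want a back-of-the-envelope figure before reaching for a calculator, the sketch below applies the standard sample-size formula for comparing two proportions, assuming a 95% confidence level and 80% statistical power. The baseline conversion rate, minimum detectable lift, and daily traffic are placeholder values.

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(baseline_rate, min_detectable_effect,
                         confidence=0.95, power=0.80):
    """Visitors needed per variation to detect an absolute lift of
    `min_detectable_effect` over `baseline_rate` (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(variance * (z_alpha + z_beta) ** 2 / min_detectable_effect ** 2)

# Example: 5% baseline conversion rate, aiming to detect a lift to 6%.
n_per_variation = required_sample_size(0.05, 0.01)
print(n_per_variation, "visitors per variation")

# Rough duration estimate given daily traffic split across two variations.
daily_visitors = 2_000
print("~", ceil(2 * n_per_variation / daily_visitors), "days")
```

Smaller lifts and lower baseline rates push the required sample size (and therefore the test duration) up quickly, which is why ending a test after a few days of promising numbers is risky.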

3.) Testing Too Many Variations Simultaneously

As stated earlier, A/B testing is the process of testing one element at a time. If you test multiple components at once, the A/B test cannot determine the individual impact of each element on the goals.

So, the problem here is twofold:

  • You cannot identify which element is producing the most effect.
  • One or more elements may not affect the test results at all.

So, carefully decide on the elements you want to test and then pick one at a time to get the best results. If you're going to test more than one element, you can use other tests such as multivariate testing.
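To make the trade-off concrete, here is a small sketch with hypothetical page elements. Testing one element at a time keeps the number of tests manageable, whereas a multivariate test needs a variation for every combination, and each combination has to earn its own statistically significant share of traffic.

```python
from itertools import product

# Hypothetical elements you are considering changing on a landing page.
elements = {
    "headline": ["current", "benefit-led"],
    "cta_color": ["blue", "orange"],
    "hero_image": ["photo", "illustration"],
}

# A/B approach: one element changed per test, everything else held constant.
ab_tests = [(name, option) for name, options in elements.items()
            for option in options[1:]]
print("A/B tests needed (one element at a time):", len(ab_tests))        # 3

# Multivariate approach: every combination becomes its own variation.
combinations = list(product(*elements.values()))
print("Multivariate variations (all combinations):", len(combinations))  # 2*2*2 = 8
```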

4.) Running the Test Without Adequate Data and Hypothesis

Another mistake to avoid is building a weak hypothesis. The hypothesis is the foundation of your A/B test. It points you towards the elements that need to be optimized or changed to improve page goals like conversions, subscriptions, etc.

A weak hypothesis lacks the evidence and research needed to support it. If you test the wrong hypothesis, you may implement changes that negatively affect your goals.

Pay close attention while collecting the data for your hypothesis. If there are gaps in your data points, use appropriate techniques to gather more information rather than going with your instincts.

Bonus Read: Complete Guide To User Research

5.) Not Taking Season Surges or Drops Into Account

Your website’s traffic and visitor behavior are constantly changing depending on the time of the year, seasons, festivals, Google’s search engine updates, and other factors.

Let's say you are testing a coupon code with an email collection field on your landing page around Cyber Monday week. The test results will most likely show this variation as the winner.

But the question is, can you extrapolate the results for other months of the year? No, it is unwise to do that due to differences in visitor mentality, traffic volume, behavioral aspects, and other factors.

That's why it is helpful to keep these fluctuations in mind while running your A/B tests. They can produce a false positive and skew the test results.

6.) Not Identifying the Appropriate Traffic Segments

Not all visitors are the same. Every visitor belongs to a different point on your conversion funnel, such as first-time visitors, returning visitors, verified customers, prospects, etc. Then, there are different visitor types based on demographic and psychographic factors.

That's why you cannot interpret the A/B test results in terms of the total website traffic alone. When you break the traffic down into different categories, you can visualize how different segments react to the website's changes.

For example, after testing a new feature on your website, you observe an increase in CTA clicks. But when you segment the traffic between desktop and mobile users, you see that the conversion rates went down on the mobile site. It is possible the feature is not working correctly on mobile devices.

With this information, you can add surveys or run another test on the mobile site to verify your hypothesis.
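As a rough sketch of that kind of breakdown (the numbers are made up for illustration), segmenting the same results by device can flip the conclusion the aggregate view suggests:

```python
import pandas as pd

# Made-up visitor-level test results for illustration.
data = pd.DataFrame({
    "variation":   ["A", "A", "B", "B"],
    "device":      ["desktop", "mobile", "desktop", "mobile"],
    "visitors":    [4_000, 6_000, 4_000, 6_000],
    "conversions": [200, 270, 300, 230],
})
data["conversion_rate"] = data["conversions"] / data["visitors"]

# Overall view: variation B looks like the clear winner (5.3% vs 4.7%).
overall = data.groupby("variation")[["visitors", "conversions"]].sum()
overall["conversion_rate"] = overall["conversions"] / overall["visitors"]
print(overall)

# Segmented view: B wins on desktop (7.5% vs 5.0%) but loses on
# mobile (3.8% vs 4.5%), which the aggregate numbers hide.
print(data.pivot(index="device", columns="variation",
                 values="conversion_rate"))
```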


Today, many A/B testing tools like Optimizely have built-in traffic segmentation features that provide one-click custom reports to analyze the A/B testing results.

7.) Extrapolating Results From One Test to Another

Another mistake that people make while A/B testing is using results from one test to fit a different hypothesis. Even if the variables for the two hypotheses are the same, the test results may reflect different outcomes.

For example, even if all the parameters are the same, like sample size, variation design, target audience, and length of the test, the test results may be opposite for two different locations due to cultural differences, visitor behavior, language barriers, etc.

It is therefore advisable to run a separate test for each hypothesis.

8.) Not Running Successive Tests or Not Iterating

Avoid the mistake of not creating a testing culture in your organization. Iteration is an inherent requirement of any experimentation.

More often than not, your hypothesis will fail. That doesn't mean you need to stop A/B testing on your website.

  • With successive tests, you can refine your results and produce benchmarks for future testing.
  • Analyze the winning A/B testing results to find what works for the customers and implement it on your website.
  • Use the data from failed tests as groundwork for the next iteration.

That's why creating a testing culture in your organization is crucial to test the feasibility and efficacy of your ideas.

TruckersReport is a fitting example highlighting the importance of successive testing.

  • In the first test, they optimized the lead form on the landing page.
  • The second test improved the page content and messaging.
  • The third was directed at optimizing the job search page. And so on.

They increased their landing page conversion rate from 12% to 79% by running six successive tests over six months.


More Resources

The Beginner’s Guide to Conversion Rate Optimization

The Beginner’s Guide to Conversion Rate Optimization (CRO) is an in-depth tutorial designed to help you convert more passive website visitors into active users who engage with your content or purchase your products.

Question Guide For Product Owners

With a 30% or higher response rate, every product owner should be asking their customers these questions.

A Comprehensive Guide To User Feedback

Whether you are developing a new product or have been selling the same one for years, you need user feedback.