As marketers, we know very well that mistakes are inevitable, and experiments don’t always go the way we want them to. 

But that doesn’t deter us from doing our jobs; it moves us forward in our journey to become better marketers and deliver better results for our companies.

At the same time, sharing knowledge about what to avoid in certain marketing activities can save you time, money, and resources.

So it’s vital for marketers, especially those who work in the CRO space, to share their findings regularly.

And that’s what we’re exploring today: the 10 A/B testing mistakes that marketers often make.

Bear in mind that this list of A/B testing mistakes can serve as a reference whenever you run experiments in the future.

But before we jump into the nitty-gritty, let’s start with the basics of A/B testing.

How to Prepare for A/B Testing


  • A quick introduction to A/B testing and what you should prepare before running a test.
    • Research, to uncover usability issues, conversion blockers, and revenue opportunities
      • Expert reviews 
        • The first step in your research process is to have three or four experts review your site. The idea here is to uncover usability issues that may be plaguing your website.
          The expert review process can show you low-hanging fruit that can be turned into quick wins.
      • User research 
        • The second step is knowing what your customers think – putting yourself in their shoes.
          To come up with a sound hypothesis, you need an idea of how your customers think, and that means conducting adequate customer research before you launch any A/B test.
      • Data analysis
        • Another important thing to know is how your website is performing in terms of numbers: how much traffic you’re getting, which pages get the most traffic, what your current conversion rate is, and so on.
          Having this data at your fingertips helps you identify the right pages to focus on.
      • Competitor analysis
        • If you don’t know what your competitors are doing, you’re flying blind. You don’t do competitor analysis to copy your competitors, but to learn their strengths and weaknesses and to find a gap that can help you build a better customer experience.
    • Hypothesis
      • The best A/B tests are those that you can learn from. And if you are to learn from an A/B test then you need to have a hypothesis for every test you run. 
      • A hypothesis is a documented statement that proposes a fix for an issue noticed in the research phase, predicts the result of that fix or solution, and must be testable.
        Example of a hypothesis template:
        Replacing ___ with __ will increase [conversion goal] by [%], because:
      • If we use that template to write a hypothesis, it would look something like this:

        Replacing our current hero section design with one that has a signup button will increase our conversion rate by 15%, because users who visit our website, and especially the home page, will notice the button and click on it.

        Who’s participating? Visitors to your site

        What changed? The hero section design

        What effect should we notice? Our signup rate should increase by around 15%
  • Variations
    • Designs (in case we’re testing different designs)
      • Testing designs is important because people lose interest quickly; if people see the same thing over and over, they’ll stop reacting to it.
      • Another reason to test designs is that design trends change quickly, not just people’s focus. Designs aren’t everlasting: compare designs from 2010 with designs today and you’ll see what I’m talking about.
      • Last but not least, try radical changes rather than incremental ones. A radical change is when you completely flip the table and use a new design that has little in common with the old one; incremental changes, on the other hand, are the small tweaks a designer makes to test variations of the same design.
    • Messaging / Copy (in case we’re testing different value propositions).
      • Copy follows a similar trend to design: it changes often, and people’s focus shifts regularly, so it’s important to keep testing new and updated value propositions against current ones to see whether interest has shifted.
      • Your core message doesn’t change that frequently, but it’s good practice to test it so it matches your business’s current mission and vision.
    • Page layouts (in case we’re testing different page layouts).
      • Page layouts follow the same principles discussed in the points above.
  • A/B Test
    • Code – Run
      • Implementing the A/B test code isn’t a simple task, and you’ll need a developer to do it. Still, I consider this the easiest step for marketers, because it requires no involvement from the marketing team and sits entirely in the development team’s hands.
    • Observe stats and declare a winner
      • Your tool of choice (ours is FigPii) will tell you which of your variants won the test and which ones lost. You can dive deeper into the stats the test gives you for a bigger picture, and if you want to sanity-check the math yourself, see the sketch after this list.
    • You win or you learn
      • No one wins all of their tests. In fact, the industry average is around 12%-15%, which means that out of every 100 tests you run, roughly 15 will win and the rest will lose.
      • But whether you win or lose, the good thing about A/B testing is that you learn more and more about your website visitors.
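
A quick note before we move on: most frequentist A/B testing tools declare a winner using a two-proportion z-test. If you ever want to sanity-check your tool’s numbers by hand, here is a minimal sketch in Python (a generic illustration with made-up figures, not FigPii’s actual implementation):

    from statistics import NormalDist

    def two_proportion_z_test(visitors_a, conversions_a, visitors_b, conversions_b):
        """Return the z-score and two-sided p-value for a two-variant test."""
        rate_a = conversions_a / visitors_a
        rate_b = conversions_b / visitors_b
        # Pooled rate under the null hypothesis that both variants convert equally
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
        z = (rate_b - rate_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    # Example: control converts 200 of 5,000 visitors, variant converts 250 of 5,000
    z, p = two_proportion_z_test(5_000, 200, 5_000, 250)
    print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 means significant at 95% confidence

If the p-value comes out below 0.05, the difference between the variants is statistically significant at the 95% confidence level.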

10 Things To Avoid When Doing A/B Tests


  1. Testing without enough traffic/conversions
    • You can still run an A/B test on a site with little traffic, but bear in mind that the test will take a long time to reach statistical confidence. If you have fewer than 200 conversions per month per device type (mobile or desktop), we would not recommend A/B testing. (For the math behind these thresholds, see the sketch after this list.)
      • 200 conversions per month will give you SLOW results
      • 500 conversions per month will give you reasonably fast results (2-4 weeks)
      • More than 1,000 conversions per month will give you fast results (1-2 weeks)
  2. Random testing without solid research (running a test without a research-backed hypothesis)
    • Remember, A/B testing can be expensive. You need analysis, design, and dev resources. An A/B test might require 20 to 40 hours to implement when all is said and done.
  3. Wasting your time testing elements that don’t affect your bottom line, like CTA color or small changes on a page that most visitors don’t notice
    • A lot of marketers fall into this trap: they read about a tactic on some blog, only to find that it doesn’t help them in the slightest.
  4. Unfocused testing
    • Your hypothesis is about social proof, but your test includes social proof + a new value proposition + new images. You have a winner, but you don’t know what gave you the win.
  5. Running your A/B test for a short period of time.
    • Even if you have enough traffic/conversions, run your tests for a minimum of one week, and preferably two, so each variation sees at least one full business cycle.
  6. Changing parameters in the middle of the test.
    • Changing variation designs
    • Changing traffic allocation
    • Removing a variation from a test
    • All of these will pollute your results.
  7. Giving up on A/B testing after your test has failed.
    • The average success rate in the industry is 15%. Some companies deliver a 40% success rate. Yes, most tests will fail. Accept that. 
  8. Not showing bottom line impact
    • A/B tests should help improve the business’s bottom line. Show the dollar impact of your work.
  9. Focusing on the bottom line and not learning from your tests
    • Money in the bank is great but it is NOT the only reason to test. 
    • Every test should help you understand your customers/visitors better. 
  10. Not documenting your tests
    • Helps you avoid any testing duplication.
      • By documenting your tests, you make sure you have a reference you can go back and check in case you have any doubts about a test you’re about to run.
    • Reduces testing costs 
      • Testing isn’t free, and avoiding duplicate or unnecessary tests cuts down on costs and helps you be more efficient.
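
To see why the conversion thresholds in mistake #1 translate into those timelines, here is a rough back-of-the-envelope sketch in Python of the standard sample-size formula for a two-variant test (the function name and the 95% confidence / 80% power defaults are illustrative assumptions, not the exact method of any particular tool):

    import math
    from statistics import NormalDist

    def estimate_test_duration(monthly_visitors, baseline_rate, relative_lift,
                               alpha=0.05, power=0.80):
        """Rough per-variant sample size and runtime for a 50/50 two-variant test."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)    # 1.96 for 95% confidence
        z_beta = NormalDist().inv_cdf(power)             # 0.84 for 80% power
        diff = baseline_rate * relative_lift             # absolute lift to detect
        p_avg = baseline_rate * (1 + relative_lift / 2)  # average rate across variants
        n = 2 * (z_alpha + z_beta) ** 2 * p_avg * (1 - p_avg) / diff ** 2
        weeks = (2 * n) / (monthly_visitors / 4.33)      # both variants split the traffic
        return math.ceil(n), round(weeks, 1)

    # Example: 10,000 visitors/month at a 2% conversion rate (~200 conversions/month),
    # trying to detect a 15% relative lift
    n, weeks = estimate_test_duration(10_000, 0.02, 0.15)
    print(f"~{n:,} visitors per variant, roughly {weeks} weeks")

With around 200 conversions per month, detecting even a 15% relative lift takes well over half a year, which is exactly why we call results at that volume slow.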

Conclusion

Now you know what to do and what to avoid the next time you run an A/B test.

Just for reference, here are the 10 things you should avoid when doing A/B testing:

  1. Testing without enough traffic/conversions
  2. Random testing without solid research (running a test without a hypothesis based on research, whether data or user research)
  3. Wasting your time testing elements that don’t affect your bottom line, like CTA color or small changes on a page that most visitors don’t notice
  4. Unfocused testing
  5. Running your A/B test for a short period of time.
  6. Changing parameters in the middle of the test.
  7. Giving up on A/B testing after your test has failed.
  8. Not showing bottom line impact
  9. Focusing on the bottom line and not learning from your tests.
  10. Not documenting your tests

Each of these points has a lot of detail behind it; use the list above as a quick reference to remind yourself of what you’ve learned.

But always remember, the devil is in the details, so read this post more than once and you’ll notice new things every time.

