Your A/B Split Tests Aren’t Working? Here Are 7 Possible Reasons

You’ve spent a lot of time, money, and energy setting up your A/B testing system. The problem is, only about 1 in 8 tests produces statistically significant results (so you’re not alone in this!). Now, you can throw up your hands and admit defeat, or you can take a good hard look at how your system operates, figure out what’s going wrong, and fix it. We’ve identified seven potential reasons why your A/B tests aren’t making the grade. Let’s take a look at each.

Mistake #1 … Copying other people’s tests

Forget what everyone else is doing. Seriously. What works for them might not work for you, and the fact that they’re getting great results from their A/B testing doesn’t mean you will. What you should be doing is considering why something worked, or didn’t. What are the variables? Does a green arrow work better than a blue button? How do you tell? Test, test, and then test some more.

Mistake #2 … Testing too many variables at one time

When it comes to A/B testing, keeping it simple is keeping it smart. Never, ever test a bunch of elements at the same time. Why? Because you can’t pinpoint what is working and what is not. How are you to know whether it’s the background picture that catches attention, or the new headline, or vice versa? By controlling the variables, you control your ability to get reliable test results.

Mistake #3 … Low traffic equals slow results

It’s all about traffic. High-traffic campaigns can run a whole gamut of tests and still arrive at statistical significance quickly. But beware of making simple, minor changes to your low-traffic campaigns. Example: A/B testing “Create Your Wardrobe” against “Create My Wardrobe” as a landing page CTA on low traffic might take a year or more to register statistical significance. It makes a lot more sense to test minor page elements (directional cues, small copy changes, different backgrounds, etc.) on higher-traffic campaigns because they give you rapid feedback. Once a change is proven, you can apply it to your lower-traffic pages. Call it trickle-down marketing!
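
You can estimate up front how long a test like that will need. Here’s a minimal sketch in Python using the standard two-proportion sample-size formula; the baseline conversion rate, the hoped-for lift, and the daily traffic figure are all made-up assumptions for illustration:

    from scipy.stats import norm

    def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
        """Approximate visitors needed per variant for a two-sided
        two-proportion z-test (standard textbook formula)."""
        z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for 95% confidence
        z_beta = norm.ppf(power)            # 0.84 for 80% power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

    # Assumed numbers: 2.0% baseline conversion, hoping to lift it to 2.2%
    n = sample_size_per_variant(0.020, 0.022)          # ~80,700 per variant
    daily_visitors_per_variant = 50                    # a low-traffic page
    years = n / daily_visitors_per_variant / 365
    print(f"{n:,.0f} visitors per variant, ~{years:.1f} years at 50/day")

Swap in your own numbers; the takeaway is that proving a small lift on small traffic takes a very long time.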

Mistake #4 … Your data doesn’t reach statistical significance

Making big changes on high-traffic sites is great. But you still have to verify that you’ve gathered enough evidence before you move on. Don’t call a test until you’ve reached a confidence level of at least 95%. And don’t make the mistake of ending your test early just because the new variation posts poor numbers at first. In many instances, losing tests have been known to roar back after a period of time. Don’t be hasty! Wait it out and give it a fair shot.
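
If you want to see where a running test actually stands, a two-proportion z-test gives you the p-value behind that confidence figure. A minimal sketch; the visitor and conversion counts are invented for illustration:

    from math import sqrt
    from scipy.stats import norm

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return z, 2 * norm.sf(abs(z))                  # two-sided p-value

    # Invented counts: control converts 200/10,000; variant 245/10,000
    z, p = two_proportion_z_test(200, 10_000, 245, 10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05 clears the 95% bar

A p-value below 0.05 corresponds to that 95% confidence bar; anything above it means keep the test running.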

Mistake #5 … The other part of the equation

A major mistake many marketers make is to get so involved in the test itself that they totally forget the other part of the equation – the people behind the test. This can lead to serious misreads that doom a test almost from the beginning. You can have spectacular traffic on your site while your conversion rate sits in the dumpster. Why? Because the test fails to take into account who is browsing your site. Example? Say your target market is millennials, but your site is presented in a way that attracts boomers. The traffic is great, but no one is buying. Once the boomers recognize that the message isn’t really intended for them, they leave. Millennials, put off by what they see, leave as well. Know who you’re marketing to, then fit the message to the audience!

Mistake #6 … You’re limiting the time frame of your test

Even with a 95% confidence level, your A/B tests could lead you to the wrong conclusions if you stop them too soon. Run those tests for a full week or more. Why? Because that’s what reveals the trends you need to see as a smart marketer. For example, do you get spikes on any particular day, and if so, why? The same goes for browsers and devices. Pin this down to reach the right audience, at the right time, with the right message.
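
One simple way to spot those weekday spikes is to break your results down by day before drawing conclusions. A sketch assuming a pandas DataFrame built from a hypothetical visit log with timestamp and converted columns:

    import pandas as pd

    # Hypothetical log: one row per visit, a timestamp and a 0/1 conversion flag
    visits = pd.read_csv("ab_test_visits.csv", parse_dates=["timestamp"])

    # Conversion rate and volume by weekday; big swings between days mean
    # a test shorter than a full week will mislead you
    by_day = (visits.groupby(visits["timestamp"].dt.day_name())["converted"]
                    .agg(["mean", "count"]))
    print(by_day.sort_values("mean", ascending=False))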

Mistake #7 … You’re not verifying your results

Double- and triple-check your test findings to make certain they’re accurate. Why does this matter? Because even if the test has racked up a 95% confidence level, there’s still a 5% chance it produced a false positive. Don’t jump in, especially if you plan to roll the results out across a wide spectrum of your sites. Leave no stone unturned, and if you have any doubt, test again: if you’re running tests to 95% significance, there’s only a 1 in 400 chance that you’ll get a false positive twice in a row. It’s well worth the time.
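
The arithmetic behind that “1 in 400” figure is easy to sanity-check yourself: at 95% significance each test carries a 5% false-positive risk, so two independent tests both misfiring is 0.05 × 0.05. A few lines of Python, using only the article’s own numbers:

    alpha = 0.05                    # false-positive risk at 95% significance
    print(alpha ** 2, 1 / alpha ** 2)   # 0.0025 and 400.0 -> "1 in 400"

    # The flip side, and why one unverified result shouldn't be rolled out
    # everywhere: run enough tests and the chance of at least one false
    # positive compounds.
    for k in (1, 5, 10):
        print(k, round(1 - (1 - alpha) ** k, 3))   # 0.05, 0.226, 0.401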
