Getting Started With A/B Testing Display Ads

A/B testing in digital marketing is by no means a new thing. Almost every discipline within digital marketing can benefit from it: email, landing pages, banner ads. Anywhere your team’s performance metric is easily measurable, even a simple A/B test can quickly pay off.

Properly planned A/B testing helps you focus on the most relevant aspects of your marketing efforts, ultimately making them more successful and more profitable.

Planning an A/B test

Putting together a plan for your A/B tests is important because it allows you to keep track of what you’re testing and the results from those tests.

Before you start, put together a spreadsheet that details the elements you want to experiment with. In that spreadsheet, you should highlight what your “control” ad will contain and each of the variations you would like to experiment with.
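As a rough illustration of what that plan might capture, here is a small sketch in Python. The field names and the example tests are purely illustrative, not a required format; any spreadsheet layout that records the control, the variations and the metric you care about will do.

```python
# Illustrative A/B test plan, one entry per planned test.
# Field names and example values are assumptions for the sake of the sketch.
test_plan = [
    {
        "element": "CTA copy",
        "control": "Learn more",
        "variations": ["Get started", "Try it free"],
        "primary_metric": "click-through rate",
        "hypothesis": "A more specific CTA will lift CTR",
    },
    {
        "element": "Value proposition",
        "control": "Build Ads Easily",
        "variations": ["Effortlessly Build Ads"],
        "primary_metric": "conversion rate",
        "hypothesis": "Leading with the benefit will lift conversions",
    },
]
```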

Once you’ve written down what you’re going to experiment with, and if you have multiple tests planned, now is a good time to prioritize them.

Almost 70% of all A/B tests end up with neutral or negative results.

So, by prioritizing your tests based on your core KPIs, you’re more likely to end up with positive results. In addition, focus on tests that can potentially deliver big wins – typically, easy changes that deliver big results.
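One simple way to do that prioritization is to score each planned test on expected impact and effort and rank by the ratio. This is just an illustrative sketch; the 1–5 scale, the weights and the scores below are assumptions, not a standard you have to follow.

```python
# Illustrative prioritization: score each test 1-5 on expected impact
# (how much it could move your core KPI) and effort (how hard the change
# is to make), then rank by impact relative to effort.
tests = [
    {"name": "CTA copy", "impact": 4, "effort": 1},
    {"name": "Value proposition", "impact": 5, "effort": 2},
    {"name": "Illustration vs product photo", "impact": 3, "effort": 4},
]

for t in sorted(tests, key=lambda t: t["impact"] / t["effort"], reverse=True):
    print(f'{t["name"]}: priority score {t["impact"] / t["effort"]:.2f}')
```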

A/B testing can help your bottom line

We live in the era of data-driven decision making. On top of that, as an industry, we’re seeing a growing trend of advertisers looking for more transparency and optimized returns on their ad spend.

Properly planned and prioritized experiments allow advertisers to A/B test with a sample and then launch a wider campaign, post-optimization. By doing this, you know that consumers are seeing the ad that is most likely to trigger the desired response from them.

What can you A/B test?

When A/B testing your banner ads, there are really three main things that you can easily make changes to. These form the core components of almost every display ad out there on the web:

Call to Action

If your display ad is aimed at driving your audience to an intended goal, be it registrations, installations, purchases or whatever you’re looking for, you need a call-to-action.

Even if you’re running an awareness-focused campaign, you should still include something, perhaps “Learn more”, so people can continue reading about what you’re offering on some kind of landing page.

But, whatever your goal, the CTA is something you can easily A/B test. You can, for example:

  • Change the copy in the CTA
  • A/B test the color of the CTA
  • Experiment with the size and location

Value Proposition

The value proposition is one of the most crucial elements of a display ad and is absolutely something that can (and should) be experimented with.

Here, even simple changes to the shortest of value propositions can make an impact. For example, run a control ad with “Build Ads Easily” against a variant with “Effortlessly Build Ads”. Or run a control with a very short value proposition against a version that goes into a little more depth.

You can also try comparing “practical” versus “inspirational” texts in your value proposition. Or seeing how “relaxed” texts compare to “urgent” texts.

And, while this one may be a bit too risky for some, why not chuck an emoji or two in there? It certainly won’t be suitable for every brand, but, in this era, emojis have become almost part of the modern, digital vernacular.

Visuals

There are many different approaches you can take when experimenting with the different visual elements of your display ads.

We’ve already covered changing the color, size and location of the CTA, but what about the other elements? You can absolutely try experimenting with different colors in the overall visual too.

But why stop there? For example, your control could use an image featuring the product you’re trying to sell, while a variant uses something more illustrative instead.

How long should you run an A/B test?

This is one of the most hotly debated aspects of running A/B tests, in any setting: how long do you need to run your A/B test for to be certain of the results?

In reality, this is one of those instances where size really does matter.

If you don’t have a large enough sample size, it’s nearly impossible to draw accurate conclusions, because your results are unlikely to be statistically significant. There are plenty of useful calculators out there, like this one from Optimizely. Granted, it’s for A/B testing landing pages, but the logic is largely the same as for testing creatives.
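If you’d rather sanity-check the math yourself, most of those calculators are based on the standard two-proportion sample-size formula, sketched below in Python. The baseline rate, expected lift, significance level and power used in the example are assumptions chosen purely for illustration.

```python
# Rough sample-size estimate per variant for comparing two conversion rates,
# using the normal-approximation formula most online calculators rely on.
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, minimum_lift, alpha=0.05, power=0.8):
    """Visitors needed in EACH variant to detect a relative lift of
    `minimum_lift` over `baseline_rate` at the given significance and power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + minimum_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Example: 2% baseline CTR, hoping to detect a 20% relative lift.
print(sample_size_per_variant(0.02, 0.20))  # roughly 21,000 visitors per variant
```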

Also, many marketers have a habit of ending their tests too early. You should always be looking for results that are statistically significant at a 95–99% confidence level.
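As a sanity check before calling a winner, you can run a simple two-proportion z-test on the raw numbers. This is a minimal sketch; the traffic and conversion figures below are made up for illustration.

```python
# Two-proportion z-test: is the difference between control and variant
# conversion rates statistically significant? Numbers are example data.
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

p = two_proportion_p_value(conv_a=200, n_a=10_000, conv_b=250, n_b=10_000)
print(f"p-value: {p:.3f}")  # below 0.05 means significant at the 95% level
```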

Generally, it’s recommended to run a test for at least two weeks. If you run the test for too short a period, a single day of unusually high or low results can significantly skew your overall findings.

Should you A/B test more than one thing at a time?

While it is possible to test multiple things at once, often referred to as multivariate testing, I still suggest sticking to testing one area of your creative at a time.

This makes attribution of your results much simpler.

Think about it.

If you’re running an experiment that features changes to both the call-to-action and, say, the offer, and you see an uplift versus your control, how do you know which of those changes brought the improvement?

If you want to test both, that’s fine, but try to keep the experiments separate.

In summary

To sum up, the main things to keep in mind are planning, precision and patience. Don’t start with wild guesses: have a hypothesis, build a plan, make sure your sample size is big enough, make sure your results are statistically significant, and wait at least two weeks before drawing conclusions from your results.