3 Tips for Better A/B Testing

Nov 30, 2020 | Blog

A/B testing multiple creative and tactics is one of the best ways not only to optimize a digital campaign, but also to bring back meaningful insights during and after the campaign. The measurable nature of digital makes it the perfect testing ground for answering marketing questions by putting two or more options head to head, but like anything, there’s a risk in making important decisions based on limited, incomplete or random data.

Believe me, we’ve seen some pretty bad tests over the years, so here are three areas we like to focus on in our A/B testing. 

Choosing a Creative Rotation

When it comes to creative, A/B testing is intended to determine which message is most effective, so it’s important to know there are a couple of ways to structure this type of test:

  • If performance is most important, putting multiple creative in an optimized rotation allows them to compete for the same budget and ensures the most effective ads serve most often. The problem is that many platforms will make that decision over a relatively small sample size – sometimes the first few hundred impressions served – and continue to favour the early winner for the remainder of the campaign, watering down the integrity of the creative test (see the sketch after this list).
  • If learnings are the priority, putting multiple creative in an equal rotation builds a more controlled test, as it keeps the variables consistent throughout the campaign. Each message gets an equal amount of exposure, ensuring a fairer comparison, but the risk in this setup is that it requires more manual intervention; otherwise, underperforming creative can drag down overall campaign performance.
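To make that trade-off concrete, here is a rough simulation sketch – illustrative only, not any platform’s actual algorithm, and the click-through rates, sample size and decision rule are all assumptions – of how locking in a winner after a few hundred impressions can crown the weaker creative.

```python
# A rough simulation (illustrative only, not any ad platform's actual logic)
# of an "optimized" rotation that picks a winner after a small early sample.
# The click-through rates, sample size and decision rule are assumptions.
import random

TRUE_CTR = {"A": 0.010, "B": 0.012}  # assume creative B is genuinely better
EARLY_SAMPLE = 200                   # impressions per creative before the "decision"
RUNS = 10_000

weaker_wins = 0
for _ in range(RUNS):
    clicks = {
        name: sum(random.random() < ctr for _ in range(EARLY_SAMPLE))
        for name, ctr in TRUE_CTR.items()
    }
    # Greedy rule: the creative that looks better early gets the rest of the budget
    # (ties resolved in A's favour for simplicity).
    if clicks["A"] >= clicks["B"]:
        weaker_wins += 1

print(f"Weaker creative locked in as the 'winner' in {weaker_wins / RUNS:.0%} of runs")
```

With samples this small, the genuinely weaker creative looks like the winner roughly half the time – exactly the kind of random result the rest of the campaign would then be optimized around.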

Using Effective Measurement

However the test is structured, the most important question to answer is how success will be measured. The most common way social and display ads are tested is by click-through rate (CTR), and it’s no surprise why: CTR is a simple, quick calculation that has been an industry standard for decades. But if we accept that clicks on these ads don’t correlate with marketing outcomes, such as sales, awareness or ad recall, why would we structure a test around them? If a few more people accidentally clicked one ad over another, would you want to base future marketing decisions on those results?
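As a quick gut check on that last question, a standard two-proportion z-test – sketched below with made-up click and impression counts – shows how easily a small CTR gap can be nothing more than noise.

```python
# A standard two-proportion z-test for the gap between two click-through rates.
# The click and impression counts below are made up for illustration.
from math import sqrt
from statistics import NormalDist

def ctr_gap_p_value(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided p-value for the difference between two CTRs."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Ad A: 52 clicks on 10,000 impressions; Ad B: 41 clicks on 10,000 impressions.
print(f"{ctr_gap_p_value(52, 10_000, 41, 10_000):.2f}")  # ~0.25 – most likely noise
```

A 0.52% vs 0.41% CTR looks like a 27% lift, but a p-value around 0.25 means a gap that size shows up by chance about one time in four even when the ads perform identically – and none of that says anything about whether clicks were the right measure in the first place.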

Whenever possible, A/B testing should align with campaign and business objectives. If the objective is some kind of online transaction, conversion pixels can help determine which ads performed best, independent of clicks. For video and audio, completion rates can indicate which message resonated best with an audience, especially when the ads are skippable. But sometimes investing in time-consuming and expensive campaign surveys is the only way to truly have confidence in the results.

Make It Meaningful

We get it – your creative team has been staring at that ad at 400% zoom, nudging elements pixel by pixel until everything looks just right, and then someone asks, “Should we say ‘Call Us Today’ or ‘Call Us Now’?” and – eureka! – a test is conceived. Which call to action is going to convert better? So they double their creative output, ship it to your media team, and sit back as the results come in.

But the problem is that your audience is simply not paying that much attention. In fact, Facebook estimates people scroll past an ad in 1.7 seconds on mobile and 2.5 seconds on desktop – so subtleties like a copy change are unlikely to be noticed.

Effective tests are built around macro rather than micro changes. Photo vs illustration. Man vs woman. Blue vs red. Exterior vs interior. These are differences an audience can register instantly, and they are far more likely to produce genuinely different responses.

By structuring a meaningful test and using the right indicators, media insights can help inform marketing strategy and shape more effective campaigns in the future. But as with anything, cutting corners risks making decisions based on irrelevant – and often random – numbers.