Measurement and ROI

How to Accelerate Sales With A/B Testing

For most modern marketers, optimizing content for conversions is certainly top of mind. This is especially true for demand generation marketers where the scoring, ranking, and routing of leads relies solely on the marketer’s ability to convert website visitors into known prospects. If that marketer happens to be a mind-reader, the conversion optimization process is pretty simple. For the rest of us, there’s A/B testing.

A/B Testing: Terminology and the Basics

Before jumping into the “why” and “how” of A/B testing, let’s first ground ourselves in some basic terminology. A/B testing is a form of conversion rate optimization (CRO) that compares two versions of an online asset to determine which variant delivers the greatest performance. Other terms used to describe this methodology include “split testing” and “bucket testing,” but keep in mind that those terms may cover tests with more than two variants (such as an A/B/C/D test).

The goal in A/B testing is to determine whether a single change to a webpage increases or decreases a desired outcome. For example, you might set up an A/B test to measure how the color of your call-to-action button affects the click-through rate on your landing page: send 50% of web traffic to version A (with the red button) and the other 50% to version B (with the green button). In this example, the variation with the higher click-through rate would be named the winner.
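
Mechanically, a test like this is just a consistent 50/50 split plus a pair of counters. Here is a minimal Python sketch of that bucketing logic; the visitor IDs and helper names are hypothetical, not drawn from any particular testing tool:

```python
import hashlib
from collections import defaultdict

def assign_variant(visitor_id: str) -> str:
    """Bucket a visitor into A or B; hashing keeps repeat visits consistent."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

impressions = defaultdict(int)  # page views per variant
clicks = defaultdict(int)       # CTA clicks per variant

def record_impression(visitor_id: str) -> None:
    impressions[assign_variant(visitor_id)] += 1

def record_click(visitor_id: str) -> None:
    clicks[assign_variant(visitor_id)] += 1

def click_through_rate(variant: str) -> float:
    """Clicks divided by impressions for one variant (0.0 if no traffic yet)."""
    return clicks[variant] / impressions[variant] if impressions[variant] else 0.0
```

Hashing the visitor ID, rather than flipping a coin on every page view, matters: a returning visitor who saw the red button yesterday should see the red button today, or your measurements blur together.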

Here is a rundown of the terms and definitions you need to know:

  • Conversion: the behavior that you want your visitors to perform; what you will measure and optimize toward.

  • Conversion Rate Optimization (CRO): a method that entails using analytics and user feedback to improve conversion rates on your website.

  • A/B Testing: a form of CRO; the process of comparing two versions of a web page to see which version performs better.

  • Call-to-Action (CTA): the primary button (or other user interface element) that prompts the user to perform an action that leads to a conversion.

  • Customer Acquisition Cost (CAC): the marketing cost, plus overhead, within a period of time divided by the number of new customers acquired during that same period of time (see the worked example just after this list).
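
To make the CAC definition concrete, here is a quick worked example in Python; all of the figures are hypothetical:

```python
# Worked CAC example (all figures hypothetical).
marketing_spend = 50_000.00   # campaign costs for the quarter
overhead = 10_000.00          # attributable salaries, tools, and services
new_customers = 120           # customers acquired in the same quarter

cac = (marketing_spend + overhead) / new_customers
print(f"CAC = ${cac:,.2f}")   # CAC = $500.00
```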

Why Perform A/B Testing?

Said succinctly, A/B testing maximizes your profits. No matter how data-driven your marketing strategy is, there is always room for improvement when it comes to converting prospects. When performed correctly, A/B testing removes friction from the conversion process, leading to better financial results for you. Further, A/B testing helps you convert the right groups of people, leading to a lower CAC and higher profit margins.

When you think about it, there is almost no reason not to A/B test your online assets. The profitability of your business is directly tied to your conversion rates, and the higher your profit margins, the more you can afford to spend on acquiring new customers. Better still, every test sharpens your picture of which customers to acquire and how. So the only question that remains is how? How does one correctly perform an A/B test? And, better yet, what types of tests should one perform?

Set a Goal & Form a Hypothesis

A/B testing is extremely goal-oriented. Before launching an A/B test, you must first decide on the outcome you hope to achieve. Your goal might sound something like this:

  • Increase the number of free trial signups on my website

  • Drive additional clicks on a paid advertisement

  • Increase open rates on my welcome email

Goal-setting is (usually) the easy part. Forming a hypothesis can be a bit trickier. You might, for example, hypothesize that a different CTA button on your free trial page will increase signups. It sounds simple enough, but how can you be certain that the CTA button is the culprit behind the poor conversion rate to begin with? Fortunately, this particular hypothesis is easily testable, but others may not be so straightforward. The following goals, for example, are difficult to measure through A/B testing:

  • Increase brand awareness

  • Change customer sentiment

  • Modernize my company’s branding

The key takeaway: set clear goals that are quantifiable and measurable.
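
To make a quantifiable hypothesis like the CTA example measurable in practice, you eventually have to ask whether an observed lift is real or just noise. One common approach is a two-proportion z-test; here is a minimal Python sketch with entirely hypothetical numbers:

```python
import math

# Hypothetical results: free-trial signups out of visitors for each variant.
conversions_a, visitors_a = 120, 2400   # original CTA button
conversions_b, visitors_b = 150, 2400   # new CTA button

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)

# Standard error of the difference, assuming no true difference (the null).
se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# |z| > 1.96 is the conventional bar for significance at the 95% level;
# here z comes out around 1.88, so the lift looks promising but not conclusive.
```

Notice that the test only works because “signups” is a countable conversion. A goal like “increase brand awareness” offers no such counter, which is exactly why it resists A/B testing.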

What to Test and How to Test It

When publishing and advertising on LinkedIn, specifically with Sponsored Updates and Direct Sponsored Content, there is some “low-hanging fruit” that you can test and optimize immediately:

The Newsfeed Text

The newsfeed text is the snippet of content that most people read first when they see your update in the newsfeed. In the not-so-distant past, most advertisers viewed this as an opportunity to start a conversation with their audiences, but an alternative approach could lead to a big lift in performance. Consider testing:

  • The presence of a headline vs. no headline at all

  • A casual and conversational tone vs. a professional and informative tone

  • Asking a question vs. making a statement

The Image

The image you select is the primary visual element of your newsfeed update. Getting it right can mean the difference between engaging thousands of new customers and watching those same customers overlook your content and engage with your competitors instead. Consider testing:

  • The presence of text within your images vs. no text

  • Illustration vs. photography

  • The presence of a CTA button vs. no CTA button

The Headline and Link Description

The headline and link description serve as a sales pitch for your content. As LinkedIn members scroll the newsfeed, most are in “skim” mode, looking for clues as to what each post is about. To help your post stand out visually, consider testing:

  • Long copy vs. short copy

  • Sentence case vs. title case

  • Action-oriented vs. thought-provoking

A Word of Caution  

A/B testing is one of the most reliable methods for removing doubt from your hypotheses, but it is not without a downside. For marketers, the most uncomfortable aspect of A/B testing is operating within a margin of error: the smaller your sample size, the wider that margin becomes.

The required sample size applies to both versions of your creative. For example, you would need to serve 500 impressions of version A and 500 impressions of version B (1,000 impressions in total) before your margin of error narrows to roughly 4.5 percent. Said another way, at a 4.5 percent margin of error you can expect the observed result to land within 4.5 percentage points of the true value about 95.5 times out of 100.
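
Those figures line up with the standard margin-of-error formula for a proportion, using the most conservative assumption (a 50% conversion rate) and z ≈ 2, which corresponds to roughly 95.5% confidence. A quick sketch:

```python
import math

def margin_of_error(sample_size: int, p: float = 0.5, z: float = 2.0) -> float:
    """Worst-case margin of error for an observed proportion.

    p = 0.5 is the most conservative assumption; z = 2.0 corresponds to
    roughly 95.5% confidence, matching the figures quoted above.
    """
    return z * math.sqrt(p * (1 - p) / sample_size)

print(f"{margin_of_error(500):.1%}")    # ~4.5% with 500 impressions per version
print(f"{margin_of_error(2000):.1%}")   # ~2.2% with 2,000 impressions per version
```

Note that quadrupling the sample size only halves the margin of error, which is why early test results can swing so dramatically.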

While operating within a margin of error may feel somewhat spooky, take heart in knowing that some data is better than no data at all.

The Final Word

Getting started with A/B testing may seem overwhelming at first, but it couldn’t be more worthwhile. Best case scenario: you’ll improve your conversion rates. Worst case scenario: you’ll learn something new about your audience.

Additional Resources
Case Study: How Reducing on Landing Page Increased Revenue by 19%
Source: Visual Website Optimizer

Guide: How to A/B Test Your Email Campaigns
Source: Campaign Monitor

Guide: The World's Biggest Library of A/B & Multivariate Testing
Source: Which Test Won

To learn more about how you can test and re-test your content to optimize for best results, download The Ultimate Guide to Sponsored Updates.


 
