Testing 1-2-3 – Email Thursday

A/B split testing has been on the radar of email marketers for a while. At its most basic, A/B split testing involves dividing a given subscriber population into two or more groups. The marketer then assigns a baseline control factor to one group and different test factors to the others.
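As a sketch of that split, assuming nothing more than a plain list of subscriber addresses (the function and variable names here are illustrative, not from any particular email platform), a random assignment into control and test groups might look like this:

```python
import random

def ab_split(subscribers, seed=42):
    """Randomly divide a subscriber list into a control group
    and a test group of (nearly) equal size."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = subscribers[:]   # copy so the original list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical subscriber list.
subscribers = [f"user{i}@example.com" for i in range(10)]
control, test = ab_split(subscribers)
print(len(control), len(test))  # 5 5
```

Randomizing the assignment matters: if you split alphabetically or by signup date instead, the two groups may differ in ways that have nothing to do with the factor you are testing.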

I liken A/B split testing to these jellybeans. At heart, the jellybeans are exactly the same except for one primary variable: flavor. Likewise, each flavor of jellybean could represent a factor to be tested or tested against.

If executed correctly, A/B split testing should yield a wealth of information about your subscribers' preferences, information that could ultimately lead to stronger brand resonance and increased revenue.
If implemented incorrectly, it could yield misleading conclusions about your customers.
To help you execute this type of research successfully, here are three fundamental factors to consider when planning your A/B split tests.

The first fundamental is specificity.

Construct your test around a specific question, and define your test variables against that particular question.

For example, testing to assess conversion rates when the background color of the email newsletter changes from lavender to “some” color is not specific. It is better to test conversion rates when the background color changes specifically from lavender to purple.

It would also be helpful to frame your tests with more specific questions, such as, “How does changing the background color from lavender to purple impact the number of people who click on an offer link?” With a question like this, you are testing specific causes and effects. When you ask specific questions, you are more likely to get specific answers.
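To make that cause-and-effect question concrete, here is a minimal sketch of measuring the effect, clicks on the offer link, for each background color. The counts are hypothetical, and the assumption is simply that you can pull sends and clicks per variant from your email platform's reporting:

```python
def click_rate(clicks, sends):
    """Click-through rate for one variant of the broadcast."""
    return clicks / sends if sends else 0.0

# Hypothetical counts from one broadcast, one variant per group.
lavender = {"sends": 5000, "clicks": 150}   # control: lavender background
purple   = {"sends": 5000, "clicks": 190}   # test: purple background

lavender_rate = click_rate(lavender["clicks"], lavender["sends"])
purple_rate = click_rate(purple["clicks"], purple["sends"])
print(f"lavender: {lavender_rate:.1%}, purple: {purple_rate:.1%}")
# lavender: 3.0%, purple: 3.8%
```

Because only the background color differs between the two variants, any difference in click rate can be attributed to that one change rather than to some mix of factors.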

The second fundamental is simplicity.

Test one factor at a time. Many marketers fall prey to testing multiple variables at once, which makes it difficult to attribute a given conclusion to the correct cause, much like the dead-end knot illustrated above.
When one cause speaks, you want to listen to one effect.
To conclude accurately with one cause and one effect, test a single test factor against a single control factor.

The third fundamental is consistency.

Test often, but test consistently. Be sure to collect sufficient data to support or refute your hypothesis; after all, A/B split testing is a statistical endeavor.
Allow at least three broadcasts with the given test and control scenarios before arriving at a conclusion. That way, your conclusion is derived from baseline data, and you minimize the chance of basing it on an anomaly or exception.
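Since this is a statistical endeavor, one common way to check whether results pooled across several broadcasts actually support a conclusion is a two-proportion z-test. This is a standard technique, not one the post itself prescribes, and the counts below are hypothetical; the sketch uses only the Python standard library:

```python
import math

def two_proportion_z(clicks_a, sends_a, clicks_b, sends_b):
    """Two-proportion z-test on click rates.
    Returns (z statistic, two-sided p-value)."""
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical counts pooled across three broadcasts (control vs. test).
z, p = two_proportion_z(clicks_a=450, sends_a=15000,
                        clicks_b=560, sends_b=15000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference between control and test is unlikely to be an anomaly; with only one broadcast's worth of data, the same difference might not clear that bar, which is the point of waiting for at least three.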