Split testing is a great way to find out what actually works as a promotional and marketing strategy for your business. The A/B split method can be used to test anything from web copy and sales pages to emails and search ads.
A/B testing gives your marketing efforts direction, effectiveness and measurable success. Once you know which elements work for your promotions, you can combine them and leverage your marketing efforts to make a huge difference.
Before running an A/B test, it is important to figure out what needs to be tested: whether it is on-site testing, or off-site testing such as ads or a sales email.
Next, you have to list all the variables that need to be tested. Of course, the whole anatomy of the email can be tested.
Here are a few parameters that can be checked:
- Text used
- The colour of the button and the space used
- Subject line
- Body text
- Closing text
- Any offer
It’s a process, and before any final decision is made, multiple A/B tests should be carried out.
You must test the A and B options against each other to see which one is doing better. In addition, you have to run both options over the same period of time.
You can't test one variable today and the other tomorrow, because then it's impossible to tell whether the difference came from the variable or from the timing. Instead, split the traffic and show both variations at the same time.
Before you send the final version of the email it is vital to decide what parameters you will consider as successful. It is also important to peek into your previous results to make your campaign a complete success.
You’ll have to pull all the data together and gather your numbers. You might want to increase the conversion rate or the open rate. Either way, you will have to see which version actually does the work and brings improvement.
If specific support is not available in your campaign software then it can be configured manually. All you need to do is split your email list into two separate lists. Send one version using one list and the other using another list. Once done, compare the results for both lists manually by bringing the data on a spreadsheet.
You have finished the email campaign with two different email versions. Now it’s time to analyze the results, and you’ll be looking at these factors:
- Open rate
- Click-through rate, and
- Conversion rate once customers visit your website
You have to keep the message sent in the email consistent with the message delivered on your website. You are going to lose customers if you offer a deal in your email campaign that is not available on your website. The same goes if your email doesn’t look and feel coherent with your website: it leaves customers unsure whether they have landed on the correct site.
It is not just the click-throughs but the conversions that you want. It is, therefore, important to track the conversion rate of each email so you don’t lose out on sales. An email may get a lot of click-throughs without necessarily getting conversions. You’ll have to run more tests to find a version of the email that not only gets a higher click-through rate but also results in conversions.
Rules of A/B testing that you need to follow
1. Knowing the What and Why of Your Testing
The simple question you should ask yourself before proceeding with your campaign is – What is it that you want to improve about your email? And the “what” usually finds an answer in one of two metrics:
Open Rate
This tells you the percentage of subscribers who open a particular email. It is obtained by dividing the number of emails opened by the number of emails sent minus the number of emails bounced — a bounce means the subscriber’s email address is no longer valid.
Click-Through Rate or Click Rate
This stat tells you the number of subscribers who clicked on a link in the body of your email to visit your site. It too is expressed as a percentage and is obtained by dividing the number of subscribers who clicked a link by the number of emails sent minus the number of emails bounced.
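The two metrics above can be sketched in a few lines of Python. The campaign numbers here are hypothetical, purely for illustration:

```python
# Hypothetical campaign numbers, for illustration only.
emails_sent = 10_000
emails_bounced = 250   # invalid addresses, excluded from both rates
emails_opened = 2_145
link_clicks = 430

delivered = emails_sent - emails_bounced  # sent minus bounced

open_rate = emails_opened / delivered
click_through_rate = link_clicks / delivered

print(f"Open rate: {open_rate:.1%}")                    # 22.0%
print(f"Click-through rate: {click_through_rate:.1%}")  # 4.4%
```

Whatever tool you use should report the same two numbers; computing them yourself is mainly useful when you run the manual spreadsheet comparison described above.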
2. Concentrate on Frequently Sent Emails
Ideally, A/B testing works best for frequently sent emails like blog newsletters, new user onboarding emails, and error alerts.
Email services and clients change frequently, which makes it difficult to know whether something that worked once will work again. Frequent emails also address the other big challenge of A/B testing: they give you a sample size large enough to draw correct inferences.
3. The List needs to be Split Randomly
It is important to split your email subscriber list randomly whether your email marketing software offers A/B testing or it’s a manual test you want to set up.
You can either download the list as a CSV and randomly sort it using Excel, or simply arrange it in alphabetical order with spreadsheet software and then split it from there. Fortunately, most email marketing apps are efficient enough to handle this step for you.
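If you do end up splitting the list yourself, a random shuffle followed by a halfway cut is all it takes. A minimal sketch — the `split_list` helper and the example addresses are hypothetical:

```python
import random

def split_list(subscribers, seed=None):
    """Randomly split a subscriber list into two (near-)equal halves."""
    shuffled = subscribers[:]              # copy, so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # a fixed seed makes the split reproducible
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical subscriber list.
emails = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b = split_list(emails, seed=42)
print(len(group_a), len(group_b))  # 500 500
```

Shuffling before the cut is what makes the split random; sorting alphabetically and cutting in half, as mentioned above, can accidentally group subscribers by domain or signup batch.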
4. Get Ready to Make Bold Changes
In general, getting a large enough sample size is the real struggle with A/B testing. But there is a simple remedy: the larger the difference in effect between emails, the smaller the sample size can be. It is, therefore, important to test highly divergent variants.
5. Test Just Two Variants
Sample size is a great challenge, and we want to avoid anything that inflates the number of participants required. Keep to just two variants unless you are consistently testing over 100,000 subscribers in each variant.
6. Wait for 4-5 Days
As a good rule of thumb, wait about 4-5 days to see the actual effect of your campaign. An email’s effect declines sharply around day 4 to 5 after being sent, and an email that hasn’t generated any response after 5 days is unlikely to produce one.
7. Check the Results for being Statistically Significant
You send an A/B test to some 65,000 subscribers and the results come in: 5,400 opened version “A” and 5,500 opened version “B”. That makes B the winner! But it would be wrong to call “B” the winner just yet. It is important to check whether the results are statistically significant, using one of the many free calculators available online.
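Instead of an online calculator, you can run the check yourself. Below is a sketch of a standard two-proportion z-test using only Python’s standard library, applied to the 65,000-subscriber example above (the even 32,500/32,500 split is an assumption for illustration):

```python
from math import erf, sqrt

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-tailed z-test for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 5,400 opens of "A" vs 5,500 opens of "B", 32,500 recipients each.
z, p = two_proportion_z_test(5_400, 32_500, 5_500, 32_500)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these numbers the p-value comes out well above 0.05, so the 100-open gap between A and B is not statistically significant — which is exactly why declaring B the winner would be the wrong call.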
Best Practices to Follow when Conducting an A/B Test
Test Early and Test Often – It is important to run tests as early as possible when considering a new promotional technique or launching a new product. The website needs to be optimized quickly so you don’t lose out on sales.
Always Test Simultaneously – It is important to test both variations at the same time to prevent distorted results based on timing.
Run Tests on New Visitors Only – To avoid skewed results, never use existing customers, who will notice the changes on your website and may behave differently because of it.
Listen to the Results – Go by the facts in the empirical data rather than your instincts. If there is any doubt, run the test again; that’s why you are running a controlled test.
Allow the Test to Run for Sufficient Time – More errors can occur if tests are cut off early, and the same is true if they run for too long. It is best to let the results accumulate over a few days before drawing any conclusions.
Run Site-Wide Tests Where Appropriate – Make sure that tests are done on every page if a call-to-action or a headline that appears on multiple pages is being tested.
Make Sure Repeat Visitors See the Same Variation – Until the test is complete, repeat visitors should see the same variation they saw on their previous visit. This is especially important if you’re testing different offers, not just different wording.
Ensure it’s a Meaningful Effect – It is important to understand whether a change affects the aspect of the business you wanted to improve. It is possible to achieve statistical significance while there is no real effect on the bottom line. Acting on a meaningful effect might mean creating new landing pages or improving the UX on the existing ones.
Follow the Results – Once the calculator says “yes”, your results are statistically significant. With divergent variants you might learn a good deal about email marketing. It is recommended to replicate the experiment to check that it’s repeatable.
Test A/B Again – Customer behaviour is ever changing, and your A/B testing should keep pace. If you get an increase in click-through rate, your next challenge should be to bring a spike in open rate with a new subject line or message preview.