One of my earliest customers was an educational institution. They brought me in to build a culture of testing and to see if I could improve their conversion rate and help them enrol more students.

I was able to double their conversion rate with a single test.

How?

I go big with my tests.

Before I jump into why, let's talk about A/B testing in general.

The usual advice given around A/B testing is that you should always be testing new ideas to try and increase the conversion rate of your website.

Often, people will mention that simply changing the colour of a button can increase your conversion rate. That's technically true: a tiny change might move the needle.

If you constantly test small changes, the theory goes, you'll end up with steadily higher conversion rates.

That theory would be sound except for a small truth that people don't want to talk about:

Most tests fail.

The vast majority of the time, the changes you make will lower your conversion rate rather than increase it.

Google and Bing report test "success" rates of around 10% to 20%. Airbnb's is roughly 8%.

This means that if you are constantly testing small changes, the vast majority of what you try will lower your conversion rate rather than increase it.

Back to my original point: you need to test big things. A winning test has to deliver enough upside to make up for the eight or nine tests that failed around it. The colour of a button is not a big thing.
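To make that concrete, here's a back-of-the-envelope sketch in TypeScript. Every number in it is a made-up assumption (test count, win rate, effort, win sizes), purely to show the shape of the maths:

```typescript
// Back-of-the-envelope: expected payoff of a testing programme.
// All numbers are illustrative assumptions, not real data.

const testsRun = 10;     // tests you run in, say, a quarter
const winRate = 0.1;     // ~10% of tests "win", in line with the Google/Bing figures
const costPerTest = 1;   // effort of designing, building and analysing one test

// If each win only nudges conversion by a little, the programme loses:
const smallWinValue = 3; // a button-colour-sized win
const smallProgramme = testsRun * winRate * smallWinValue - testsRun * costPerTest;
console.log(smallProgramme); // 10 * 0.1 * 3 - 10 = -7: you paid for 10 tests, got back 3

// A win has to be big enough to pay for the ~9 losers alongside it:
const bigWinValue = 20;  // a mobile-landing-page-sized win
const bigProgramme = testsRun * winRate * bigWinValue - testsRun * costPerTest;
console.log(bigProgramme);   // 10 * 0.1 * 20 - 10 = +10: one big win carries the programme
```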

Back to my original story: what did I test, and how did I get there?

Well, the first thing I did was look at their analytics.

This was a very long time ago, so what I am about to say may seem obvious, but back then it was not.

In the analytics, I noticed about 50% of the traffic was coming from mobile devices rather than desktop. That was unusual, because back then desktop was the norm.

I decided to visit the website on my phone and... it was awful. They hadn't considered the mobile experience at all, so almost nothing worked. That meant half the traffic couldn't actually convert.

My first test? A landing page that worked on mobile. It had the same information and same (stupid, really long) form.

Almost perfectly linearly, doubling the share of traffic that COULD fill out the form at all doubled the conversion rate of the page.
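Here's the arithmetic, with illustrative numbers rather than the client's real figures:

```typescript
// Illustrative numbers only: why fixing mobile doubled overall conversion.
const visitors = 1000;
const mobileShare = 0.5;     // ~50% of traffic was mobile (from the analytics)
const formConversion = 0.1;  // assumed rate among people who CAN use the form

// Before: the broken mobile page meant only desktop visitors could convert.
const before = visitors * (1 - mobileShare) * formConversion; // 50 conversions
console.log(before / visitors); // 0.05 → 5% overall conversion

// After: mobile visitors convert at the same rate as desktop.
const after = visitors * formConversion; // 100 conversions
console.log(after / visitors);  // 0.10 → 10% overall: double, almost linearly
```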

The process that worked is a loop: dig into the data, find the biggest problem or opportunity, test a change big enough to matter, learn from the result, and repeat.

You need to run this loop constantly. Not just with landing pages, but with your emails, your sales calls, your paid ads, your social posts and everything else.

A test loop, and the habit of running it constantly, will almost guarantee success with everything your startup does. No more throwing time or money at things people say will work; instead, you prove what does work and refine from there.

The problem with the colour of the button is that no one talks about the why. There are very few learnings in a colour. You need to be scientific about it and take a learning from every test. That's the other value you get from all this testing.

Even if the conversion rate doesn't increase, you can pinpoint the WHY, which can inform future tests.

Some important points

Use a tool to split traffic

Don't do that thing where you make a new design, swap the home page over to it, and then compare conversion rates before and after. It's just not accurate.

Before-and-after analysis is heavily affected by outside factors such as ad campaigns, PR, seasonality and a million other things.

You need to split the traffic 50/50 and see which side gets a better conversion rate.

I won't tell you which tool to use (I used to suggest Google Optimize, but Google sunset it), but make sure it does a proper split.
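For the curious, a "proper split" boils down to something like this sketch: hash a stable visitor ID so each visitor is deterministically assigned to the same side every time, with assignment 50/50 across visitors. The function names here are my own, and a real tool adds exposure logging and much more:

```typescript
// Minimal sketch of deterministic 50/50 bucketing.

function hashString(s: string): number {
  // FNV-1a: a simple, well-distributed non-cryptographic hash.
  let h = 2166136261;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0; // force unsigned 32-bit
}

function assignVariant(visitorId: string, experiment: string): "control" | "variant" {
  // Salt with the experiment name so buckets don't correlate across tests.
  return hashString(`${experiment}:${visitorId}`) % 2 === 0 ? "control" : "variant";
}

// The same visitor always gets the same answer for the same experiment.
console.log(assignVariant("visitor-123", "mobile-landing-page"));
```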

Look out for flashing

One issue A/B testing tools can have is that they show the "original" site for a split second before replacing it with the variant being tested. This flicker can be an awful experience for users. Make sure your tool doesn't do that.
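If you want to see what the fix looks like under the hood, most tools ship some version of the following anti-flicker trick: hide the page until the variant is applied, with a hard timeout so a slow script can never leave the page blank. This is a sketch, not any particular tool's snippet, and `applyVariant` is a hypothetical stand-in for your tool's script:

```typescript
// Hypothetical: provided by your A/B testing tool.
declare function applyVariant(): Promise<void>;

// Hide the page before the original content can paint.
const style = document.createElement("style");
style.id = "anti-flicker";
style.textContent = "body { opacity: 0 !important; }";
document.head.appendChild(style);

function reveal(): void {
  document.getElementById("anti-flicker")?.remove();
}

// Safety net: never hide the page longer than ~400ms.
const timeout = window.setTimeout(reveal, 400);

// Once the variant is applied, reveal the page immediately.
applyVariant().then(() => {
  window.clearTimeout(timeout);
  reveal();
});
```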