Once, there was a product team made up of engineers, designers, and product managers who were passionate about user experience. They were redesigning a CTA (call-to-action) button and spent countless hours debating it.
This was the existing button:
There were two options for the redesign:
The team split into two groups based on which option they supported. The group backing Option A argued that the word ‘Now’ creates a sense of urgency and prompts the user to act, and that the slight 3-D effect makes the button stand out from the background, giving it good affordance.
The group backing Option B disagreed vehemently. They argued that the round button with the ‘click’ icon was more than sufficient to tell users where to click. Moreover, with the advent of mobile devices, users have grown accustomed to flat icons and know how to decipher them.
Members of Group B were more articulate and vocal in their support for the round button and eventually they won the debate. So one fine day, they changed the action button to Option B. They also noted the date – it was July 15th.
Now, of course, it was a data-driven team, and they met after two weeks to look at a plot of the conversion rate.
Members of Group B were ecstatic! They looked at the plot and said: “Guys, look at the lift after 15th. Like we said, the round button works!”
Suddenly the digital marketing manager intervened: “Hey, you know what, we started a new campaign on the 15th, and it is converting much better than all the other campaigns so far – that could be the reason. Let’s take a look at conversions excluding the new campaign.”
Now it was Group A who put on a glum face (while brimming with joy internally) and said – “The round button actually reduced conversions. We knew it!”
The business head suddenly intervened: “Guys, hold on! We lost so many potential conversions in the last two weeks because of this experiment, and that is a business loss I am not willing to accept. Let’s not repeat this.”
Now the product team learned a couple of lessons here.
- Do not experiment on the entire traffic. You could end up making business losses with failed experiments.
- Do not make multiple changes at the same time.
The second point was difficult to enforce. There were a lot of moving parts: the marketing team changing campaigns, the content acquisition team adding listings, other product teams shipping new features, seasonal variations in traffic patterns, and so on.
They agreed on a new course of action:
- Sample a small percentage of traffic (say, 10%) at random, at all times.
- Split that traffic into three groups – Control, Group A, and Group B – again at random.
- Show the Control group the current button, Group A Option A, and Group B Option B.
- Measure all relevant metrics for Control, Group A, and Group B.
- Compare the conversions (or any other relevant metric) of Control, Group A, and Group B, and decide whether there is a statistically significant difference.
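The last step – checking whether a difference in conversion rate is statistically significant – is commonly done with a two-proportion z-test. Here is a minimal sketch using only the standard library; the conversion counts are hypothetical, purely for illustration:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test whether two groups' conversion rates differ significantly.

    conv_a, conv_b: number of conversions in each group
    n_a, n_b: number of visitors in each group
    Returns (z statistic, two-sided p-value).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: Control converts 200 of 10,000 visitors,
# Option B converts 260 of 10,000.
z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below 0.05 is a common (if somewhat arbitrary) threshold for calling the difference significant rather than noise.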
The key to this is random sampling, which ensures that even as other teams make many changes to the product (website), those changes are evenly distributed across all the experiment groups.
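In practice, a common way to implement this kind of random-but-stable assignment is to hash each user's id: the hash is effectively random across users, yet the same user always lands in the same group. This is a sketch of that approach (the experiment name and 10% sample rate are illustrative assumptions, not from the story):

```python
import hashlib

SAMPLE_PCT = 10  # percentage of traffic that enters the experiment
VARIANTS = ["control", "A", "B"]

def assign_variant(user_id: str, experiment: str = "cta-button"):
    """Deterministically bucket a user for an experiment.

    ~10% of users enter the experiment, split evenly across
    Control, Group A, and Group B; the rest (None) see the
    current button. Hashing (experiment, user_id) keeps each
    user's assignment stable across visits.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    if bucket >= SAMPLE_PCT:
        return None  # not sampled into the experiment
    return VARIANTS[bucket % len(VARIANTS)]
```

Because assignment depends only on the user id and experiment name, no per-user state needs to be stored, and different experiments (different names) get independent random splits.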
This method is known as A/B testing, and it is one of the most useful tools in a variety of disciplines, including product management. A/B testing is a randomised experiment that can guard against the correlation-implies-causation fallacy.
However, A/B testing will not tell you why one variant is winning.
Experimenting and A/B testing add to the development effort, but there are many tools available to make it easier.
The cost of not discovering a better option can be much more than the cost of experimenting.