A Life Of Testing, Investigating and Learning
At Digital Workroom we’re firm believers in making business and product decisions that are backed by data. We’re constantly trying new ideas, then tweaking and tinkering with them until we’re happy. A lot of the time our ideas start from a simple hypothesis:
‘What if we add X? Do we think it might help improve Y?’
When it comes to implementing a new idea, it often costs a lot in time and resources, and there’s nothing scarier than shipping a change based on an untested theory. That’s why, whenever possible, we turn to A/B testing before we commit anything to full production.
#### Which design is more popular, can you guess?
Purchase Screen (A)
Purchase Screen (B)
What is A/B testing?
In an A/B test, one version (A) serves as the control or baseline, while the other version (B) includes the specific change being tested. Users are randomly assigned to either version, and their interactions and behaviors are tracked and analyzed. Statistical methods are then used to determine whether the observed differences in performance are statistically significant or just due to chance.
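To make that concrete, here’s a minimal sketch (not our production setup) of the two mechanics just described: deterministically bucketing users into variant A or B, and checking whether a difference in conversion counts is statistically significant using a standard two-proportion z-test. All function names and numbers here are hypothetical.

```python
import hashlib
import math

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into A or B by hashing their ID,
    so the same user always sees the same variant on every visit."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates
    (standard two-proportion z-test, standard library only)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical counts: 480/10,000 conversions on A vs 530/10,000 on B.
print(f"p-value: {two_proportion_z_test(480, 10_000, 530, 10_000):.3f}")
# ~0.11, so this difference could still be down to chance at the 5% level.
```

Hashing the user ID, rather than picking at random on each visit, keeps the experience consistent: a user never flips between variants mid-test.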
Reasons for A/B testing:
Almost any idea can be A/B tested: whether you’re improving your conversion rates by finding out if a specific change between versions leads to more sales, or simply driving usage of certain areas of your product through design changes and better signposting.
At its core, though, A/B testing lets you validate theoretical improvements against real-world data, avoiding the costly situation where an idea is implemented only to be reverted because it ends up hurting performance.
Purchase Screen Optimisation Case Study:
In our case, we were looking to improve the rate at which users subscribed on our purchase screen. Our hypothesis was that there wasn’t enough information to inform and reassure users in their purchase decision, so we set out to expand the descriptions of the listed features on the original purchase screen while keeping design changes to a minimum.
During the initial phase of the test, early data suggested that the new purchase screen was performing marginally better.
However, as time went on, the original started to outperform the new design in terms of the number of users triggering a purchase event.
After some investigation into the other available metrics, we found that although the new design was not converting as many users as the original, the users who saw the new design and went on to subscribe were generating more revenue overall: when choosing between the different subscription plans, they were signing up to the more expensive ones.
After 20 days of testing, the results were consistent with the trends recorded at the 12-day mark: the original purchase screen was still converting more users into starting a subscription, but in terms of revenue generated, the new design was set to outperform the original.
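To illustrate why we looked beyond conversion rate, here’s a small sketch with entirely hypothetical figures showing the shape of what we saw: a variant can convert slightly worse yet still win on revenue per visitor if its subscribers choose more expensive plans.

```python
# Entirely hypothetical figures, for illustration only.
visitors    = {"A": 10_000, "B": 10_000}
subscribers = {"A": 520, "B": 480}                # B converts slightly worse...
revenue     = {"A": 520 * 30.0, "B": 480 * 38.0}  # ...but on pricier plans

for v in ("A", "B"):
    conversion = subscribers[v] / visitors[v]
    rpv = revenue[v] / visitors[v]  # revenue per visitor, the deciding metric
    print(f"{v}: conversion {conversion:.2%}, revenue per visitor {rpv:.2f}")

# A: conversion 5.20%, revenue per visitor 1.56
# B: conversion 4.80%, revenue per visitor 1.82  <- B wins commercially
```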
Conclusion:
After analysing all of the testing data, the team decided to fully roll out the new purchase screen despite its slightly lower conversion rate, because the data showed that the changes had led to a meaningful improvement in commercial performance. If we had stuck with the original analysis of only measuring the volume of purchase events and the conversion rate, the final decision would have been very different.
It is important to note that conversion rate is typically the standard metric that a lot of testing centres around. However, as this case study demonstrates, it is not always the only metric we should be monitoring. In this case, conversion rate did not tell the whole story, and it was only after carrying out a deeper investigation that we were able to see the true commercial impact of the newly designed purchase screen.
Looking to also make improvements to your product? What will you be testing next?