It’s human nature to want things to go your way: dropping hints about birthday presents, avoiding certain topics, or, as has been well documented, companies commissioning surveys tailored to produce the results they want. On a more basic level, we subconsciously (and sometimes consciously) try to influence variables to get the results we want. This could be something as simple as asking a question a certain way or setting up a test to deliver the statistics you’re hoping for.
So, when it comes to experimentation, how do you safeguard yourself against the perils of wishful thinking and hidden biases? First, it is important to remember that great teams don’t run experiments to prove they are right; they run them to answer questions. Keeping this in mind will help when designing an experiment to test your new feature or code.
It all starts with a handful of core principles in the design, execution and analysis of experiments, principles proven by teams that run enormous numbers of experiments, sometimes thousands every month. Following these principles increases the chances of learning something truly useful, reduces wasted time and avoids leading the team in an unproductive direction due to false signals.
The Harvard Business Review has noted that between 80 and 90 percent of shipped features have a negative or neutral impact on the metrics they were designed to improve. The issue is that if you ship these features without experimentation, you may never notice that they aren’t moving the needle. That means that while you feel accomplished for releasing the features, you haven’t actually accomplished anything.
Another issue to look out for when it comes to metrics is HiPPO syndrome. HiPPO stands for Highest Paid Person’s Opinion. The acronym, first popularized by Avinash Kaushik, refers to the most senior person in the room imposing their opinions on the company; their presence can sway decision making and stifle the ideas presented during meetings. This has a negative effect on the design and, ultimately, on the metrics.
Right now, you may be thinking, “But things like A/B testing replace or diminish design.” This could not be further from the truth. Good design always comes first, and design is always included. In fact, product managers have teams of designers, coders and testers who put a great deal of effort into setting up an experiment around a new design. A/B testing is simply an integral tool that tells you whether end users did what you wanted them to do, which is the measure of the design’s success.
But if the metrics look better than they did before the feature was released, then what’s the problem? The problem is that it’s pretty easy to be fooled. False signals are a very real part of analyzing metrics, and basing your decisions on them can waste a lot of time, energy and money.
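As a concrete illustration of how easy it is to be fooled, here is a minimal sketch (the numbers are hypothetical, not from the article) of a two-proportion z-test, one common way to check whether an apparent lift in a conversion metric between an A/B test's control and variant is statistically distinguishable from noise:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test whether variant B's conversion rate differs
    significantly from control A's."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical result: 52/500 conversions on control, 61/500 on the variant.
# The variant *looks* better (12.2% vs 10.4%)...
z, p = two_proportion_z_test(52, 500, 61, 500)
print(f"z = {z:.2f}, p = {p:.3f}")
# ...but the p-value is well above 0.05, so this "lift" could
# easily be a false signal from random variation.
```

Shipping the variant on numbers like these would be exactly the trap the article describes: a metric that moved for no reason other than chance.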
The best way to avoid these traps is to remember four things: users are the final arbiters of your design decisions; experimentation lets you watch your users vote with their actions; it is important to know what you are testing and to invest time in choosing the right metrics; and, most importantly, any well-designed, well-implemented and well-analyzed experiment is a successful experiment.
The post How statistics can lead to a successful experiment appeared first on SD Times.