A few weeks ago I asked the question “What did we learn?” and discussed the importance of understanding why a test did or did not become a winner. Often you have done the analysis and the test makes sense, yet you are left with no impact in your test data, or even a losing experience. This is why it’s so important to track the different response points I’ve previously discussed.
This is also the point at which you decide: do we regroup and try this again or do we drop it and move on?
Let’s look at an example:
Let’s say you test adding a feature to your category pages that helps customers filter their results, and you see no change in overall conversion rate or revenue. Let’s also say that only 10% of traffic to these pages used the filter. However, the conversion rate for the 10% that did use the filter was 8X the control.
If the filter worked but simply wasn’t prominent enough to be used frequently, you can drive more users to it by removing competing elements, then measure the results.
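To make the arithmetic in this example concrete, here is a minimal sketch of that segment breakdown in Python. Every count below is hypothetical, chosen only to match the scenario described above (a roughly flat overall conversion rate, 10% filter usage, and an 8X segment conversion rate):

```python
# Hypothetical visit and conversion counts -- these are illustrative,
# not real test data.
segments = {
    "used_filter": {"visits": 1_000, "conversions": 160},
    "no_filter":   {"visits": 9_000, "conversions": 45},
}
control = {"visits": 10_000, "conversions": 200}

control_cr = control["conversions"] / control["visits"]  # 2.0%
total_visits = sum(s["visits"] for s in segments.values())

# Segment-level view: usage share, conversion rate, and lift vs control
for name, s in segments.items():
    cr = s["conversions"] / s["visits"]
    share = s["visits"] / total_visits
    print(f"{name}: {share:.0%} of traffic, {cr:.1%} CR, {cr / control_cr:.1f}x control")

# Top-line view: the blended rate barely moves, which is why the
# overall numbers alone would hide the filter's segment-level win.
overall_cr = sum(s["conversions"] for s in segments.values()) / total_visits
print(f"Overall: {overall_cr:.1%} vs control {control_cr:.1%}")
```

This is exactly why tracking response points matters: the top-line comparison (2.1% vs 2.0%) looks like a flat test, while the segment breakdown surfaces the 8X result that justifies a more prominent retest.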
Don’t Give Up!
If your analysis is solid and your data tells you the change benefits the customers who actually engage with it, don’t be afraid to make it more prominent and try again.