Your team has narrowed down the design of your product to two options. They seem like equally good options. You know you must pick one, but both have their pros and cons. What do you do?
You do some A/B testing.
A/B testing is simple in principle. You randomly show each user either version A, which is usually the current product or some other control, or version B, which is the proposed change, and then measure how they respond. This is great for settling which design is more popular with users, since “it measures the actual behavior of your customers under real-world conditions. You can confidently conclude that if version B sells more than version A, then version B is the design you should show all users in the future.” Plus, it’s cheap: running a test takes little more than code on your website that serves the two designs at random to different users (Nielsen).
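The random assignment described above can be sketched in a few lines of Python. This is a minimal illustration, not a production experimentation system: the experiment name and user ids are hypothetical, and hashing the user id is just one common way to keep each user in the same variant across visits while splitting traffic roughly 50/50.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing the user id together with an experiment name means the same
    user always sees the same variant, while different users are split
    roughly evenly between the two designs.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Illustrative only: bucket a batch of made-up user ids and count
# how many land in each variant.
users = [f"user-{i}" for i in range(1000)]
buckets = {"A": 0, "B": 0}
for u in users:
    buckets[assign_variant(u)] += 1
```

Once users are bucketed this way, you would record a conversion event (a sale, a sign-up) per variant and compare the rates; the deterministic hash is what makes the comparison fair, since no user's experience flips between A and B mid-test.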
However, this simplicity comes at a price. “In a true A/B test, only one variable is tested at a time. While the definition of an A/B test has evolved since then to commonly incorporate the testing of 2–4 variables, it’s still a small-scale test” (Patel). A/B testing also requires that each design be fully implemented before users can see it, which limits how many ideas you can test at once.
Nielsen, Jakob. “Putting A/B Testing in Its Place.” Nielsen Norman Group, 15 Aug. 2005, www.nngroup.com/articles/putting-ab-testing-in-its-place/.
Patel, Kristen. “A/B Testing Isn't Dead-It's Limited-and Here's Why.” SmartBug Media, 18 Apr. 2018, www.smartbugmedia.com/blog/a/b-testing-isnt-dead-its-limited-and-heres-why.