# Split Tests

Split tests make it possible to see which version of a site, prototype or app people prefer.

Traditional user tests are great for discovering usability problems. Do people have trouble finding the download button? Do they gravitate toward a given navigation item when asked to complete a task? A basic user test will answer these questions and head off major problems before they reach production. This is why user testing in general is such an effective method: it's essential for avoiding huge, sometimes hard-to-detect design mistakes.

But what about subtler questions, like choosing between alternative design treatments? What if you have two slightly different page layouts or navigation designs? The variations are endless, and exploring them is what design teams do. Split testing gives you a research option for these explorations.

Split testing in SoundingBox is a test architecture, meaning it's a framework that underlies three test types. Each test type addresses a different kind of research question.

The three test types backed by the split test architecture are:

  • Balanced comparisons - Ask the same group of participants to compare multiple versions of the same prototype and draw their own conclusions about what they prefer and why.
  • Prototype A/B tests - Divide participants evenly into groups with the same demographic mix and have each group review one of the versions under test.
  • Competitive tests - Like prototype A/B tests, divide participants into groups and have each group review one competitor's version (usually a website).

There's one thing all split tests have in common: SoundingBox's powerful grouping feature. Grouping is the "split" in split tests, and it's what makes each test type possible. The grouping mechanism works slightly differently for balanced comparisons than it does for prototype A/B and competitive tests, which have more in common with each other. Here's how to picture the difference.

# Test types compared
Balanced comparisons ask the same set of people to compare multiple versions themselves. Here, groups are used to rotate the order of the versions to reduce bias. In contrast, prototype A/B tests and competitive tests divide participants evenly into groups and have each group review a single version. Here, groups handle splitting the sites or prototypes among the participants.

With a balanced comparison you're letting people make the comparison themselves and using groups to reduce ordering bias. With prototype A/B and competitive tests you're letting the data you collect, in the form of survey responses, tell the story of which version participants preferred.
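To make the two grouping strategies concrete, here's a minimal sketch in Python. This is not SoundingBox's implementation, and the function names are ours: the first function rotates the order in which one group of participants sees every version (the counterbalancing behind a balanced comparison), while the second deals participants round-robin into one group per version (the even split behind prototype A/B and competitive tests).

```python
from itertools import cycle

def balanced_comparison_orders(versions, n_participants):
    """Within-subjects: every participant sees every version,
    but the starting point rotates to reduce ordering bias."""
    orders = []
    for i in range(n_participants):
        start = i % len(versions)  # rotate by one per participant
        orders.append(versions[start:] + versions[:start])
    return orders

def split_into_groups(versions, participants):
    """Between-subjects: deal participants evenly into one group
    per version; each group reviews only its own version."""
    groups = {v: [] for v in versions}
    for participant, version in zip(participants, cycle(versions)):
        groups[version].append(participant)
    return groups

versions = ["Layout A", "Layout B"]
print(balanced_comparison_orders(versions, 4))
# [['Layout A', 'Layout B'], ['Layout B', 'Layout A'],
#  ['Layout A', 'Layout B'], ['Layout B', 'Layout A']]
print(split_into_groups(versions, ["p1", "p2", "p3", "p4"]))
# {'Layout A': ['p1', 'p3'], 'Layout B': ['p2', 'p4']}
```

The round-robin split above ignores demographics to keep the sketch short; in a real prototype A/B or competitive test, each group would also get the same demographic mix, as described earlier.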

NERD ALERT

If you're a methodology nerd, a balanced comparison test is a "within-subjects" study design, while prototype A/B and competitive tests are "between-subjects" study designs.

# How to choose your test type

Which test type you choose can depend on a few factors (the short sketch after this list restates the rule of thumb in code).

  • If the differences you're trying to compare are easy to see, and people will be able to articulate thoughts about them, then a balanced comparison might work fine.
  • If the differences between versions are subtle (think a nav label change or a slight difference in design treatment), then a prototype A/B test might be your go-to.
  • If you're formulating strategy before starting a redesign, then a competitive test is a great way to learn what works and what to improve upon.
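Spelled out as code, that rule of thumb might look like the hypothetical helper below. This is purely illustrative; the function and parameter names are ours, not part of SoundingBox.

```python
def suggest_test_type(comparing_competitors: bool, differences_obvious: bool) -> str:
    """Map the checklist above to a suggested split test type."""
    if comparing_competitors:
        # Pre-redesign strategy research across competitor sites.
        return "competitive test"
    if differences_obvious:
        # People can see the differences and compare versions themselves.
        return "balanced comparison"
    # Subtle differences: split the audience and let the data decide.
    return "prototype A/B test"
```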

Let common sense be your guide, and remember that no research project is perfect. The real mistake is delaying research because the process seems too complicated. We aim to change that! Ask us anything on chat and we'll get you going in the right direction for your project.