Discover Your Gaps and Set Strategy

Competitive analysis doesn't have to be a dark art. With SoundingBox, any team can discover opportunities for improvement or set strategic direction.

Traditionally, competitive analysis is conducted by someone who combs through the competition's online offerings and builds a big spreadsheet enumerating features. While not without value, this kind of analysis can't tell you much about how real people experience competitors' digital offerings. It won't tell you how your look-and-feel compares, or how well different approaches to content or other aspects of the experience work relative to the competition. For all the emphasis on being a design-oriented organization, there are few tools that help you see how you're actually doing.

A SoundingBox experience2 competitive test makes your competitive landscape your laboratory. It gives you a framework for identifying how subtle aspects of experience design work for real people in a competitive context. To put it another way: competitors' sites are out there and they're public, so why not use them to answer tough questions like these?

  • How does our look-and-feel or brand message perform? Are competitors' better?
  • How engaging is our content compared to our competitors'?
  • Are there features or other interactions we haven't thought of that we should build out to enhance the experience?

How a SoundingBox competitive test works

Like a prototype A/B test, a SoundingBox competitive test works by dividing your participants into groups, usually composed of the same demographic mix. The system automatically allocates each group to one competitor site. Each group then completes the same task or set of tasks on its assigned site, and each group answers the same set of questions after finishing.
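
To picture the mechanics, here is a minimal sketch of that kind of split in Python. The participant list, site URLs, and function name are invented for illustration; SoundingBox performs this allocation for you automatically.

    # Illustrative sketch only: SoundingBox handles this allocation on its own.
    # Participants are dealt into equal groups, each group is assigned to one
    # competitor site, and every group gets the same tasks and questions.

    def assign_groups(participants, competitor_urls):
        """Deal participants round-robin into one group per competitor site."""
        groups = {url: [] for url in competitor_urls}
        for i, person in enumerate(participants):
            groups[competitor_urls[i % len(competitor_urls)]].append(person)
        return groups

    participants = [f"p{n}" for n in range(90)]   # hypothetical panel of 90 people
    sites = [
        "https://yoursite.example",
        "https://competitor-a.example",
        "https://competitor-b.example",
    ]

    for url, group in assign_groups(participants, sites).items():
        print(url, len(group))                    # 30 participants per site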

In a competitive test, participants aren't asked to detect differences between the designs on their own; each participant interacts with only one site. The numbers tell the story instead. You can say our website is n% less engaging than the competition. You can say our content is n% more informative. And since your industry is unique, you have complete control over the metrics you choose to quantify and compare.
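
As a rough illustration of where a claim like "n% less engaging" comes from, assume each group answered the same engagement question on a numeric scale. The ratings below are made up, and your own metrics and scales will differ.

    # Hypothetical ratings on a 1-7 engagement scale, one list per group/site.
    from statistics import mean

    ratings = {
        "our_site":     [4, 5, 3, 4, 4, 5, 3, 4],
        "competitor_a": [6, 5, 6, 5, 6, 6, 5, 6],
    }

    ours, theirs = mean(ratings["our_site"]), mean(ratings["competitor_a"])
    pct_diff = (theirs - ours) / theirs * 100
    print(f"Our site scores {pct_diff:.0f}% lower on engagement than competitor A")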

Getting benchmarking numbers is just the beginning. Qualitative data helps flesh out the narrative. You can go back to your team and present empirical results: our site falls short of the competition in these ways. This feature resonated with people on competitor X. Our look-and-feel needs an update.

Creating your first competitive test

Create an account if you haven't already. Then create a new study and select experience2 as your test type. Experience2 tests are like other SoundingBox studies in that they're made up of tasks (activities we ask people to do) and questions (which ask people how they felt after completing their tasks).

Setting up your groups

Partitioning your responses into groups happens when you set up your tasks. Tasks in an experience2 test have additional properties that let you enter multiple group URLs per task. On the backend, we take care of the rest: if you're asking screening questions and choose to get an even mix of participants, the same mix of participants will interact with each task group URL, giving you the apples-to-apples comparison you need.
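
If it helps to picture what an even mix looks like, the sketch below balances a hypothetical screening segment across two group URLs. The segment names and URLs are invented, and this is only an illustration of the idea; SoundingBox does the balancing on the backend.

    # Illustration only: spread each screening segment evenly across group URLs
    # so every competitor site sees the same mix of participants.
    from itertools import cycle

    participants = (
        [{"id": f"p{n}", "segment": "frequent shopper"} for n in range(30)]
        + [{"id": f"q{n}", "segment": "first-time visitor"} for n in range(30)]
    )
    group_urls = ["https://yoursite.example", "https://competitor-a.example"]

    groups = {url: [] for url in group_urls}
    for segment in {p["segment"] for p in participants}:
        rotation = cycle(group_urls)
        for person in (p for p in participants if p["segment"] == segment):
            groups[next(rotation)].append(person)

    for url, members in groups.items():
        shoppers = sum(m["segment"] == "frequent shopper" for m in members)
        print(url, len(members), "participants,", shoppers, "frequent shoppers")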

A note on sample size

Since experience2 competitive tests usually involve asking people for their opinion of an experience, an adequate sample size is vital to making claims about your data. You can stay agile and avoid breaking the bank with around 30 participants per group (that is, per competitor). If you want greater certainty about your results, you can make your sample size more substantial, and many customers do.
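
For a back-of-the-envelope sense of why roughly 30 per group is a workable floor, the sketch below computes a 95% confidence interval around one group's mean rating using a normal approximation. The ratings are simulated, not real study data.

    # Rough sketch: how tight is a group mean with ~30 simulated responses?
    import random
    from statistics import mean, stdev

    random.seed(1)
    n = 30
    ratings = [random.randint(3, 7) for _ in range(n)]  # simulated 1-7 scale answers

    m, s = mean(ratings), stdev(ratings)
    margin = 1.96 * s / n ** 0.5                        # 95% CI, normal approximation
    print(f"group mean {m:.2f} ± {margin:.2f}")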

Thinking aloud and competitive tests

To be most meaningful, we recommend including at least three competitors in your competitive test; if you have a lot of competitors, including more will increase the value of the test. When the total number of participants grows beyond 30 or 40, participant audio becomes less important, since you may not have the time or stamina to watch and listen to every session. Opting not to capture sound also reduces the per-response cost, making it easier to reach a sample size you can generalize from.

Analyzing your results

Once your results are in, click Analyze and load your study by clicking its study tile. You'll notice right away that each competitive site loads to the right of your study, and clicking a competitor tile loads more tiles, each summarizing one of the questions you asked. At a glance, you can see which competitor did best by looking at the tile called Overall and toggling between competitors.

Next, try clicking the Comparison tab. Here you'll see a chart showing the same summary data for each competitor, with one dot per competitor site. If you've iterated and have prior competitive tests you'd like to compare, load them and they'll appear in this view alongside your current study, showing how much things have changed between iterations.

Read more about our analysis dashboard.

Getting to the "why"

All of this is just a starting point. You can see which competitor won overall, and which won on each thing you measured, but you still need to come up with reasons why they won so you can tell the story to your colleagues. That's where the open-ended responses and the replays come in. Participants will often give you clues about their feelings in the open-ended (free text) responses you've asked them to provide. You can find other signals by replaying their interactions: did they encounter usability problems, or react negatively in their open-ended responses?

You'll find replays in the Replay tab and open-ended text responses in the Grid view. Remember that clicking any tile or data point in the dashboard sorts replays by that measure, making it easy to prioritize which responses to watch first.

Competitive test design strategies

With usability-style tests there are two general types of tasks to consider: open tasks, which ask people to explore with little prompting, and closed tasks, which give people clear goals. Each has its value. For competitive tests, open tasks can sometimes be the most revealing, because each site gets to define the experience: if a site prioritizes certain experiences over others, that will likely shape the outcome and its scores. With closed tasks, you as the researcher determine the outcome to some extent, because you're choosing what people do; if one site happens to prioritize features that fall within the task you've defined, it will likely do better.

In a perfect world, you would always start with an open task test to get the cleanest read possible, and then, as needed, focus on specific closed tasks if they are critical to your strategy.

Read more about open and closed tasks.