Creating great experiences is a crucial goal of most companies today, but few have a process for measuring and tracking how they're doing. SoundingBox's approach to success tracking can help you keep your team on track and demonstrate the value of what you're doing.
As more and more organizations embrace the value of design, teams need a way to track how their efforts unfold over time. Consider the following scenario: you're making significant changes to business-critical designs roughly every quarter. You're investing in UX by having UX designers and researchers on staff. You deploy significant programming resources for build-outs. You're tracking basic metrics through web analytics, and you're of course tracking financial metrics.
What you're likely not tracking, despite your heavy investments in customer experience, is the impact of your efforts on actual or potential customers. Enter SoundingBox success tracking.
How a SoundingBox success tracking test works
Like a usability test, a SoundingBox success tracking test defines a set of tasks and questions that participants complete. But unlike a usability test, more attention needs to be paid to upfront planning, because you're going to run this test, or this set of tests, more than once, and ideally over months, quarters, and years.
There are a handful of SoundingBox components that make success tracking possible:
- Question and category name normalization and re-use
- Question groups enabling measure roll-ups into trackable groups
- Data normalization in general
- The ability to see and add benchmarks at any point
- Saving studies as templates
Creating your success tracking test
Create an account if you haven't already. Create a new study and select either experience2 or basic usability test as your test type. If you're planning to add competitors or multiple internal product sites to the mix, an experience2 test is the way to go. Like other SoundingBox studies, success tracking tests consist of tasks (things we ask people to do) and questions (things we ask people about how they felt after completing an activity).
Working with names
Being able to track how you're doing means measuring the same things at more than one point in time. One key ingredient for measuring the same thing is to ask the same survey question after the participant has completed the same task. For example, you might want to ask how easy they felt the site was to use after completing a critical process like adding an item to a shopping cart or completing a cost calculator widget. Every survey question (and task) in SoundingBox has a name property. The only thing you need to do to compare results over time is to give the items you want to compare the same name.
Putting a little thought into your names will go a long way toward making your success tracking effort, well, successful!
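To see why consistent names matter, here's a minimal sketch of comparing two snapshots keyed by question name. The question names and scores are hypothetical, and the dictionaries stand in for whatever form your exported results actually take; only questions that share a name in both snapshots can be lined up.

```python
# Hypothetical normalized scores (0-100) from two quarterly snapshots,
# keyed by question name. Consistent names make the join possible.
q1 = {"ease-of-use": 68.0, "cart-confidence": 54.0}
q2 = {"ease-of-use": 74.0, "cart-confidence": 61.0, "calculator-clarity": 70.0}

# Only names present in both snapshots can be compared over time.
for name in sorted(q1.keys() & q2.keys()):
    delta = q2[name] - q1[name]
    print(f"{name}: {q1[name]:.0f} -> {q2[name]:.0f} ({delta:+.0f})")
```

Note that `calculator-clarity` drops out of the comparison entirely: a question renamed (or newly added) between snapshots simply can't be tracked, which is why settling on names early pays off.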
Question groups are another feature you can leverage to help you track and compare things over time. Say you have a group of questions, all intended to measure someone's likelihood to convert. Wouldn't it be nice to combine these into a group so that you could track them all together and summarize them into a single number? That's where question groups come in.
Most questions can be added to a group by clicking on the question in the study creation process and selecting the group in a dropdown. You can create your groups by navigating to your settings and clicking on My Groups.
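Conceptually, a group roll-up is just a summary of its member questions' scores. The sketch below assumes a simple unweighted mean of 0-100 normalized scores; the group name and question names are made up, and SoundingBox's actual roll-up logic may differ.

```python
from statistics import mean

def group_score(scores: dict[str, float]) -> float:
    """Summarize a group of 0-100 normalized question scores into one number.

    Assumes a simple unweighted mean; this is an illustration, not
    SoundingBox's actual roll-up formula.
    """
    return mean(scores.values())

# Hypothetical "likelihood to convert" group with three member questions.
likelihood_to_convert = {
    "trust": 72.0,
    "price-clarity": 64.0,
    "checkout-ease": 80.0,
}
print(round(group_score(likelihood_to_convert), 1))  # 72.0
```

The single roll-up number is what makes a group trackable: you can chart one "likelihood to convert" line per quarter instead of three separate question lines.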
Making use of our data normalization
One of the reasons it can be hard to track things over time is that the measures you took at one time are either different from, or on a different scale than, the measures you're taking now. SoundingBox controls for this by baking in certain forms of normalization. For example, all scale questions are summarized on a 100-point scale, no matter how many stops the scale has, making them easy to track and compare over time.
Adding and making use of benchmarks
Benchmarks (other measures you can compare with) are a key ingredient to your success tracking efforts. Benchmarks come in two flavors:
- Benchmarks that are internal to your own study in the form of competitors in a competitive test
- Benchmarks that come from SoundingBox itself for a given measure
Success tracking without benchmarks can be a little weak. Comparing only to yourself is better than not comparing at all, but compare yourself to yourself and a competitor or two, and you've increased the likelihood that you're going to learn something interesting.
Every organization is different. Start with our templates, but ultimately define your questions and tasks as they apply to your business. As things change, add new questions, or drop in a new competitor.
A note on sample size
Since success tracking tests usually involve asking people for their opinions about an experience (how they felt), having an adequate sample size can be vital to making claims about your data. You can remain agile and not break the bank with around 30 participants per group or competitor if you choose an experience2 test. If you want greater certainty about your results, you're welcome to make your sample size more substantial, and many customers do.
Thinking aloud and success tracking tests
If you have a lot of competitors, including them may increase the value of your success tracking test. When the total number of participants grows beyond 30 or 40, the importance of having participant audio begins to diminish, since you won't have the time or stamina to watch and listen to every session. Opting not to capture sound also reduces the per-response cost, making it easier to reach a sample size you can generalize from.
Comparing your study snapshots in the dashboard
Since we're talking about success tracking, chances are you'll next want to load a previous success tracking test to compare with your current test. Find the test in the study tile, and load it up along with an earlier success tracking study. With the two studies loaded, you can now click on the Comparison tab. Each success tracking study will be shown side-by-side. Did you move the needle?
Read more about our analysis dashboard.
Getting to the "why"
All of this is just a starting point. You can see where the site currently landed for each thing you've measured, but you still need to come up with an explanation for your scores so you can share the story with your colleagues. As with other SoundingBox tests, that's where the open-ended responses and the replays come in. Often participants will give you clues about their feelings by telling you about them in the open-ended (free text) responses you've asked them to provide. You can find other clues by replaying their interactions. Did they encounter usability problems, or react negatively in verbal comments as they interacted?
You'll find replays in the Replay tab, and you'll find open-ended text responses in the Grid view. Remember, clicking on any tile or data point in the dashboard will sort replays by that measure, making it easy to prioritize which responses to watch first.
Success tracking task design strategies
With usability-style tests there are two general types of tasks to consider: open tasks, which ask people to explore with little prompting, and closed tasks, which give people clear goals. Each has its value. For success tracking tests, open tasks can sometimes be the most revealing, since an open task lets each site define the experience.
Read more about open and closed tasks.