
# Study Design

We're aiming to simplify the process of creating excellent research wherever you are in your design process, from strategy to implementation.

There is no one right way to do research. We're working to simplify the process by reducing everything to the essentials, without sacrificing the power to let you answer tough questions.

TIP

SoundingBox templates are a great way to jump-start your study design process.

# Define a Goal

The first step is to determine what you want to get out of your research, write it down, and get some buy-in from team members. To determine your goal, first ask: am I doing evaluative or generative research?

  • Evaluative research is what you do when you have something to evaluate, be it a website, website prototype, or app prototype.
  • Generative research is what you do, usually before you have begun to build, to determine your strategic direction.

For evaluative research, discovering usability problems is a common goal, but you can learn many other things too: which features are most engaging, how appealing different visual design alternatives are, or the extent to which users identify with a brand. Here are some examples of common research questions, for both generative and evaluative research.

Some sample generative research goals:

  • What are the motivations, needs, and pain points for people when they think about topic X?
  • What are some of the things people worry about when purchasing a new car?
  • What features and content on competitors' websites resonate most with people?
  • What questions do people have when they shop online for a product or service like ours?
  • What steps do people go through when selecting a product like ours?
  • What are common questions people have when applying for a mortgage?

Some sample evaluative research goals:

  • Do prospects understand what we offer?
  • Are there any problems with our prototype of the new checkout process?
  • Can people see how to get started doing X?
  • Can people find the helpful content we have on the site?
  • Is the content we have engaging?
  • Are our competitors doing something better than we are which we can learn from?
  • Is our proposed solution to a previously identified problem an improvement?
  • We have three visual design mockups: which one is the most appealing, and why?

# Define a Scope

You're likely building something large and complicated with many moving parts. It's tempting to test everything. While we applaud the ambition, it's better to limit your scope. There are a few ways to do this based on the kind of research you do.

# Generative Research

Generative research is all about generating new ideas for things you're planning to build or perhaps for things you don't even know you need to build yet. It's the cornerstone of innovation and inherently open-ended by design. But you will still benefit from defining your scope in a meaningful way. As with evaluative research, be mindful of what participants can handle, and don't try to cram everything into your study. People are busy and will be averse to spending more than 10 or 15 minutes on your study. If what you want to learn would likely require more time, break your study into multiple studies, each with its own well-defined scope.

# Evaluative Research

  • Use existing flows. As part of your design process you're likely coming up with user flows—a series of UI states or steps that people will need to work through. A flow can roughly map to a SoundingBox task.
  • Think about what people can handle. People will spend roughly 10 or 15 minutes on your study. Try to make whatever you want to test (your scope) fit into that window. If you can't, then break your study up into multiple studies.

SoundingBox studies consist of a handful of tasks and questions. Make sure that your tasks and questions, taken together, don't exceed a reasonable scope. Don't forget, now that you're a master researcher, you're comfortable running multiple studies to answer different questions, each with its own scope. Defining a good scope also helps simplify analyzing and communicating results.

# Getting Started

To get started creating your study, simply create an account if you haven't already. Our templates provide quick-starts depending on your research goals. There are sets of templates for both generative strategic research and design evaluation. You also have the option of starting from scratch.

Designing a Study in SoundingBox: get creative while leveraging best practices with our simple yet powerful study creation process.

# Estimating Your Cost

Our pricing is very straightforward. You pay for two things.

  • A monthly subscription that covers data storage and support
  • A per-participant charge that is provided when you create your study

We compute the per-participant charge in the following way:

  • A $5 base cost
  • $5 per screening question that you ask if you choose to screen participants
  • $5 per task you ask participants to do
  • An additional $5 if you choose to do a think-aloud study

THE GIST

The gist of our pricing model: studies that require less targeting and work should cost less than studies with more targeting and more work.
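
The following is a minimal sketch of that arithmetic in Python. The function name and parameters are illustrative only and are not part of any SoundingBox API.

```python
def estimate_per_participant_cost(screening_questions: int,
                                  tasks: int,
                                  think_aloud: bool) -> int:
    """Rough per-participant charge, following the pricing rules above."""
    cost = 5                          # $5 base cost
    cost += 5 * screening_questions   # $5 per screening question
    cost += 5 * tasks                 # $5 per task
    if think_aloud:
        cost += 5                     # extra $5 for a think-aloud study
    return cost

# Example: 2 screening questions, 3 tasks, think-aloud enabled
print(estimate_per_participant_cost(2, 3, True))  # 35 dollars per participant
```

Multiply the per-participant charge by your sample size to estimate the participant portion of a study's cost; the monthly subscription is billed separately.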

# Turnaround Time

Sometimes teams try to put as much stuff as they can think of in a single study. Don't do it! Technically we allow up to 10 tasks in a study, but that doesn't mean you should take advantage of this. Most studies benefit from brevity and a constrained focus. The more narrowly you define your research goal, the more quickly your research will complete. You'll answer your research question and can move on to the next project, usually in a matter of hours, or a few days for larger studies.

# Screening

SoundingBox can provide participants who fit the profile of typical users of your website or product. The process for doing this is what we call screening. You can also supply your own participants. This is rarely necessary, but can be useful if you are looking for a very rare or niche profile.

# How Many People to Test?

Your sample size (how many participants to test) depends on your goals. For example, if you want to compare multiple prototypes or sites in a split test, you may want to test at least thirty people per group to have decent statistical confidence. If your goal is to discover the most significant usability problems on just one site or prototype, a smaller sample would likely suffice. Practical considerations, such as budget, often determine sample size in practice. We've worked hard to reduce participant screening costs for SoundingBox so that it's reasonable to run studies with larger sample sizes.
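
To get a rough feel for why something like thirty people per group gives reasonable precision, here is an illustrative margin-of-error calculation for a simple proportion (for example, task success rate). The numbers and the worst-case p = 0.5 assumption are for illustration only, not SoundingBox guidance.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an observed proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (5, 10, 30, 100):
    print(f"n={n}: +/- {margin_of_error(n):.2f}")
# n=5: +/- 0.44, n=10: +/- 0.31, n=30: +/- 0.18, n=100: +/- 0.10
```

The margin shrinks quickly up to a few dozen participants per group and more slowly after that, which is why modest per-group samples are often a practical sweet spot for split tests.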

# Tasks

Tasks are the activities you ask people to do and are essential to any test. Generative research task prompts often involve asking people to talk through something using either their webcam or the camera on their phone. Evaluative research task prompts can involve presenting either a website or a prototype in the form of a URL. Tasks can take two forms: goal-oriented and open-ended.

TIP

Each prototyping platform has a slightly different mechanism for making it possible to share your prototype with the outside world. Here's a tutorial on working with four of the most popular.

# Goal-Oriented Tasks

Goal-oriented tasks are when you prompt the participant with a specific goal you want them to accomplish. For example, if you were testing an e-commerce website, you might ask them to find a garment under $100 and add it to their cart. This approach is excellent for assessing usability, for testing specific processes such as completing an application, or for making sure that people can find essential processes on your site, such as how to register.

For generative research, an example of a goal-oriented task would be to have people show you the breakfast cereal they recently purchased and talk about why they chose to buy it, or show you their car's dashboard and talk through the things they like and don't like about it.

# Open-Ended Tasks

Open-ended tasks are when you prompt the participant to do something but without a specific end goal in mind. A basic evaluative research example is to ask participants to explore a website and try things that interest them. You may give them a little direction while still keeping it open-ended. For example, you could have people explore freely but ask them to stay on a specific topic. Open-ended approaches are useful for getting a sense of what's engaging or interesting to people. They can also be a good way to start a test and let people get more comfortable with a site or prototype before asking them to try some goal-oriented tasks.

# Don't Lead Participants

A good task prompt is one that doesn't lead the participant. Not leading means you're not putting words into people's mouths. Let them tell you what they think. Don't tell them what they should think. It's a baseline research skill. This is also one of the most powerful things about open-ended tasks: you're willing to let the chips fall where they may, and in so doing, you're likely to discover something new.

Goal-oriented tasks should be based on a similar idea. Even though you're giving someone a goal ("find something that appeals to you and buy it") don't lead participants by telling them how to do it ("click the add-to-cart button"). This way you're letting the UI do its work all by itself.

# Post-Task Questions

Asking a participant questions after their task is another study building block. Questions, like scales, let you measure things. It's worth thinking about why you'd want to measure things at all. It’s true that often the insights from research come from simply observing what people do and listening to what they say. But measuring is valuable for a few reasons.

# Accelerate Analysis

It can be quite time-consuming to watch a lot of research recordings. Measurements give you a more objective way to sift through them. For an evaluative study, you might choose to start by watching the sessions where people gave the poorest ratings to the prototype and try to find out why. Or you might look at the sessions of those who gave it the best scores to see what's working well. Data blocks are the SoundingBox feature that makes this possible.
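
As a hypothetical illustration of this triage approach, you could sort sessions by a post-task rating and watch the lowest-rated recordings first. The session records below are made up for the example and are not a SoundingBox export format.

```python
# Made-up session records; fields are illustrative only.
sessions = [
    {"participant": "P1", "ease_rating": 6, "recording": "p1.mp4"},
    {"participant": "P2", "ease_rating": 2, "recording": "p2.mp4"},
    {"participant": "P3", "ease_rating": 4, "recording": "p3.mp4"},
]

# Review the lowest-rated sessions first to find out what went wrong.
for session in sorted(sessions, key=lambda s: s["ease_rating"]):
    print(session["participant"], session["ease_rating"], session["recording"])
```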

# Persuade with Numbers

If you’re presenting the findings of a test, having some measurements to support your claims can be more compelling than qualitative data alone. Even if you’re working with a small sample size, measuring still provides a more objective way of evaluating how an experience performed.

# Compare and Track Performance

Let’s say you run a user test on a prototype, glean insights from the qualitative data, re-design the prototype and then test it again. How do you determine if the design improved? Without taking measurements in both tests, it can be hard to say objectively if a design improved.

# What to Measure

The type of goal you’ve chosen can help determine what to measure. For example, if you want to get a read on the usability of your site, you will want to ask questions that measure ease of use, how successful participants felt attempting the tasks, and what kinds of problems they encountered. If you’re unsure about how to ask these questions, SoundingBox provides both templates and pre-made questions designed to measure different aspects of experience.

# Combine Qualitative and Quantitative

When choosing what to measure, it’s often a good idea to ask a mix of quantitative and qualitative questions. Quantitative questions (like scales) can tell you how well your design performs. Qualitative questions (like open-ended text questions) help you understand why. The SoundingBox dashboard helps connect your quantitative data with the qualitative through data blocks.
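
Here is a sketch of what a mixed post-task question set might look like. The question wording and data structure are illustrative only, not SoundingBox's built-in question format.

```python
# Illustrative mix of quantitative and qualitative post-task questions.
post_task_questions = [
    {
        "type": "scale",       # quantitative: how well did the design perform?
        "prompt": "How easy or difficult was this task?",
        "scale": (1, 7),       # 1 = very difficult, 7 = very easy
    },
    {
        "type": "open_ended",  # qualitative: why did it perform that way?
        "prompt": "What, if anything, made this task difficult?",
    },
]

for question in post_task_questions:
    print(question["type"], "-", question["prompt"])
```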

# What to Test

When you don't have something to show participants, chances are you want a camera task, where you have participants record a short video of themselves talking through a topic or showing you aspects of their environment. There are three types of things you can test when you have something to show participants:

  • Live websites - Websites that are currently built out. Can include any publicly available URL, including competitors' sites. There's no need to install any software on your server. On desktop, participants only need their web browser.
  • Website prototypes - A prototype version of a website. Can be static or interactive, built using any prototyping platform (InVision, Figma, Axure, etc.).
  • Mobile app prototypes - Prototypes of apps that will run on a touch device such as Apple iOS or Android, usually created using InVision or another type of prototyping tool.

Learn more about the kinds of things you can research by exploring our templates.

# Go Deeper

If you haven't already, check out some of our other how-tos. They provide additional details on how you can create studies to address different research goals.

Comparative and competitive tests are based on the split test study architecture.