SoundingBox studies enable measurement and comparison. The mixer brings it all together.
Some research questions are foundational; others are more complicated. We aim to let you handle the basics easily so you can move on to the hard questions that create true product design breakthroughs.
# The Basics
We're going to go out on a limb here. Finding usability problems is relatively easy. If you run a proper usability test using the right tools (SoundingBox comes to mind), usability problems will almost jump out of the screen at you. Dang! People couldn't find the button! People didn't understand our animation!
That's the beauty of the method. You put someone in front of a computer and ask them to try something. It's pretty clear when they can't do it. There's that sinking feeling. And then there's the rejoicing. Well, at least we found that one before it was too late!
Saying usability problems are easy to detect doesn't mean they're easy to see without testing. Testing is a hedge against a basic cognitive bias: our own mistakes are the hardest ones for us to see.
That's why usability testing is a bedrock method for any high-functioning product team. Ignore it at your peril. AI isn't going to fix it for you. Neither is A/B testing.
# Blind Spots
If products are generally getting "better" from routine user testing, why is so much tech still so f-ed up? We think it's because we have some blind spots. Here's a short—but not at all exhaustive—list of things we have a hard time seeing:
- We're not caring enough about how tech makes people feel.
- We're not able to see how a product (or experience) exists in the context of other products.
- We're not comfortable with the beautiful mess that is real life.
# Mixer Vision
If these blind spots are the problem, the mixer is at least a start at a solution. The basic idea: if detecting usability problems is easy and seeing other, deeper things is hard, how can we build a tool that makes the hard things easier, or at least begins to bring them into focus?
# Mixer Core Concepts
- Anything you've measured should be loadable into the same view
- It should be easy to jump from the high-level view to the detail view in a meaningful way
- Numbers wherever possible should be comparable
- Numbers should never be the end, but always lead to something deeper and more meaningful
Each bubble is an average of what you've measured, with all comparable measures side-by-side, no matter the study.
The mixer relies on some of the same core technology that data blocks do, to make meaning out of numbers over time and across measures.
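The core mechanic can be sketched in a few lines: load comparable measures from multiple studies and average each one into a "bubble," side by side. This is a hypothetical illustration only — the study names, measure names, and data below are invented, and this is not the actual SoundingBox mixer API.

```python
# Hypothetical sketch of the mixer's core idea: one "bubble" per
# (study, measure) pair, averaged and laid side by side for comparison.
# All names and numbers here are invented for illustration.
from statistics import mean

# Two studies, each with per-participant scores for comparable measures.
studies = {
    "baseline_2022": {"task_success": [1, 0, 0, 1], "felt_delight": [3, 3, 4, 2]},
    "redesign_2023": {"task_success": [1, 0, 1, 1], "felt_delight": [4, 5, 3, 4]},
}

def mix(studies):
    """Average every measure in every study so the numbers are comparable."""
    return {
        study: {measure: mean(scores) for measure, scores in data.items()}
        for study, data in studies.items()
    }

for study, bubbles in mix(studies).items():
    print(study, bubbles)
```

Because every bubble is an average of the same kind of measure, the numbers line up across studies — which is what lets you ask "did this improve over time?" at a glance.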
# Practical Mixology
So that's the vision—and while the mixer is a work in progress, the core concepts are there. Load any two or more studies at once, and immediately answer questions like:
- Has this task that I've tested improved over time?
- Did the redesign make people feel better, or did we drop off a cliff?
- Did nothing change significantly over time?
- How does this task compare with other tasks that I've measured?
- Is our product resonating at a higher level than our competitors? If not, why?