# Analyzing Results
We aim to simplify and streamline the process of analyzing UX data so you can get to insights faster and plan your next iteration.
Analyzing the results of a user test can be a slow and tedious process if you follow the standard approach: watch all of the recordings, take notes, and then try to tease out patterns. But this process can be streamlined if you've designed good task prompts and taken measurements by asking survey questions like our scale question. When you measure things, we can show you at a glance how your test went, letting you jump straight to, for example, the participants who had the most trouble. We do this through something we call data blocks.
Clicking on a data block instantly loads the participant session replays, sorted from worst to best experience, so you can start to understand what's behind the quantitative scores.
Data blocks are your jumping off point to explore what people liked the most or the least.
Different types of measurements help answer different questions. For example, you can see which problems people encountered most often, or which parts of the experience people found most engaging.
# Each test type is different
Each test type calls for a subtly different approach to analysis, but whatever the test type, the work is the same: jumping from the high-level summary data to the details of what happened, and eventually arriving at the "why"—the moment when you have an insight.
For more ideas about approaching the data for each test type, see that test type's doc page.
Once you've analyzed your results, you'll come away with insights about how to improve your prototype or site. After you implement those changes, you'll want to test again to see whether they led to measurable improvements. SoundingBox makes it easy to re-run your test by letting you save any test definition as a template. With each iteration, you can learn something new about how to improve the experience on your site or prototype.
Learn more about iterating here.