Competitive testing can be done directly or indirectly. Indirect competitive research casts a wide net that can more accurately capture how people experience sites in the wild.
More than ever, customer expectations are shaped by experiences on other sites. That means your competitors aren't just other companies in your vertical anymore. People compare your site to the sites where they spend the most time.
Their expectations are also set by the sites they visit along the way. If a typical journey spans five to fifteen sites, that's a lot of sites potentially shaping how customers experience yours.
Competitive testing isn't easy, though. Many organizations set up a spreadsheet comparing features to set priorities for the next design cycle. Most look at where people come from and where they go next in analytics or performance tools. Some subscribe to syndicated research for trend data. But testing against competitors is still tricky.
One approach is to test competitors directly: a single research participant interacts with two or three sites in one session. The order of the sites is rotated to reduce sequence bias, and at the end participants are asked to draw their own comparisons.
Direct competitive testing is a great way to get informal feedback on a small number of sites, but it isn't cost-effective at larger scale.
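Rotating site order in direct testing is usually done with a Latin square, so each site appears in each position equally often across participants. The article doesn't specify the scheme, so this is a minimal sketch assuming a simple cyclic Latin square:

```python
def rotation_orders(sites):
    """Build a cyclic Latin square of presentation orders.

    Returns len(sites) orderings; across them, every site appears
    exactly once in each position, spreading out order effects.
    (Assumed rotation scheme; the article only says sites are rotated.)
    """
    n = len(sites)
    return [[sites[(row + offset) % n] for offset in range(n)]
            for row in range(n)]

# Assign each participant the next order in the cycle:
orders = rotation_orders(["site-a", "site-b", "site-c"])
for participant, order in enumerate(orders):
    print(f"Participant {participant + 1}: {order}")
```

With more participants than orders, you simply cycle through the list again, keeping positions balanced.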
Another way to test your site in the context of more sites is indirect competitive testing. This lets you test:
- Direct competitors
- Indirect competitors
- Top sites
- Sites along the same journey
In this type of research, each participant sees only one site, performs the same tasks, and answers the same questions in the same order. The sites are then compared on the behavioral and attitudinal measures collected in the study.
It's called indirect competitive testing because participants don't draw the comparisons; you do. This reduces bias and makes it easier to compare more sites. But this kind of research has been difficult to run: setting up multiple duplicate studies across several sites and then pulling the results into a spreadsheet is labor-intensive, and tracking results over time is nearly impossible.
SoundingBox was purpose-built for just this type of research. You can select a framework (HEART, SUS, REVERB) for a study or develop your own questions. Once you select the sites, the study launches automatically, managing quotas for each site. After the data is collected, you can compare just two sites or the whole set, on a single measure or across larger groups and categories.
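A framework like SUS works well here because it reduces each session to a single 0–100 score that can be compared across sites. As a sketch (the standard SUS formula, with a hypothetical `results` data shape mapping each site to its participants' raw 1–5 responses):

```python
def sus_score(responses):
    """Standard System Usability Scale score (0-100) from ten 1-5
    Likert responses: odd-numbered items contribute (score - 1),
    even-numbered items contribute (5 - score), summed and scaled by 2.5."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

def compare_sites(results):
    """Mean SUS score per site, given site -> list of per-participant
    response lists (hypothetical shape for study export data)."""
    return {site: sum(sus_score(r) for r in sessions) / len(sessions)
            for site, sessions in results.items()}
```

With every site scored on the same scale, ranking direct competitors, indirect competitors, and journey sites side by side becomes a one-line dictionary sort.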
Competitive testing can be challenging, but the results have a big impact on strategy, helping teams set priorities, measure success, and keep improving.