What I learned from user testing
Apr 16, 2015
I'm currently leading the redesign of the website for a leading national foundation at Constructive. Along with my coworker Quinn McRorie and with help from UX researcher Thomas Wendt, I carried out qualitative testing with 14 participants in three rounds. Aside from the insights specific to this project, which I won't discuss, here are some general things I learned from the process.
Our instincts are sound. I found that our empathy-based instincts as UX designers were often validated by the research. Whenever we had a hunch that users would be confused and require more direction and more context, we found this to be true. People want clear titles and blurbs over their calls to action. When they see groupings they wonder why they're grouped. When an item is highlighted with respect to others, they want to know why it's different.
Our instincts could have gone farther. We were surprised by how often users were stumped by terminology that we tend to use all the time. The word "featured," as in "featured highlights" and "featured articles," was a salient one: it tended to baffle people, who wondered out loud what the criteria were for things being featured.
Even specialized audiences dread the wall of text. Our testers were drawn from very specialized audiences in education and the arts, and included high-ranking administrators. Even they were clamoring for bulleted takeaways and TL;DR highlights. It seems that the supply of information always outpaces demand.
People have problems with Twitter sharing. The demographic for this particular site isn't big on Twitter, but even users who are on it had a complaint I've experienced too: people who use both an organizational account and a personal one find one-click Twitter sharing pointless because they often need to switch between accounts. As a result, they don't use the sharing functionality. Twitter lets me easily switch between accounts on my iPhone; why not do the same on the web app?
People aren't keen on staff recommendations. Staff-curated recommendations (for content, but could also apply to products) were perceived as not being very useful. Users did not seem to trust the ability of other humans to suggest relevant items. I've personally found this to be true as well—I often find suggestions based on other users' behavior (such as Amazon's "customers also bought...") much more relevant than staff-generated suggestions.
I have a bias towards comprehensive, navigation-based browsing. Call me OCD, but I've grown wary of site search engines on all but the largest sites, and tend not to trust them to yield the comprehensive results that I seek. Not so our testers: most of them turn primarily to keyword search.