Judging Visual Design

July 20, 2004

Evaluating an interaction design is often quite straightforward: was the participant able to accomplish the task? How long did it take, and how many errors were made along the way? These data points provide a reliable quantitative basis for measuring the effectiveness of an interaction design solution. However, when it comes time to evaluate the visual design, qualitative analysis tends to take over.
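
To make those measures concrete, here is a minimal sketch, assuming a hypothetical log of test sessions (the Session record and summarize helper are illustrative, not from any particular testing tool), of how task completion, time on task, and error counts reduce to comparable numbers:

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class Session:
    """One participant's attempt at a task (hypothetical record format)."""
    participant: str
    completed: bool   # did the participant accomplish the task?
    seconds: float    # time on task
    errors: int       # errors made along the way


def summarize(sessions: list[Session]) -> dict[str, float]:
    """Reduce raw session logs to the three classic quantitative measures."""
    return {
        "completion_rate": mean(1.0 if s.completed else 0.0 for s in sessions),
        "mean_time_s": mean(s.seconds for s in sessions),
        "mean_errors": mean(s.errors for s in sessions),
    }


print(summarize([
    Session("P1", True, 42.0, 0),
    Session("P2", True, 95.5, 2),
    Session("P3", False, 120.0, 4),
]))
# -> completion rate 0.67, mean time 85.8 s, mean errors 2
```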

In a recent CHI 2004 paper, the MSN User Experience Team outlined the methods they used to gather “structured user input on the visual design” of a product. These included design mark-up, a semantic design-description task, a statement rating task, a semantic desirability group card sort task, and a modified focus group discussion. Each of these methods relied on qualitative data from participants. After all, isn’t visual design highly subjective?

In The Media Equation, Byron Reeves and Clifford Nass found that “people respond socially and naturally to media even though they believe it is not reasonable to do so.” Had Reeves and Nass relied on qualitative data alone, they would have reached a very different conclusion: the vast majority of people will vehemently disagree when asked if computers have feelings. Yet The Media Equation’s quantitative studies showed that “all people automatically and unconsciously responded socially to media.” Donald Norman makes a similar point in Emotional Design, explaining that we “react emotionally to a situation before we assess it cognitively.”

These insights suggest that quantitative data may be a more meaningful measure of visual design than qualitative analysis. Judging the effectiveness of visual designs by what participants accomplish (and how they accomplish it) could allow us to evaluate the subconscious processing of visual information that shapes user behavior. Or we could continue asking “does this Web site look ‘smart’ or ‘cutting edge’ to you?”
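
Concretely, evaluating visual design this way means comparing visual treatments by behavior rather than by adjectives. A sketch with invented numbers (two hypothetical treatments of the same page, times in seconds):

```python
from statistics import mean

# Hypothetical times on the same task under two visual treatments
# of one page; participants and values are invented for illustration.
design_a_times = [42.0, 95.5, 61.2, 58.9, 71.0]
design_b_times = [39.4, 50.1, 44.8, 66.3, 47.7]

# If one treatment's visuals better support the task, the difference
# shows up here, before anyone is asked how the page "looks".
for name, times in (("A", design_a_times), ("B", design_b_times)):
    print(f"Design {name}: mean time on task {mean(times):.1f}s")
```

A real study would, of course, need enough participants and a significance test before trusting such a difference.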