Following the Defining the Problem series on Functioning Form, several designers asked how they should go about reframing problems with clients. How could they shift the conversation from an analysis of specific solutions to a broader discussion that better defined the problem they were trying to solve? Perhaps the best way to illustrate such a process is with an example.
Several years ago, I was called in to help redesign the registration process for a large European e-commerce site. The company had put together two options for a new registration flow. Both were compiled by engineering and product management teams and incorporated “best practices” from competing sites. I was tasked with determining which option would work best and addressing any usability issues either one might have.
Rather than dive into a heuristic evaluation of the two redesign options, I began researching the problem this redesign was trying to address. It is, after all, hard to assess the quality of a solution without a thorough understanding of the problem you are trying to solve.
First, I asked the development teams to pull any data they had about registration performance to date: completion rates, where people dropped off, when they called customer support, what information they filled in, and so on. Up to that point, the redesign had progressed without this kind of information.
Second, I gathered the product management team into a meeting to discuss their business goals for the redesign. We quickly identified the impact of competitor features, product marketing goals, trust and safety issues, and more. Lastly, I compiled an analysis of needs from the customer perspective: what did people want to accomplish and why, and what benefits did registration offer them?
From the site usage data, user needs, and business goals I had compiled, I put together an evaluation matrix that better defined the registration problem. Each solution listed in the matrix was evaluated against the criteria below (a rough sketch of how such a matrix might be tallied follows the list):
- Completion rates: were any aspects of the redesign likely to increase or decrease completion rates? Potential to decrease completion was a negative mark for a design solution. The likelihood of changes to completion rates was estimated from live-to-site data.
- Market landscape: did each redesign option provide competitive leverage for the company? The company was pursuing the redesign principally because a close competitor had recently made well-received changes to its registration process. Negative marks went to solutions that didn’t support features popular on competing sites.
- Product marketing: did the redesign afford an opportunity to market effectively to customers? If a solution did not give the marketing team a compelling set of benefits to sell to customers, it got a negative mark.
- User complexity: did the redesign increase complexity for users? Design solutions that made things more complicated for users (after registration completion) got negative marks.
- Trust & safety: did the redesign make users and/or the company safer? Did it increase or decrease perceived credibility? Solutions that increased the potential for fraud or spam or decreased credibility got negative marks.
This evaluation matrix quickly became the standard by which each proposed solution was judged. As a result, we ended up exploring alternatives to the two original options, and the company ultimately settled on one of these alternatives because it better fit our new criteria for success.
Had we not gone through the process of better defining the problem, one of the two original options might have been implemented and would probably have failed to meet the company’s goals.