Imagine you’re in charge of a product’s redesign. Your team is reviewing the latest analytics: “10,000 engaged with the order form, but 3,000 dropped off at the second stage. Any ideas?” Everyone turns to the designer, expecting them to swoop in with an “Aha!” moment of brilliance that pulls everything out of the fire.
Does this sound familiar?
It’s never that easy. Quantitative data tells you what is happening, but not why it’s happening. Understanding why users are abandoning your product at a certain point is what guides you to the best solution.
“You don’t get wins from tracking data. You get wins from understanding why users are behaving a specific way and then making good hypotheses about how to change that behaviour positively”, explains UX expert Laura Klein.
Numbers give you a sense of user behaviour and the scale of the problem (e.g. “x people drop off at the second stage”). To form a valuable hypothesis and iterate on your design, however, you will have to:
- Get closer to how the product is being used.
- Engage with people and talk through their motivations.
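The scale-of-the-problem part is the easy half. As a minimal sketch, here is how a funnel drop-off rate like the one in the opening example might be computed from stage counts; the stage names and numbers are illustrative, not real client data:

```typescript
// Hypothetical funnel: how many users reached each stage of the form.
interface FunnelStage {
  name: string;
  reached: number;
}

// Drop-off rate between consecutive stages: this is what analytics can
// tell you (the "what"), but it says nothing about the "why".
function dropOffRates(
  stages: FunnelStage[]
): { from: string; to: string; rate: number }[] {
  const rates: { from: string; to: string; rate: number }[] = [];
  for (let i = 1; i < stages.length; i++) {
    rates.push({
      from: stages[i - 1].name,
      to: stages[i].name,
      rate: 1 - stages[i].reached / stages[i - 1].reached,
    });
  }
  return rates;
}

const funnel: FunnelStage[] = [
  { name: "order form opened", reached: 10000 },
  { name: "second stage completed", reached: 7000 },
];

// A 30% drop-off between the first and second stage: a place to look,
// not an answer.
const rates = dropOffRates(funnel);
```

The calculation is trivial on purpose: everything after it in this article is about the harder half, finding out why those 3,000 people left.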
Case Study – A “Dynamic” Form
We recently worked with the call centre of a large Fortune 500 enterprise. The call agents talk to customers and fill in a form on their behalf. It’s a complex, dynamic form: different paths can be taken depending on the customer’s answers.
Our client’s call agents talk to hundreds of customers per day. Reducing time and friction means huge savings.
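To make the “dynamic” part concrete, one common way to model such a form is as a graph of questions, where each answer determines the next question to ask. This is an illustrative sketch, not the client’s real schema; the question names and wording are invented:

```typescript
// Each question names the next question to ask, depending on the answer.
// Returning null ends the path.
interface Question {
  id: string;
  prompt: string;
  next: (answer: string) => string | null;
}

const questions: Record<string, Question> = {
  start: {
    id: "start",
    prompt: "Is this order for a business or an individual?",
    next: (a) => (a === "business" ? "vatNumber" : "delivery"),
  },
  vatNumber: {
    id: "vatNumber",
    prompt: "What is the company's VAT number?",
    next: () => "delivery",
  },
  delivery: {
    id: "delivery",
    prompt: "Where should the order be delivered?",
    next: () => null,
  },
};

// Walk the path an agent would take for a given set of answers.
function pathFor(answers: Record<string, string>): string[] {
  const visited: string[] = [];
  let current: string | null = "start";
  while (current) {
    visited.push(current);
    current = questions[current].next(answers[current] ?? "");
  }
  return visited;
}

// A business customer takes the longer path:
// pathFor({ start: "business" }) → ["start", "vatNumber", "delivery"]
```

With hundreds of calls per agent per day, even a small delay on one branch of this graph multiplies into real cost, which is why the analytics below were worth digging into.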
When reviewing the analytics, we noticed that some users were being delayed on certain question paths. These initial metrics told us where to look. We then had to dig into what was causing users to stumble, and why.
There are many tools used throughout the industry to observe user interactions. At Xwerx, we often use Inspectlet. Quick and easy to implement, this web-based software allows designers to gather screen recordings and heat maps remotely. You can watch users navigate your product and see where they hesitate or bounce off the page.
After a review with Inspectlet, we understood that the delays were not caused by input or navigation errors. This led us to believe that the reasons for the delays might lie in the conversations between the call agents and the customers.
It was time to talk with the users. The 5 Whys is a well-known technique for digging into a problem: the test facilitator, who guides the session, starts with broad questions before breaking down each tangible answer.
We began by talking to the operators to gain insight into the conversations they were having with customers.
We found that the quality of terminology around question fields varied. Call agents had a significant number of questions to ask; some were very straightforward, others required more guidance for the customers. Novice agents expressed concern over their ability to guide customers precisely through the question paths.
We created a support script that allows call agents to guide customers with simple, comprehensible question triggers. These triggers appear when a field is focused or hovered over, empowering operators to ask clear questions during a call.
After testing this solution, we found that users who had become accustomed to the domain, or were already experts, might not need the script as a helping hand. We introduced a switch to allow operators to turn off the script.
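The logic of that support script is simple enough to sketch. The following is an illustrative model, assuming hypothetical field names and trigger wording: each field carries a plain-language question the agent can read aloud, shown only while the field is focused or hovered, and the whole script can be switched off by experienced agents:

```typescript
// A question trigger the agent can read aloud for a given form field.
interface FieldTrigger {
  fieldId: string;
  trigger: string;
}

// Illustrative script entries, not the client's real wording.
const supportScript: FieldTrigger[] = [
  {
    fieldId: "dob",
    trigger: "Could I take your date of birth, starting with the day?",
  },
  {
    fieldId: "policyRef",
    trigger: "Do you have the reference number from your letter to hand?",
  },
];

// Returns the trigger to display, or null when the script is switched
// off, the field is not active, or the field has no script entry.
function triggerFor(
  fieldId: string,
  isFocusedOrHovered: boolean,
  scriptEnabled: boolean
): string | null {
  if (!scriptEnabled || !isFocusedOrHovered) return null;
  return supportScript.find((t) => t.fieldId === fieldId)?.trigger ?? null;
}
```

In a real UI this function would be wired to the field’s focus and hover events, and `scriptEnabled` to the operator’s on/off switch; keeping the decision in one pure function makes both behaviours easy to test.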
Testing solutions ensures the quality of the product offered to users. Verifying solutions against previous versions can give peace of mind to designers and clients.
Collecting data is only the beginning when reviewing a product. Observing your users and talking through their motivations provide unique insights to build upon.
Whatever methods and tools you choose, start with a wide view and narrow in on a problem until you get your answer.