Kate Chadha is a research practice lead at World Wide Technology Asynchrony Labs in St. Louis. In this blog post, part two of the series, Kate discusses a few lessons learned and proven techniques to consider for developing a successful UX project.
In part one, we learned more about Kate and why she chose to become a UX practitioner. That interview can be found here.
How do you form and test hypotheses about end-users?
User experience research can be divided into two basic types: discovery or field research (conducted before you design a product or service), and testing and validation of the designed experience (a.k.a. usability testing or evaluation).
When it comes to field research, you don’t always start with a hypothesis about the end-users. You start with a context and users you want to better understand and document. Hypotheses come more into play in experimental design of usability tests. Walking through a product or service design (value proposition, tasks it performs, interaction patterns) reveals the assumed user goals and tasks.
The user objectives – the jobs the product or service is used to complete – become the primary hypotheses that the usability lab will test.
In addition, if there is an interaction pattern or design element identified before labs as a potential usability issue, that can also be a hypothesis until you have lab data that supports or debunks it.
What factors do you analyze to determine usability? How do you measure usability success?
The key metric to determine how easy a product or service is to use is task effectiveness. Without any training or assistance, how many users can effectively achieve their goal or task using the product or service? The higher the task effectiveness rate, the more successful the design.
The next measure of usability is time on task, or how efficiently a user can complete that task.
Finally, we count the number, frequency, and severity of usability issues each user encounters while completing the tasks set before them. This measure is critical in determining the importance and priority of fixing each issue in production.
Usability testing gives you the data to support those decisions so you are always prioritizing those stories in the backlog that will have the greatest positive impact on the quality of the software.
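As a rough illustration (not part of the interview), the three metrics described above – task effectiveness, time on task, and issue frequency/severity – could be tallied from a session log like the following sketch. All field names, participants, issues, and numbers here are hypothetical:

```python
from collections import Counter

# Hypothetical usability-test log: one record per participant for a single
# task. "issues" pairs an issue label with a severity rating (higher = worse).
sessions = [
    {"participant": "P1", "completed": True,  "seconds": 42, "issues": [("nav-label", 3)]},
    {"participant": "P2", "completed": True,  "seconds": 55, "issues": []},
    {"participant": "P3", "completed": False, "seconds": 90, "issues": [("nav-label", 3), ("icon", 1)]},
    {"participant": "P4", "completed": True,  "seconds": 47, "issues": [("icon", 1)]},
]

# Task effectiveness: share of participants who completed the task unaided.
effectiveness = sum(s["completed"] for s in sessions) / len(sessions)

# Time on task: mean completion time among successful participants only.
times = [s["seconds"] for s in sessions if s["completed"]]
mean_time = sum(times) / len(times)

# Issue frequency: how many participants hit each issue; combined with
# severity, this helps prioritize which backlog stories to fix first.
frequency = Counter(issue for s in sessions for issue, _ in s["issues"])
severity = {issue: sev for s in sessions for issue, sev in s["issues"]}

print(f"Task effectiveness: {effectiveness:.0%}")  # 75%
print(f"Mean time on task: {mean_time:.0f}s")      # 48s
print(frequency.most_common())                     # each issue seen by 2 of 4 users
```

A real study would track these per task and per design iteration, so the before/after comparison Kate describes (such as the packaging redesign below) can be stated in numbers rather than impressions.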
What would you say is your biggest user/customer experience design success story?
A few years ago, the company I worked for wanted to pilot a new wearable technology to enhance sports and entertainment events.
My team and I designed the end-to-end product experience. We conducted two rounds of usability testing to test how well people could use the device throughout a stadium for various tasks.
A critical part of the user experience came when a fan was handed the package with the device as they entered the stadium on game day. The agency that designed the packaging created a box that relied heavily on a specific football metaphor and imagery. It was cute and clever, but did not clearly indicate what was inside.
In the first round of usability testing, nearly all the participants refused the box when someone tried to hand it to them, or threw it away in the closest garbage can. They all thought it was a gimmick.
In fact, the box contained a $25 gift card for food, drinks, or other purchases, usable anywhere in the stadium during the game.
When we asked them to read the box out loud, there was that “a-ha” moment when they realized they had just thrown away $25. This allowed us to explore what users needed in order to notice and immediately understand what they had received, open the box, and use the device.
We worked to redesign the packaging based on the questions we heard and the feedback we recorded. In the second round of usability testing, every participant understood immediately what they had been given, and we were able to prove that clear copy and illustrations were more effective than clever metaphors.
When you interview users/customers, do you have any go-to questions you like to ask?
The most common thing I find myself saying is “tell me more…” Often, when someone is talking about a task they are completing, they will simplify what they are doing and use unclear or ambiguous terms (abbreviations, generic descriptors).
When I hear generic descriptions like “great,” “good,” “hard,” “easy,” “intuitive,” I will ask what that means at that moment. Each of those can represent a broad range of things. What makes it good, great, hard, easy, or intuitive? How is the user defining that word?
What is the most unexpected customer insight you have ever gotten from field research or usability testing? How did you use this insight to improve the user/customer experience?
A few years ago, I was working on a service to be used in delivering humanitarian aid. Humanitarian crises involve vulnerable populations (refugees from natural disasters or conflict) sometimes where literacy is very low.
Digital technologies rely on our ability to read and understand words and numbers. We needed a solution that was easy to use and provided a level of security against theft and fraud. So we needed to effectively implement digital security for aid beneficiaries – even those who were illiterate.
The team debated multiple solutions. At that time, fingerprint or retina scanners were still too unpredictable in overly hot, cold, dry, or wet conditions, like refugee camps.
We designed alternative PIN entry methods and conducted a round of usability testing with a relief organization working on reconstruction with a population of beneficiaries in an area of the Caribbean that had suffered a natural disaster a few years earlier.
We observed, many times, that older participants would ask a nephew or grandchild to dial most phone numbers for them (on a cell phone), and would likely bring that child along to the store or depot where they would receive their program benefits. Until a simpler, biometric security method could work reliably, we needed to ensure that the instructions and any printed materials were easy enough for a small child to read and understand.
About Kate Chadha
Kate Chadha is a research practice lead at WWT Asynchrony Labs, where she works on the User Experience team in the company’s St. Louis headquarters. For more information, visit www.asynchrony.com or email at Kate.Chadha@asynchrony.com.