Designing the User Experience Research: The What, The Why and The When

Evaluating user experiences to form a solid understanding of the “what, how and why” of the ways users engage with and perceive the ins and outs of our products or services is seldom a linear process. Building that understanding requires a multifaceted journey through the world of research. That’s why we often ask ourselves (or our business owners, a.k.a. bosses, ask us!) not only how reliable or effective the results derived from one or more user experience analyses are, but also how we came to analyse them: which assumptions and which grounds we relied on while choosing our evaluation methods. An easy way to answer this is to look at the story behind the nature and use of usability tests and interview-based techniques.

Measuring user behaviors demands answering questions of what, how and why. Questions of “what” come with a prerequisite to look at the start and the end: if we claim a certain behavior to be measurable, we certainly need to talk about an end state or a goal (1). “The how” is almost identical to filling in the blanks between the start and the end: the gaps articulated via performance metrics such as task success, time on task, errors, efficiency or learnability. Performance metrics are key to usability because they focus on WHAT users DO and HOW users do it. That is also why those metrics inform business-oriented decisions, such as costs or revenues, once they are linked to their financial impacts.
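
As a minimal sketch of how the performance metrics above can be aggregated from raw test sessions (the data, field names and `summarise` helper are invented for illustration, not from any particular tool):

```python
from statistics import mean, median

# Hypothetical usability-test sessions: one record per participant/task run.
sessions = [
    {"task": "send_message", "success": True,  "seconds": 34.2, "errors": 0},
    {"task": "send_message", "success": True,  "seconds": 51.8, "errors": 1},
    {"task": "send_message", "success": False, "seconds": 90.0, "errors": 3},
    {"task": "send_message", "success": True,  "seconds": 40.5, "errors": 0},
]

def summarise(runs):
    """Aggregate the classic WHAT/HOW performance metrics for one task."""
    # Time on task is conventionally reported for successful runs only.
    times = [r["seconds"] for r in runs if r["success"]]
    return {
        "success_rate": sum(r["success"] for r in runs) / len(runs),
        "mean_time_s": round(mean(times), 1),
        "median_time_s": round(median(times), 1),
        "errors_per_run": sum(r["errors"] for r in runs) / len(runs),
    }

print(summarise(sessions))
```

Numbers like these are exactly what later gets tied to business figures, e.g. translating a lower error rate into fewer support tickets.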

In a usability test, it is key to observe not only that the user reaches the end goal, but also that they articulate when it is reached (2). The choice of usability testing technique can be shaped by how we want those end goals to be articulated, by the design of the research plan and the research questions we set out with, or simply by pragmatic concerns such as the practicalities of the evaluation setup. The technique can be a lab-based usability test in a fully controlled environment, or a remote and rather unmoderated usability test run with an online tool, using paper forms, surveys or techniques such as card sorting, tree testing or A/B testing.
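
Of the techniques listed above, A/B testing is the most directly quantitative. A minimal sketch of comparing task-success counts between two variants with a two-proportion z-test (standard library only; the counts are invented for illustration) might look like:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two success proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)        # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the normal CDF (Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: variant A with 180/400 successes, variant B with 220/400.
z, p = two_proportion_z(180, 400, 220, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value suggests the variants really differ; note that this tells you WHAT performed better, but, as the following sections argue, never WHY.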

When the end goals and the means to get there are concerned, we talk about the WHAT and the HOW. When we need to go beyond the WHAT and intend to collect data on WHY users do what they do, interview-based techniques play the major part. These techniques rely on further depth: the hedonic qualities, user insights, attributions and values users pursue. They seek users’ emotions, imagination or intrinsic reasoning. For example, time on task, such as how quickly someone can use a messaging app to send a message to a friend, is often used as a measure of efficiency; yet measuring the time to complete that task doesn’t answer why the user wants to send the message, or the core values users pursue when they communicate with their loved ones via messaging. Understanding those central human values and reflecting on the motivations immersed in them is consequently essential to creating insights, particularly for products or services at the concept stage, or when they need a fresh look at their forms of interaction or an iteration of their features.

Interview methods, which are mostly semi-structured and some of which are demonstrated in my article Evaluating User Experiences, are thus important: they deepen our understanding of the WHATs we know upfront by exploring the users’ underlying rationales and by letting users express the new aspects we want to discover and the new issues we seek to address (3).

A design under the research lens can get a shiny lot of feedback from generative techniques like interviews, yet still fail in a usability test (4). That’s precisely why varying the techniques, choosing the right one for the right reasons, or composing a healthy mixture of methods that complement each other or overcome each other’s limitations is vital for comprehensive and illuminating UX research.

In a usability evaluation, the test concerns an interaction with a system. The test tasks to be solved reveal users’ behaviours, or their articulations, tied to the use of the system they interact with. In an interview, by contrast, the interviewer and the questions asked take the bigger role: participants don’t only interact with the system and, depending on the methodology pursued, may interact only with the interviewer, which decouples the insights given as a reaction to the system from those given as a reaction to users’ experiences (5). Combining interviews with usability tests or surveys looks into the insights from both the system and the experience perspective. It helps to pin down the questions and objective observations behind what users pursue in a given task, and to verify those questions and observations with the users directly by asking them to further elucidate their insights, opinions or motivations.

Whether we want to combine usability tests and interview-based techniques, choose a single technique, or apply them interplaying in sequence, how do we know when and what to choose? Perhaps the actual question is how to design our research. There are some guiding principles a research design can look into. One of the most significant guiding elements of any research resides in finding the right question, or in finding the question in the right way. The research question can sometimes concentrate on what users do; then observing what they do will be informative enough to get answers, which is best achieved through a usability test. If the research question is about what users say, what they feel when they perform a certain action, or how the consequences of those actions are perceived by the users, some talking work is inevitable, which is often accomplished by an interview-based technique. In a very practical sense: if your question is better answered by observing users, usability tests can be useful; but if your question needs a reflection from the users, interview techniques come in handy (6).

One way to quest for an effective blend of techniques is to look at the timeline of a given project and ask yourself when to conduct a certain user experience evaluation technique. Knowing that one can do research at any stage of a project (7), a convenient way to determine which method is best for which stage is to match it with the phases of the product development lifecycle, borrowed from industrial design and software design practices and lately reused in design thinking for bringing solutions to multiple contexts.

Figure: stages in the design thinking process, as described by the Interaction Design Foundation (8)

During the inspiration stage, the emphasis is on “empathising with the user”: understanding their needs in an immersive manner and engaging with them and their experiences. There is still no concept tangibly present at this stage, or it is very vague, to say the least. A thorough exploration is therefore needed, which is often satisfied via a highly immersive method like a field study, or amply populated with a substantial amount of data gathered through interviews. Once the problem space gets less fuzzy, an ideation process can result in a prototype consisting of tasks and interactions, where a certain amount of usability testing can be mixed with semi-structured interviews. Once the product is out in the open and released to end users, it becomes appropriate to attach performance metrics such as errors or efficiency and to measure them using various usability tests, ranging from traditional ones up to cognitive user tests. A potential extension of the product scope can be achieved through field research, diary studies, contextual laddering or sentence completion techniques, which rely immensely on building conversations with users about what they miss as a key feature and how they feel the product could better address their motivations.

Once we determine which technique to use in a user experience evaluation process and when to use it, we eventually ought to know the ups and downs of each to come up with a carefully thought-out research design followed by sought-after results and conclusions. Opting for a usability technique pursuing the “what” can give a considerably solid confidence level, since experiment-driven (or lab-based) techniques can be more controlled: the tasks to be achieved and the measurements to be performed are pre-defined. This gives a certain anticipation of comfort in collecting and analysing the data. When you want to go beyond the pre-determined measures, interviews pop onto the scene, governed by ‘questions that make it possible to understand the reason behind the user’s actions and experience’ (9). That oftentimes requires a researcher to know how to design a highly efficient interview protocol and how to apply it correctly, being in control of how to ask questions in order to collect valuable, reliable and valid insights.

Uncovering user interactions should never be deemed easy-peasy, but that doesn’t mean it is expensive or cumbersome. Depending on which fundamental questions you want to excel at answering for your users, and which stage you are at in your quest to answer those questions with your product, a neatly designed UX research and evaluation process is more than vital. An integral UX evaluation takes you and your business miles ahead thanks to the key knowledge you gain about your product, your systems, your users and the ecosystems that combine them all, and it hints beyond today: at the future.

References:

(1), (2) Albert, W. and Tullis, T. (2013). Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics, 2nd ed. Waltham: Elsevier.

(3) Wilson, C. (2014). Interview Techniques for UX Practitioners: A User-Centered Design Method. https://www.sciencedirect.com/book/9780124103931/interview-techniques-for-ux-practitioners

(4) Tullis, T. (2018). Five Tips for a Successful Career as a UX Researcher. https://medium.com/@tomtullis/five-tips-for-a-successful-career-as-a-ux-researcher-1d0afdfc4bea

(5) Hertzum, M. (2016). A Usability Test Is Not an Interview. https://interactions.acm.org/archive/view/march-april-2016/a-usability-test-is-not-an-interview

(6) NNgroup (2018). When to Use Which UX Method. https://www.youtube.com/watch?v=OtUWbsvCujM&feature=emb_logo&ab_channel=NNgroup

(7) Beresh, Z. (2020). User Experience Research and Usability Testing: When and How to Test Your Product. https://www.userinterviews.com/blog/user-experience-research

(8) Dam, R.F. and Siang, T.Y. (2021). 5 Stages in the Design Thinking Process. https://www.interaction-design.org/literature/article/5-stages-in-the-design-thinking-process

(9) Mortensen, D.H. (2020). Pros and Cons of Conducting User Interviews. https://www.interaction-design.org/literature/article/pros-and-cons-of-conducting-user-interviews
