UNDERSTANDING USER EXPERIENCES

Oz
5 min read · Feb 4, 2021

Although user experience is a relatively recent term, coined by Don Norman some 20 years ago in an attempt to cover its broad spectrum, it is not surprising that it captures the good old essence of what interactions are all about. As Bill Moggridge stated in his book Designing Interactions, the design of interactive systems inherently pursues usefulness, while it also needs to embrace a deep comprehension of “what it means to us”: you, me, others, contexts and the environment through which we are all connected.

The usefulness of a design artefact is largely a task-oriented concept: one can ask questions about the effectiveness, efficiency or satisfaction derived while interacting with that artefact. User experience, however, can also be derived from a range of reported sensory, emotional, compositional or spatio-temporal aspects of experience, governed by stimuli, feelings or perceptions. It is therefore worth zooming into particular aspects or clusters, for instance the desirability or undesirability of an experience, and cross-checking or interpreting them against usability goals such as effectiveness, efficiency, satisfaction, learnability and memorability, and vice versa. A comprehensive evaluation of a design looks at both usability and user experience, because the two concepts are naturally intertwined, with user experience the broader and further-reaching of the two: it contains, or at the very least intersects heavily with, usability. To illustrate this entangled nature, take the example of a user booking an airline ticket online. The user goes to an airline website, searches for a destination, chooses a ticket and perhaps an ancillary offer to boost their comfort, enters their personal details, reads the terms and conditions, pays the total quote and receives the itinerary. If the user can go through those predefined steps and complete the flow successfully, that flow represents a series of tasks leading to an end-goal, which is to buy an airline ticket.
Now replay the same flow, but this time the user cannot find an available flight on the date they want to travel, cannot read the fare conditions because the font is too small for them, is intimidated by pop-up windows pushing discounts, or waits an uncomfortably long time for the payment to be confirmed. Here it is neither sufficient nor reliable to simply conclude that the goals have been reached. The tasks may have been completed, but the goals behind those tasks, as perceived by the user, might not be, even though the airline ticket has been issued.
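The task-oriented side of this picture can be quantified. As a minimal sketch, here is how two common usability indicators for the booking flow above, effectiveness (task completion rate) and efficiency (mean time-on-task), might be computed. The session records, field names and numbers are hypothetical, purely for illustration:

```python
# Hypothetical session logs for the booking flow; all data is illustrative.
sessions = [
    {"user": "u1", "completed": True,  "seconds": 312},
    {"user": "u2", "completed": True,  "seconds": 545},
    {"user": "u3", "completed": False, "seconds": 780},  # abandoned at payment
    {"user": "u4", "completed": True,  "seconds": 260},
]

def completion_rate(sessions):
    """Effectiveness: share of users who finished the booking flow."""
    return sum(s["completed"] for s in sessions) / len(sessions)

def mean_time_on_task(sessions):
    """Efficiency: average duration of successful sessions only."""
    done = [s["seconds"] for s in sessions if s["completed"]]
    return sum(done) / len(done)

print(f"effectiveness: {completion_rate(sessions):.0%}")       # 75%
print(f"mean time-on-task: {mean_time_on_task(sessions):.0f}s")  # 372s
```

Note what such numbers do not capture: user u3 technically spent the longest time before abandoning, and u2 may have completed the flow while feeling intimidated by pop-ups, which is exactly the gap between tasks completed and goals perceived described above.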

Metrics that evaluate a user experience along both its pragmatic and its hedonic angles are key to informing a design process. Pragmatic qualities come from the usability of a design, while hedonic qualities are bred by the experience as a whole; metrics should therefore cover both angles to form a holistic evaluation. Evaluation with metrics, attached to a competent evaluation protocol, is essential for understanding the value of a current or future product or service, along with the users' needs, perceptions and stimuli under a given set of conditions, such as time or environment. User experience metrics differ from other kinds of metrics in that they aim to measure a many-to-many interaction between a beneficiary, an artefact and a purpose. Those interaction streams can contain pragmatic indicators such as effectiveness, or hedonic indicators such as emotions or beliefs.

User experience metrics, together with the usability metrics they befriend, may seem to be members of a complex ecosystem. That ecosystem cannot be called easy; however, when evaluation is structured and embraced as an indispensable part of the design iteration process, it is neither complex nor necessarily expensive to implement. An evaluation procedure with predefined metrics and methods can work at both a micro and a macro interaction level: one can evaluate the effectiveness of a single web page as well as the entire chain of user flows, and the multiple interactions governed by those flows, that constitute the purpose of a given website. It would be misleading, though, to read the outcome of such a micro and macro evaluation as nothing more than a magnitude. Evaluation results can indicate the magnitude of certain metrics, and also the reasons why those magnitudes appear as they do. The 'why' can be a root cause or the side-effect of a cause, all of which can be interpreted against clear, predefined assumptions and hypotheses. The data gathered from a user experience evaluation is often criticized as cluttered; however, in any data gathering process, what you see at the end will be cluttered if you have not started from a solid base of assumptions, such as what you are evaluating, or what you are testing against what. Well-crafted assumptions in the very early stages of an evaluation bring clarity, and prevent decisions from relying on 'gut' feeling, an infamous trap in decision-making processes.
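One concrete way to take the "macro" look at a whole user flow is a simple funnel: count how many users survive each step, so a low overall completion rate can be traced to the step where the drop actually occurs. The step names and counts below are hypothetical, sketched on the booking-flow example:

```python
# Hypothetical per-step user counts for the booking flow (illustrative data).
funnel = [
    ("search",    1000),
    ("select",     640),
    ("details",    510),
    ("payment",    430),
    ("itinerary",  390),
]

# Pair each step with its predecessor and report step-to-step retention.
for (step, n), (_, prev) in zip(funnel[1:], funnel):
    print(f"{step:<10} kept {n / prev:.0%} of users from the previous step")

overall = funnel[-1][1] / funnel[0][1]
print(f"overall completion: {overall:.0%}")
```

Read this way, an overall 39% completion is not just a magnitude: the largest single loss sits between search and select, which points evaluation (and the 'why' questions) at that step first.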

The evaluation of a user experience is not only applied to new products. It is handy for product designers or researchers, for instance, to initiate an evaluation of a product in its concept phase, which decreases the chance of failure once the product is widely released in the market. But it is equally crucial for existing products and services to be evaluated continuously, to inspect how a product is perceived against expectations, bearing in mind that user experience is ever-changing and highly volatile, in line with a constantly changing environment and the expectations it stimulates. Using evaluation in pre- or post-design phases not only gives designers feedback on their designs, but also gives businesses a degree of confidence in their decisions about those products. The nice thing about evaluation is the variety of its methods, which works almost like an à-la-carte menu: there is a method, or a combination of methods, for different needs. You can use an evaluation method to test a new feature at a cognitive level, compare one design concept with another, inform an iteration of a user flow, or discover what goes wrong functionally in a chain of services. Today, methods like AttrakDiff offer a mix of hedonic and pragmatic dimensions, which is preferable for designers or decision makers looking for a versatile analysis.
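To make the pragmatic/hedonic mix tangible, here is a hedged sketch of how AttrakDiff-style semantic-differential ratings can be aggregated: a participant rates bipolar word pairs on a 7-point scale (here encoded -3 to +3), and items are averaged per dimension. The item-to-dimension mapping and the scores below are illustrative assumptions, not the official AttrakDiff questionnaire:

```python
from statistics import mean

# One participant's ratings, -3 (left anchor) .. +3 (right anchor).
# Word pairs and their grouping are hypothetical examples.
ratings = {
    "confusing/clear": 2, "cumbersome/direct": 1,        # pragmatic items
    "dull/captivating": 3, "conventional/inventive": 2,  # hedonic items
}

dimensions = {
    "pragmatic": ["confusing/clear", "cumbersome/direct"],
    "hedonic":   ["dull/captivating", "conventional/inventive"],
}

# Average the item scores within each dimension.
scores = {dim: mean(ratings[item] for item in items)
          for dim, items in dimensions.items()}
print(scores)  # {'pragmatic': 1.5, 'hedonic': 2.5}
```

Two numbers per participant, one pragmatic and one hedonic, are exactly what lets a designer see a product that works well but delights nobody, or the reverse.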
