The Value Of Eye-Tracking Software In Medical Device Usability Testing
By Mahajabin Rahman, Design Science
While no straightforward formula exists for making user-friendly medical devices, we do understand the key variables that come into play. In usability testing, we assess these variables through observations and interviews. Observations establish the facts on the ground (e.g., task performance, timing), while interviews contextualize that information, offering some of the why behind the how. Although testing will not completely solve the problem of fitting products to their users, it does help to simplify design inputs by surfacing consistent patterns in user performance.
However, reliance on human observation and user feedback to identify these patterns carries obvious risks and margin for error. If our method for collecting data is recording what we see or what we hear, then we may be missing data points that are imperceptible to the observer. Importantly, observers must also be vigilant about the potential for bias when deciding what information to collect. From the user’s side, reliance on participant feedback can be similarly problematic, because participants do not always know what they do not understand. Such limitations hint not only at a pool of latent data that may go unexplored, but also at an opportunity for making the process of evaluation more objective.
The easiest way to mitigate user and tester biases is to automate the portions of data collection most prone to subjectivity, while eye-tracking software and other technologies can help capture those pieces of latent data that elude the human eye. Eye-tracking software in usability testing offers researchers, engineers, and designers a bird's-eye view of the user experience. It shows them what the user is looking at and when, allowing for targeted and efficient probing to contextualize performance data. Eye tracking not only offers more data from which to draw conclusions, it also provides a consistent way of classifying the data collected. By combining automation and eye tracking, we can start to map out the user's experience step by step.
In fact, eye tracking and automation have a subtle, yet fascinating relationship. When using eye tracking to assess the usability of a product, we are figuring out a user’s subjective taste, in a way. Yet, when automating, we have to create the most objective flow of operations. As we construct this scenario, we find ourselves tracing the user’s most fundamental experience, which is otherwise difficult to discern. Eye tracking is helpful in this case, because it helps us peel back that layer of information that is nearly inseparable from the user. So, how exactly does eye tracking inform automation potential?
Constraints Of Eye-Tracking Data Types
It's necessary to know the nature of eye-tracking data in order to perform meaningful mathematical operations on it. After all, the raw output of eye tracking is just numbers; unless we know that two numbers measure the same thing, we cannot add or subtract them. Fortunately, eye-tracking output comes in only two types, so it fits the mold of automatic computation well. These two types, or metrics, are fixations and saccades: brief pauses in the movement of the eyes, and the movements between those pauses, respectively.
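As a minimal sketch of these two record types, the structures below show how fixations and saccades might be represented for downstream computation. The field names are illustrative assumptions, not drawn from any particular eye-tracker's API.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float            # horizontal screen coordinate (pixels)
    y: float            # vertical screen coordinate (pixels)
    duration_ms: float  # how long the gaze paused at this point

@dataclass
class Saccade:
    dx: float  # horizontal displacement to the next fixation
    dy: float  # vertical displacement to the next fixation
```

Keeping the two types separate matters: durations can be summed across fixations, and displacements across saccades, but mixing the two produces numbers that measure nothing.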
In watching eye movements, fixations tend to indicate some kind of attention: the user clearly did not skip over a given word or graphic. But how do we know whether they are actually processing it? Essentially, fixations tell us which spatial coordinates the user spent time looking at, and that time may reflect distraction, confusion, or engagement, all very different modes, but equally valuable for the researcher to be aware of.
Likewise, there are many types of saccades, since a user’s eyes can move in any direction for a huge variety of reasons. For example, a user can read a word and then move backward to an earlier part of the sentence. On the one hand, this could mean that they are revisiting the earlier part of the sentence to correct a misunderstanding. On the other hand, such a backwards movement could easily be arbitrary. Eye tracking provides an objective record of the user’s behavior; however, understanding this behavior’s context remains the task of the interviewer.
The semantics of these data types aside, fixations and saccades each give us two pieces of information. Fixations give us coordinates and time; saccades give us direction and movement length. Because each fixation carries a coordinate, the distance between fixations can be computed to compare them, and fixation times can be added or subtracted to quantify differences in dwell time. Saccades admit fewer operations: usually we can only classify them as backward or forward, depending on the format of the visual. While eye tracking produces a large volume of data, most of it is meaningless without context, and it is context that determines which relationships between these variables are worth preserving.
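The operations described above can be sketched as three small helpers, assuming fixations are given as (x, y) coordinates and a left-to-right layout; the function names are hypothetical.

```python
import math

def fixation_distance(f1, f2):
    """Euclidean distance between two fixation points, given as (x, y)."""
    return math.hypot(f2[0] - f1[0], f2[1] - f1[1])

def fixation_time_delta(t1_ms, t2_ms):
    """Difference in dwell time between two fixations, in milliseconds."""
    return t2_ms - t1_ms

def classify_saccade(dx):
    """For a left-to-right layout, a negative horizontal move is 'backward'."""
    return "backward" if dx < 0 else "forward"
```

Note the asymmetry: fixations support full arithmetic on distance and time, while saccades reduce to a coarse binary label, exactly the constraint the paragraph above describes.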
Eye-Tracking Data Relationships And Context
Although viewing collective data at a glance can give some insight into what users looked at most, it's possible to glean even more insight by seeing how well the data fits the mental model the designers had in mind. This mental model, or visual hierarchy, is the order in which humans process the elements of a given scene. The theory of visual hierarchy is incorporated in nearly every interface: the most crucial elements in both text and imagery are emphasized through scale or some kind of stylization that stands out from the "background." Among all interfaces, every component of a medical device, from the device itself to its packaging and instructions for use (IFU), should be arranged around the user with particular care, given the high-stakes nature of the use process.
One way to use eye-tracking data is to map out the intended visual hierarchy and compare it to the one employed by users in testing. The deviation between the two hierarchies, actual and expected, can serve as an overarching evaluation of the design's effectiveness. In this case, the automated portion of the process would consist of calculating the distance from each expected fixation to the corresponding user fixation, a computation that is both tedious and error-prone when done by hand.
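As a sketch of that automated comparison, the function below (a hypothetical name, assuming equal-length lists of (x, y) coordinates ordered by hierarchy step) scores how far the user's actual fixation sequence strays from the intended one; a lower number means closer alignment with the designed visual hierarchy.

```python
import math

def hierarchy_deviation(expected, observed):
    """Mean Euclidean distance between each expected fixation and the
    user's fixation at the same step of the visual hierarchy.
    Both arguments are equal-length lists of (x, y) coordinates."""
    dists = [
        math.hypot(ex - ox, ey - oy)
        for (ex, ey), (ox, oy) in zip(expected, observed)
    ]
    return sum(dists) / len(dists)
```

Averaged across participants, a score like this gives the single overarching number the evaluation calls for, though the choice of distance metric remains a design decision.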
Putting It All Together
Once the data is broken down by visual hierarchy, studying the fixations and saccades within each level becomes much more manageable. Combined with performance scores and user feedback, eye-tracking data can show which fixations led to the intended mode of thinking and which did not, by tagging the data accordingly. Comparing the eye-tracking data across participants for each level of the hierarchy shows us where most fixations occurred; if participants are consistently looking at the same incorrect region of a visual, then clearly the design needs to change. It is also helpful to automatically tag fixations as relevant or irrelevant, that is, those which led to correct and incorrect answers, respectively, for each task, in order to get an idea of how participants were collectively thinking when answering correctly or incorrectly.
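The tagging step above can be automated once each task has a defined area of interest (AOI). The sketch below assumes a simple rectangular AOI per task; the function name and bounding-box convention are illustrative.

```python
def tag_fixations(fixations, aoi):
    """Tag each (x, y) fixation 'relevant' if it falls inside the task's
    area of interest (AOI), else 'irrelevant'.
    aoi is a bounding box: (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = aoi
    return [
        ("relevant" if x0 <= x <= x1 and y0 <= y <= y1 else "irrelevant", (x, y))
        for (x, y) in fixations
    ]
```

Cross-referencing these tags with task outcomes is what lets us see, across participants, whether correct answers tended to follow relevant fixations.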
Saccades can add more depth to this data when categorized as "backward" and "forward" saccades. Within correct and incorrect fixations, counting the disturbances (backward movements) between fixation points makes it easier to gauge how readily the information was absorbed.
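Counting those backward movements (regressions) is straightforward to automate. A minimal sketch, assuming fixations are ordered in time as (x, y) coordinates and the material is read left to right:

```python
def count_regressions(fixations):
    """Count backward (right-to-left) movements between consecutive
    fixations, assuming a left-to-right reading order."""
    return sum(
        1
        for (x1, _), (x2, _) in zip(fixations, fixations[1:])
        if x2 < x1
    )
```

A higher regression count within a hierarchy level would suggest the information there was harder to absorb, though, as with all eye-tracking data, the interview is what confirms why.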
Does It Pass The Eye Test?
Eye-tracking results provide greater amounts of data than observation alone, with less reliance on the observer and on participant feedback. Without context, however, these results are ambiguous and remain susceptible to the same human errors that affect any observational data. Eye tracking will never fully explain away cognitive processes, and subjective information will always remain a variable. But the way that subjective information is assessed can be made algorithmic.
We can understand the data through certain constraints. Luckily, there are only two types of eye-tracking data, and each has its defined set of constraints and its context (localizing the points based on task). Once we know these two things about the data points, we can perform an appropriate set of operations on them, and this is where automation fits in. As the information multiplies, maintaining a consistent mental algorithm becomes harder and more tedious, so we can turn to automation to operate on each data point. Ultimately, this can give us a number that measures how well a user's viewing experience aligned with our own.
About The Author
Mahajabin received a BA in physics from the University of Chicago. She has experience in a range of research applications and has collaborated with multiple teams on computer vision and machine learning projects. As a Data and Systems Analyst, Mahajabin is responsible for programming automation scripts and helping with ethnographic research analysis.