Fixing a broken information architecture starts with taking stock of the problem. How bad is it? What will it take to fix it? How much will it cost? You can spend lots of time and money on studies to diagnose the problem, but a faster way to get rolling is to do a heuristic evaluation.
A heuristic evaluation assesses the system’s current state against established principles and best practices. In the context of IA, you’re looking for issues with the conceptual distinctions users encounter in navigation and filtering elements and in content labels.
Of course, you can learn about such issues by conducting usability and findability studies. However, such studies take time; they involve recruiting users, developing testing protocols, conducting interviews, and synthesizing learnings. The most obvious problems will stand out to experienced designers, so why not start there?
A small team can perform a heuristic evaluation relatively quickly. But even if the team has experience with information architecture, it helps to have a checklist of things to look for. Here’s a starter:
- Confusing menu hierarchies. Does the system give the user too many choices? Too few? Are options nested logically? Are choices clear?
- Unclear language and terminology. Does the system use plain language? Are terms appropriate for the intended audience, or do they include jargon or proprietary terms?
- Inconsistent use of terms. Does the same term appear in multiple places to mean different things? Conversely, are there several terms pointing to the same concept or functionality?
- Broken navigation links. Do any links lead to 404 pages or the wrong location? (Much of this check can be automated; see the sketch after this list.)
- Inappropriate information hierarchies. Is content laid out logically, using headings and subheadings? Are pages scannable?
- Legibility problems. Is information easy to read? Is there enough contrast? (WCAG recommends a contrast ratio of at least 4.5:1 for body text.) Does the system provide ways for people with disabilities to access information?
- Incongruous visuals. Do non-textual elements (e.g., graphics and animations) complement and clarify concepts, or do they add noise and distraction?
- Lack of cohesiveness. Does the overall system make sense as a system, or does it read as a collection of mostly unrelated parts?
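Some of these checks lend themselves to automation. Broken links, in particular, can be swept by a short script before the team dives into manual review. Here’s a minimal sketch in Python, assuming the `requests` and `beautifulsoup4` packages are installed; the `PAGES` list is a stand-in for your evaluation inventory:

```python
# Minimal sketch: flag broken links on a list of pages.
# PAGES is a hypothetical inventory; replace it with your own list.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

PAGES = ["https://example.com/", "https://example.com/about"]

def broken_links(page_url):
    """Yield (link, status) pairs for links that don't resolve."""
    html = requests.get(page_url, timeout=10).text
    for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        link = urljoin(page_url, anchor["href"])
        if not link.startswith("http"):
            continue  # skip mailto:, javascript:, and similar schemes
        try:
            # Note: some servers reject HEAD; a fuller tool would fall back to GET.
            status = requests.head(link, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            yield link, status

for page in PAGES:
    for link, status in broken_links(page):
        print(f"{page} -> {link}: {status or 'unreachable'}")
```

Automation only catches the mechanical failures, of course; a human still needs to judge whether a working link leads somewhere sensible.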
To start the evaluation, first determine who will be on the team. Look for experienced folks who bring diverse perspectives to the study. You want to cover as much ground as possible while keeping the team relatively compact.
Once you’ve determined who’s on the team, define what parts of the system they’ll evaluate. Is it the whole thing or only some parts? Ensure the team has a complete list of the screens or pages to evaluate. (That said, give them leeway to look at relevant screens that might have eluded the initial list.)
The team can then systematically navigate the system, comparing what they see against the checklist. Team members should note anomalies, taking care to capture the URIs or screen names where they found issues.
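A structured log keeps findings comparable across evaluators. Here’s one lightweight way to capture entries, sketched in Python; the columns and the sample finding are illustrative suggestions, not a standard:

```python
import csv

# Hypothetical findings log; the columns and example entry are illustrative.
FINDINGS = [
    {"location": "/products/archive",
     "heuristic": "Inconsistent use of terms",
     "issue": "Main nav says 'Products'; page heading says 'Solutions'",
     "severity": "medium"},
]

with open("ia-findings.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["location", "heuristic", "issue", "severity"])
    writer.writeheader()
    writer.writerows(FINDINGS)
```

A shared spreadsheet works just as well; what matters is that every finding records its location and the heuristic it violates.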
Finally, when they’ve gone through the entire system, the team should report back, noting any patterns or obvious issues they found. Findings should be presented as actionable changes and improvements; feedback should be dispassionate and constructive.
Of course, the heuristic evaluation won’t be the final word on what’s wrong with the system; it’s just one of several study methods available. And you should be especially cautious when having internal designers evaluate the system; they might be too close to the work to see its flaws.
That said, an objective IA heuristic evaluation provides a great start to any UX redesign project — especially when setting out to fix systems with findability or understandability issues. You can make much progress by giving a small team of experts time to look at the system critically with an eye toward obvious fixes.
A version of this post first appeared in my newsletter. Subscribe to receive posts like this in your inbox every other Sunday.