Striving for Simplicity

Over a decade and a half ago, I was at an office party. (This was during the brief part of my career when I was in-house at a large corporation.) Among other amenities, the party featured a cartoonist, the type of artist you see drawing quick, exaggerated portraits at fairs. The artist was hired to draw each team member, highlighting striking things about us: quirky hobbies, particular styles of dress or grooming, tics, etc. I don’t remember if it was at my suggestion or that of my co-workers, but my cartoon showed me surrounded by electronic gadgets: mobile phones, MP3 players (a new thing at the time), notebook computers, cameras, etc. That’s how I saw myself and how my colleagues thought of me: tech was a primary part of my identity.

I’ve long been representative of the “early adopter” demographic. Being alive (and privileged enough to have some discretionary income) during the Moore’s Law years has meant seeing (and benefiting from) tremendous advances in many areas of life. Consider the way we listen to music. In the span of a few years, my entire music collection went from heavy boxes filled with clunky cassette tapes to a few light(er) CD cases to a tiny device not much bigger than a single cassette tape. Portable MP3 players represented a tangible improvement to that part of my life. The same thing has happened with photography, movies, reading, etc. It’s been exciting for me to stay up-to-date with technology.

That said, as I’ve grown older, I’ve become more aware of the costs of new things. I’m not just talking about the money needed to acquire them; every new thing that comes into my life adds some cognitive cost. For example, there’s the question of what to do with the thing(s) it replaces. (I still have cases full of plastic discs in my attic. I’m unsure what to do with them, considering the amount of money I’ve already sunk into them.)


Drive-by Redesigns

My family and I recently went on vacation to a big city. On our second day there, we took one of those hop-on, hop-off double-decker buses that show you the main sights of the city. These mass-market tours are useful for getting a sense of the overall shape of the place, or at least its highlights: what the main areas are, where they sit relative to each other, the distances between things, etc. What they’re not good at is giving you an understanding of the city: what makes it special, its history, why things are the way they are.

When you’re on one of these tours, everything about the city gets collapsed into talking points that can fit into the cadence allowed by traffic. You whizz past neighborhoods and landmarks old and modern. Dates and people blur; context collapses. There’s no sense of cause and effect, only facts. “This is the statue of x. It was completed in year y to commemorate the battle of z.” That’s about it — no information about why the battle was fought or why it matters to the overall history of the place. Off to the next landmark.

“What” is easy to talk about; “why,” less so. Yet why is the more important of the two — especially if your aim is to change things. What is effect; why is cause. Designers ought to give precedence to why, but we’re drawn to what. This is because we can point to what. It’s the stuff we include in our portfolios; the stuff other designers fawn over.

A couple of days ago I saw a post on social media that epitomizes this problem. The post had two images: one of a regular airline boarding pass and another of a “redesigned” boarding pass. The redesign was all surface: typographic and layout changes, with no sign that the designer understood why the elements in airline boarding passes are laid out the way they are.

There are reasons why boarding passes are the way they are — warts and all. For example, humans aren’t the only audience for boarding passes; they must also be legible to various machines. There are constraints around the systems that generate boarding passes and the machines that print them. None of this was acknowledged in the “new and improved” version.

Redesigning a boarding pass isn’t a simple matter of changing the layout of elements in an Adobe Illustrator artboard. The current boarding pass is a manifestation of particular contextual conditions that have informed its form. You can take a stab at the form without understanding these conditions, but the intervention won’t go beyond an exercise in aesthetics.

That’s not to say the current state can’t be improved; in most cases, it can. The whys that led to the current what may have changed. New technologies supersede older ones, rendering them obsolete. Legal requirements change. Systems change. Improving things calls for understanding the reasons why things are the way they are. It calls for seeing beneath the surface. Alas, social media doesn’t lend itself to deeper probing. The boarding pass example isn’t unique; hang around designer circles on Medium and you’ll quickly run across unsolicited redesign “case studies.” Most are superficial and naïve.

As a medium, the tour bus establishes the pacing and structure that leads to a superficial overview of the city. Social media’s bite-sized, attention-driven structure also influences the presentation of design decisions. Unlike city tours, I don’t see much value in these drive-by redesigns. They manifest (and reinforce) a common misunderstanding of design as noun, one that ignores the process and complexity that goes into evolving form-context fit.

(Bonus points: replace “design” with “politics” in this post. The structural lack of nuance and substance in social media is a big part of why civic discourse has become so polarized.)

The Informed Life With Ariel Waldman

Episode 13 of The Informed Life podcast features a conversation with NASA advisor and communicator Ariel Waldman. Ariel recently spent some time researching microscopic life in Antarctica, and she’s documented her sojourn in a series of amazing YouTube videos. Our conversation centered on this expedition and what it takes to manage and produce information in such a remote location:

I’m just old enough where I remember when pagers were a thing when I was a kid, but yeah the whole like you have to agree when you’re going to meet up with someone, but if you’re running late you really don’t have a way of telling them unless you page them or something of that nature and… Yeah, it’s just a much slower method of doing everything. And they keep calendars and notes and notebooks with pencils so that they can erase them. And yeah, it was a very different way of organizing information there and I found myself getting a little stressed out about if plans change just how much effort you would have to put into contacting someone so that you just didn’t leave them stranded waiting for you somewhere.

Being effective in these conditions requires preparation and self-reliance. For example, Ariel took lessons in microscopy before her trip. She also had a framework for the videos that allowed her to get the shots she needed. The results are amazing.

I was inspired by our conversation; I hope you like it too. And if you like what you hear, consider supporting Ariel’s Patreon. Besides helping advance science, you could also receive one of her beautiful tardigrade prints.

The Informed Life Episode 13: Ariel Waldman on Antarctica

Mind Your Baobabs

Skillful information management requires continuous vigilance and effort. If you’re an active participant in today’s world, stuff is constantly coming in. If you don’t develop practices to keep information organized, you will soon find yourself hobbled.

I’m reminded of a powerful image from one of my favorite books, Antoine de Saint-Exupéry’s The Little Prince. You’ve probably read it, but here’s a quick synopsis in case you haven’t: The book’s narrator, an aviator, crash-lands in the Sahara. Alone and running out of provisions, he desperately tries to repair his aircraft. A mysterious child appears. He keeps the aviator company, sometimes annoying him with naive/profound requests.

In the course of their conversation, the aviator realizes that the child — the titular little prince — has come from another planet. It’s a small planet, but it keeps him constantly busy:

“It’s a question of discipline,” the little prince told me later on. “When you’re finished washing and dressing each morning, you must tend your planet. You must be sure you pull up the baobabs regularly, as soon as you can tell them apart from the rosebushes, which they closely resemble when they’re very young. It’s very tedious work, but very easy.”

Why baobabs? The little prince goes on to explain:

“Sometimes there’s no harm in postponing your work until later. But with baobabs, it’s always a catastrophe. I knew one planet that was inhabited by a lazy man. He had neglected three bushes…”

The baobabs

I think of this story every morning as I work through my email inbox. There are two types of people in the world: those who let email pile up in their inbox, and those who adhere to the “inbox zero” approach. I’m in the latter camp. There’s no middle ground.

Once a day, I “tend my planet” by going through every message in my inbox. Some get archived or deleted. Some I skim and save for later reference. Some I must act on immediately. I note the rest in my to-do application for future action.

Among other things, I’m looking out for baobabs. Most emails are one-time engagements. But some hint at bigger projects. These require special care because there’s only so much time available for such things. Too many of these and things spiral out of control.

In some ways, we have it harder than the little prince. Most of us have more than one inbox to tend. I deal with email, Slack, a physical inbox, two physical mailboxes, Facebook messages, LinkedIn messages, Twitter messages, and more. All require constant attention.

I know people with thousands (in some cases, tens of thousands) of emails in their inbox. I shudder when I look at their phones or computers. I wonder, how many baobab seeds are lurking in there? One or two baobabs aren’t bad. In fact, they’re what keep the machinery running. The problem is when you have too many. Sorting them out calls for constant, proactive vigilance.

You can take a vacation once in a while; get a break from the onslaught of information. But watch out! When you come back you must attend to the backlog. Diligence is the price for effectiveness and peace of mind. The alternative is always a catastrophe.

The Treachery of Deepfakes

Ninety years ago, René Magritte painted a pipe. I’m sure you’ve seen the work; it’s among his most famous. Written under the rendering of the object are the words Ceci n’est pas une pipe — “This is not a pipe.” Huh? Well, it isn’t; it’s a representation of a pipe. Clever stuff.

The Treachery of Images

The painting is called La Trahison des images — “The Treachery of Images.” Treachery means to deceive; to betray our trust. The painting tricks us by simulating a familiar object. Aided by the charming image, our mind conceives the pipe. We recall experiences with the real thing — its size, weight, texture, the smell of tobacco, etc. Suddenly we’re faced with a conundrum. Is this a pipe or not? At one level it is, but at another it isn’t.

The Treachery of Images requires that we make a conceptual distinction between the representation of an object and the object itself. While it’s not a subtle distinction — as far as I know, nobody has tried to smoke Magritte’s painting — it’s important, since it highlights the challenges inherent in using symbols to represent reality.

The closer these symbols are to the thing they’re representing, the more compelling the simulation. Compared to that of many of his contemporaries, Magritte’s style is relatively faithful to the “real world.” That said, it’s not what we call photo-realistic. (That is, an almost perfect two-dimensional representation of the real thing. Or rather, a perfectly rendered representation of a photograph of the real thing.)

Magritte’s pipe is close enough. I doubt the painting would be more effective if it featured a “perfect” representation; its “painting-ness” is an important part of what makes it effective. The work’s aim isn’t to trick us into thinking that we’re looking at a pipe, but to spark a conversation about the difference between an object and its symbolic representation.

The distance between us and the simulation is enforced by the medium in which we experience it. You’re unlikely to be truly misled while standing in a museum in front of the physical canvas. That changes, of course, if you’re experiencing the painting in an information environment such as the website where you’re reading these words. Here, everything collapses onto the same level.

There’s a photo of Magritte’s painting at the beginning of this post. Did you confuse it with the painting itself? I’m willing to bet that at one level you did. This little betrayal serves a noble purpose; I wanted you to be clear on which painting I was discussing. I also assumed that you’d know that that representation of the representation wasn’t the “real” one. (There was no World Wide Web ninety years ago.) No harm meant.

That said, as we move more of our activities to information environments, it becomes harder for us to make these distinctions. We get used to experiencing more things in these two-dimensional symbolic domains. Not just art, but also shopping, learning, politics, health, taxes, literature, mating, etc. Significant swaths of human experience collapsed to images and symbols.

Some, like my citing of The Treachery of Images, are relatively innocent. Others are actually and intentionally treacherous. As in: designed to deceive. The rise of these deceptions is inevitable; the medium makes them easy to accept and disseminate, and simulation technologies keep getting better. That’s why you hear in the news about increasing concern over deepfakes.

Recently, someone commercialized an application that strips women of their clothes. Well, not really — it strips photographs of women of their clothes. That makes it only slightly less pernicious; such capabilities can do very real harm. The app has since been pulled from the market, but I’m confident that won’t be the last we see of this type of treachery.

It’s easy to point to that case as an obvious misuse of technology. Others will be harder. Consider “FaceTime Attention Correction,” a new capability coming in iOS 13. Per The Verge, this seemingly innocent feature corrects a long-standing issue with video calls:

Normally, video calls tend to make it look like both participants are peering off to one side or the other, since they’re looking at the person on their display, rather than directly into the front-facing camera. However, the new “FaceTime Attention Correction” feature appears to use some kind of image manipulation to correct this, and results in realistic-looking fake eye contact between the FaceTime users.

What this seems to be doing is re-rendering parts of your face on-the-fly while you’re on a video call so the person on the other side is tricked into thinking you’re looking directly at them.

While this sounds potentially useful, and the technology behind it is clever and cool, I’m torn. Eye contact is an essential cue in human communication. We get important information from our interlocutor’s eyes. (That’s why we say the eyes are the “windows to the soul.”) While meeting remotely using video is nowhere near as rich as meeting in person, we communicate better using video than when using voice only. Do we really want to mess around with something as essential as the representation of our gaze?

In some ways, “Attention Correction” strikes me as more problematic than other examples of deep fakery. We can easily point to stripping clothes off photographs, changing the cadence of politicians’ speeches in videos, or simulating an individual’s speech patterns and tone as either obviously wrong or (in the latter case) at least ethically suspect. Our repulsion makes them easier to regulate or shame off the market. It’s much harder to say that altering our gaze in real-time isn’t ethical. What’s the harm?

Well, for one, it messes around with one of our most fundamental communication channels, as I said above. It also normalizes the technologies of deception; it puts us on a slippery slope. First the gaze, then… What? A haircut? Clothing? Secondary sex characteristics? Given realistic avatars, perhaps eventually we can skip meetings altogether.

Some may relish the thought, but not me. I’d like more human interactions in information environments. Currently, when I look at the smiling face inside the small glass rectangle, I think I’m looking at a person. Of course, it’s not a person. But there’s no time (or desire) during the interaction to snap myself out of the illusion. That’s okay. I trust that there’s a person on the other end, and that I’m looking at a reasonably trustworthy representation. But for how much longer?

Learn a Second Language

Do you want to become a better information architect? Learn a second language.

IA is focused on establishing distinctions. You do this with words. As a result, mastery of language is important for information architects. You master language by reading and writing — especially reading things that are outside your comfort zone. (One of the under-appreciated wonders of reading using tablets and e-readers is that they allow you to look up the definitions of words on the spot.) The broader your vocabulary, the more nuanced the distinctions you’re able to draw. (That said, you should avoid obscure terms when designing something for a mass audience. Not everybody will have as broad a vocabulary as you.)

But even having a broad vocabulary in one language may not be enough. Language is so foundational to how we experience reality that we can easily take it for granted. It’s the ground on which we stand. If we only know the one ground, we risk assuming everyone is standing on it. That isn’t the case.

Learning a new language forces you to realize that languages are constructs. Yes, they all have certain things in common. All languages have words for numbers, for example. But things like categorization schemes can vary significantly. Some languages have category terms that don’t exist in other languages. Some have more categories for a particular domain, others fewer.

You can learn about these things intellectually. But you only grok the differences deeply when you must communicate with people who speak a different language. You start questioning things you’ve taken for granted most of your life, such as figures of speech and metaphors. You become aware of the historical contingencies of languages. None of the major ones have emerged fully formed; they’ve changed and influenced each other over time. And you, too, have the power to influence how they change.

Wittgenstein said that “the limits of my language are the limits of my world.” To know those limits, you must transcend them. Learning a second language — and putting yourself in a position to rely on it — pushes you beyond the limits of your mother tongue. A second language provides contrast, making the edges between distinctions visible. It’s an important skill for people who aspire to design worlds through words.

Design for Long-Term Relevance

Richard Saul Wurman in an interview for Interior Design magazine:

One of the reasons [my firm] went out of business was the ideal piece of architecture at that time was a Michael Graves building and he ruined architecture. I know he’s dead, but when he was alive he was smart and drew well and was a nice person, but he ruined architecture because all the critics made him the king architect doing these decorative buildings that won’t even be a footnote in 20 years. I’m putting this in context. Architects are as good as their clients and what they’re demanding. So, they are doing bling buildings. Look at what just got put up by thoughtful, bright architects—I’ve met every single one of them—in Hudson Yards. The idea of Hudson Yards is that it looks good from a helicopter and New Jersey. Walking around is the opposite of Piazza San Marco. It just isn’t interesting. It’s a fiction that all the architects during the Renaissance were great. What has held up is buildings that people want to occupy.

The Portland Building in August 1982. Photo by Steve Morgan, CC BY-SA 3.0, via Wikimedia

I was in architecture school at a time when Graves’ architecture was still hot. I remember poring over his beautiful drawings and thinking how much better they looked than photographs of the ensuing buildings. That was then; now, both look stale. Not the effect you want when designing something meant to be as durable as a building.

Relatively few things stand the test of time. Those that do — buildings, books, household objects, technologies, etc. — are worth paying attention to. If they remain relevant after taste and popular opinion have moved on, it’s because at some level they address universal needs.

Aspiration: design for long-term relevance. Hard to do for creatures dazzled by an endless array of new capabilities and embedded in cultures that place a premium on innovation.

10 Questions With… Richard Saul Wurman (h/t Dan Klyn)

Learning From Books

Andy Matuschak writing in his blog:

Picture some serious non-fiction tomes. The Selfish Gene; Thinking, Fast and Slow; Guns, Germs, and Steel; etc. Have you ever had a book like this — one you’d read — come up in conversation, only to discover that you’d absorbed what amounts to a few sentences?

The post offers an insightful overview of why books are a less-than-ideal means for learning new things. (It explicitly covers non-fiction and acknowledges there are other reasons to read besides learning.)

Reading this post reminded me of what has turned out to be one of the most powerful (and consequential) learning experiences of my life — and which was centered on a book. Two books, actually: Getting Started With Color BASIC and Getting Started With Extended Color BASIC, manuals that came bundled with Radio Shack’s TRS-80 Color Computer (1980). It is through these books that I acquired a superpower: programming computers.

Early personal computers couldn’t do much out of the box. When you turned one on, you were greeted by a blinking cursor. You were expected to load software to do anything useful with the device. You did this using cartridges (much like game consoles), cassette tape drives, floppy disks (which at the time were prohibitively expensive, and thus rare), or — most commonly — by typing it in yourself. (It was common at the time to see software code in computer magazines and books.)
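For flavor, here’s the kind of program those magazine pages carried: a short number-guessing game. This is my own illustrative sketch, not a listing from any manual, and it’s rendered in Python rather than the numbered BASIC of the era (which varied by machine):

```python
import random

def guessing_game(guesses, secret=None):
    """An illustrative Python rendering of the classic number-guessing
    game that 1980s magazines printed as BASIC type-in listings.
    `guesses` is a sequence of attempts, so a round can be played
    without interactive input."""
    if secret is None:
        secret = random.randint(1, 100)  # the computer "thinks of" a number
    for tries, guess in enumerate(guesses, start=1):
        if guess < secret:
            print("TOO LOW")             # period listings shouted in all caps
        elif guess > secret:
            print("TOO HIGH")
        else:
            print(f"YOU GOT IT IN {tries} TRIES")
            return tries                 # number of guesses used
    return None                          # ran out of guesses
```

The original would have spanned a dozen or so numbered BASIC lines with `PRINT` and `GOTO` statements, but the logic was about this simple — simple enough for a kid to type in, debug, and, eventually, modify.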

As a result, “learning computers” meant learning to program them (mostly in BASIC, an early computer programming language). Every popular personal computer of the time booted to a BASIC interpreter by default. Each manufacturer implemented its own dialect(s) of the language, so you needed to learn the language anew for each model.

The two Getting Started manuals taught the dialect used by the Color Computer, the first computer I ever owned. They assumed the reader was encountering BASIC for the first time — a safe assumption during the early 1980s — so they started from the very beginning and eventually moved to the things that were specific to the CoCo platform.

This pair of books is one of the best examples I’ve encountered of how to teach a complex subject clearly, simply, and inexpensively. Even a committed 10-year-old kid could develop some degree of mastery using only the texts (there was no internet at the time, of course); I emerged from their pages with the ability to write primitive video games.

How did they do this? Through a combination of sound structure, clear writing, and frequent, relevant interactive exercises (to the point of Mr. Matuschak’s post). But the “committed” part is not to be underestimated: they also worked because the learner was excited by the subject and committed to learning. I devoured the Getting Started books and revisited them often. I suspect the effectiveness (or lack thereof) of book-based learning has much to do with the learner’s degree of interest in the subject.

Why books don’t work | Andy Matuschak