Design for Long-Term Relevance

Richard Saul Wurman in an interview for Interior Design magazine:

One of the reasons [my firm] went out of business was the ideal piece of architecture at that time was a Michael Graves building and he ruined architecture. I know he’s dead, but when he was alive he was smart and drew well and was a nice person, but he ruined architecture because all the critics made him the king architect doing these decorative buildings that won’t even be a footnote in 20 years. I’m putting this in context. Architects are as good as their clients and what they’re demanding. So, they are doing bling buildings. Look at what just got put up by thoughtful, bright architects—I’ve met every single one of them—in Hudson Yards. The idea of Hudson Yards is that it looks good from a helicopter and New Jersey. Walking around is the opposite of Piazza San Marco. It just isn’t interesting. It’s a fiction that all the architects during the Renaissance were great. What has held up is buildings that people want to occupy.

The Portland Building in August 1982. Photo by Steve Morgan, CC BY-SA 3.0, via Wikimedia Commons.

I was in architecture school at a time when Graves’ architecture was still hot. I remember poring over his beautiful drawings and thinking how much better they looked than photographs of the ensuing buildings. That was then; now, both look stale. Not the effect you want when designing something meant to be as durable as a building.

Relatively few things stand the test of time. Those that do — buildings, books, household objects, technologies, etc. — are worth paying attention to. If they remain relevant after taste and popular opinion have moved on, it’s because at some level they address universal needs.

Aspiration: design for long-term relevance. Hard to do for creatures dazzled by an endless array of new capabilities and embedded in cultures that place a premium on innovation.

10 Questions With… Richard Saul Wurman (h/t Dan Klyn)

Learning From Books

Andy Matuschak writing in his blog:

Picture some serious non-fiction tomes. The Selfish Gene; Thinking, Fast and Slow; Guns, Germs, and Steel; etc. Have you ever had a book like this — one you’d read — come up in conversation, only to discover that you’d absorbed what amounts to a few sentences?

The post offers an insightful overview of why books are a less-than-ideal means for learning new things. (It explicitly covers non-fiction and acknowledges there are other reasons to read besides learning.)

Reading this post reminded me of what has turned out to be one of the most powerful (and consequential) learning experiences of my life — and which was centered on a book. Two books, actually: Getting Started With Color BASIC and Getting Started With Extended Color BASIC, manuals that came bundled with Radio Shack’s TRS-80 Color Computer (1980). It is through these books that I acquired a superpower: programming computers.

Early personal computers couldn’t do much out of the box. When you turned one on, you were greeted by a blinking cursor. You were expected to load software to do anything useful with the device. You did this using cartridges (much like game consoles), cassette tape drives, floppy disks (which at the time were prohibitively expensive, and thus rare), or — most commonly — by typing it in yourself. (It was common at the time to see software code in computer magazines and books.)

As a result, “learning computers” meant learning to program them (mostly in BASIC, an early programming language). Every popular personal computer of the time booted to a BASIC interpreter by default. Each manufacturer implemented its own dialect(s) of the language, so you had to learn it anew for each model.

The two Getting Started manuals taught the dialect used by the Color Computer, the first computer I ever owned. They assumed the reader was encountering BASIC for the first time — a safe assumption during the early 1980s — so they started from the very beginning and eventually moved on to the features specific to the CoCo platform.

This pair of books is one of the best examples I’ve encountered of how to teach a complex subject clearly, simply, and inexpensively. Even a committed 10-year-old kid could develop some degree of mastery using only the texts (there was no internet at the time, of course); I emerged from their pages with the ability to write primitive video games.

How did they do this? Through a combination of sound structure, clear writing, and frequent and relevant interactive exercises (which speaks to the point of Mr. Matuschak’s post). But the “committed” part is not to be underestimated: the books also worked because the learner was excited by the subject and committed to learning. I devoured the Getting Started books and revisited them often. I suspect the effectiveness (or lack thereof) of book-based learning has much to do with the learner’s degree of interest in the subject.

Why books don’t work | Andy Matuschak

Blogging as Gardening

A lovely blog post by Marc Weidenbaum:

The year 2019 is, according to Merriam-Webster, among other sources that track such things, the 20th anniversary of the origin of the word “blog.” Anniversaries are welcome opportunities to renew vows, to rejuvenate traditions, and to build on foundations.

2019 is also the 20th anniversary of jarango.com. My site started as my online business card but soon turned into a (not very active) blog. With the disappearance of Google Reader and the coincident rise of social media, my writing here dwindled to a couple of posts a year.

That changed a couple of years ago. I was in the final stretch of writing Living in Information and wanted to keep writing. (I know, weird.) I was also contemplating the next stage of my career as a solo consultant and thinking about ways of getting my ideas out into the world. And, like many others, I’d started to question the value of what I was doing on social media. It seemed to me at the time that I was creating an awful lot of content to feed somebody else’s bottom line. If I was going to be writing anyway, why not do so in my own information environment? A return to regular blogging was an obvious step under these circumstances.

Blogging is (unfortunately) an unusual enough activity these days that people often ask me why I do it. I tell them how much pleasure I get from writing (as I said, weird) and how it draws some attention to my services. But I also tell them the most important benefit I get from this blog is something Mr. Weidenbaum highlights in his post:

As Iago says in “Othello,” in a different context, “our wills are gardeners.” Blogs are gardens of ideas. (I mention gardens a lot when I talk about blogs. It’s because gardening is a key metaphor in generative music and my blog activism is a stealth campaign for generative music. Just kidding. Kinda. It’s mostly because it’s a useful metaphor for blogs, and I have a garden.)

(If you’ve read Living in Information, you’ve seen the metaphor of gardening in generative music and how we can use it to build resilience online.)

When it comes to ideas, the blog can serve as a public sketchbook — that is, one that 1) exposes ideas early and often, 2) to people with a wider variety of perspectives, so that 3) the ideas can be strengthened (or discarded) through feedback. Writing here allows me to share things I’ve learned and things I’m thinking about very quickly — that is, in an unpolished state. This often results in feedback and pointers that make the ideas stronger. Writing — even “quick and dirty” writing — helps me structure my thinking. I’ve often discovered what I really think about a subject by having to figure out how to tell you about it.

So thank you for indulging me by reading this far. Please do get in touch if you have any thoughts on the stuff you read here.

Bring Out Your Blogs

On “Content”

Must-read post by Om Malik:

“Content” is the black hole of the Internet. Incredibly well-produced videos, all sorts of songs, and articulate blog posts — they are all “content.” Are short stories “content”? I hope not, since that is one of the most soul-destroying of words, used to strip a creation of its creative effort.

The World Wide Web is the most powerful medium for learning, sharing, and understanding that our species has created. Our descendants will judge us harshly for the first thing we tried to do with it: commoditize our attention by packaging our insights and humanity into transactional units.

(The optimist’s take: It’s still early days; we haven’t yet tapped the web’s full potential.)

The Problem With “Content” — On my Om

TAOI: Podcasts on Spotify

The architecture of information:

Last week, Apple announced the end of its venerable iTunes application on the Mac. In its stead, the next version of macOS will feature three applications: Music, Podcasts, and TV.

This change has been a long time coming. Managing media such as music, podcasts, and movies is one of the uses of computers that requires regular folks to think like information architects; they must consider categorization schemes, the nomenclature of artist names, the integrity of their metadata, and so on. (Most people lack the language to describe the challenges in these terms, of course.) By separating media types into their own applications (as has long been the case in iOS), Apple eliminates the confusion inherent in presenting different types of stuff that nevertheless share similar structural underpinnings.

Now Spotify seems to be moving in a similar direction. According to an article on The Verge, an upcoming release of the app will more clearly separate podcasts from other types of audio. It will also introduce a clearer structure for podcasts:

Podcasts on Spotify are currently only organized by shows that users follow, unplayed shows, and downloaded shows, which is to say it’s chaotic. In the new design, the podcast category will be broken up into three sections: episodes, downloads, and shows.

Image: Spotify

I haven’t yet seen the new design myself (nor am I a Spotify user). However, this strikes me as yet another example of how businesses are using information architecture to make their products and services more competitive. As I’ve said before, good IA is good business.

Spotify’s officially separating podcasts and music in premium users’ libraries

The Dollar Value of Metadata

As software eats more of the world, it becomes increasingly evident just how important it is to structure information correctly. It’s not just about finding and understanding stuff; in some cases, lacking the right structure can be costly. One such case is that of music artists, who — according to an article on The Verge — are leaving billions of dollars on the table due to bad metadata:

Metadata sounds like one of the smallest, most boring things in music. But as it turns out, it’s one of the most important, complex, and broken, leaving many musicians unable to get paid for their work. “Every second that goes by and it’s not fixed, I’m dripping pennies,” said the musician, who asked to remain anonymous because of “the repercussions of even mentioning that this type of thing happens.”

Entering the correct information about a song sounds like it should be easy enough, but metadata problems have plagued the music industry for decades. Not only are there no standards for how music metadata is collected or displayed, there’s no need to verify the accuracy of a song’s metadata before it gets released, and there’s no one place where music metadata is stored. Instead, fractions of that data is kept in hundreds of different places across the world.
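To make “metadata” a bit more concrete: in this context, it refers to the structured descriptive information attached to a recording, such as titles, identifiers, performer and songwriter credits, and ownership splits. A hypothetical, much-simplified record might look something like the sketch below; the field names and values are illustrative, not an industry standard.

```python
# A hypothetical, much-simplified record of song metadata.
# Field names and values are illustrative; there is no single industry standard.
song_metadata = {
    "title": "Example Song",
    "isrc": "US-XYZ-19-00001",  # recording identifier; made-up value
    "performers": ["Example Artist"],
    "songwriters": [
        {"name": "Writer One", "role": "composer", "split": 0.5},
        {"name": "Writer Two", "role": "lyricist", "split": 0.5},
    ],
    "publisher": "Example Publishing Co.",
    "label": "Example Records",
    "release_date": "2019-05-24",
}

# Royalties flow (or stall) based on records like this one. A quick sanity
# check, of the kind the industry lacks a shared mechanism for:
splits_add_up = sum(w["split"] for w in song_metadata["songwriters"]) == 1.0
```

When copies of this information live in hundreds of databases, with no verification and no agreed-upon format, it’s easy to see how payments get lost.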

Although its description of what metadata is could be clearer (which I empathize with; this isn’t easy to describe to a general audience), the article does a pretty good job of highlighting some common issues that arise when organizations don’t deal with this stuff properly: a lack of standard frameworks, bad information, no clear mechanisms for collaboration, lack of agreement between the various parties involved, etc. It’s worth your attention:

Metadata is the Biggest Little Problem Plaguing the Music Industry

A Key to the Design Problem

One of my favorite presentations about design is an interview with Charles Eames, created for the exhibition “Qu’est-ce que le design?” at the Musée des Arts Décoratifs in Paris:

Speaking for himself and his partner Ray, Eames answers questions from curator Mme. L’Amic on the nature of design. They cover lots of ground in the span of a few minutes. Eventually, they come around to the role of constraints in the design process:

L’Amic: Does the creation of design admit constraint?

Eames: Design depends largely on constraints.

L’Amic: What constraints?

Eames: The sum of all constraints. Here’s one of the few effective keys to the design problem: the ability of the designer to recognize as many of the constraints as possible; his willingness and enthusiasm for working within these constraints — constraints of price, of size, of strength, of balance, of surface, of time, and so forth. Each problem has its own peculiar list.

L’Amic: Does design obey laws?

Eames: Aren’t constraints enough?

Mme. L’Amic eventually asks Eames if he’s ever been forced to accept compromises. His reply is gold: “I don’t remember ever being forced to accept compromises, but I’ve willingly accepted constraints.”

The interview ends on an ellipsis. But before that, Eames delivers a great line:

L’Amic: What do you feel is the primary condition for the practice of design and its propagation?

Eames: The recognition of need.

The whole thing is worth your attention:

Design Q & A: Charles and Ray Eames

The Bridge Model

In a 2008 paper in ACM’s Interactions, Hugh Dubberly, Shelley Evenson, and Rick Robinson presented the Analysis-Synthesis Bridge Model, which describes how designers move from an understanding of a problem domain to a proposed solution. It’s laid out along two dimensions:

Bridge model matrix

On the left half, you have the current state you’re addressing, while the right half represents the future (changed) state. (The authors refer to “the solution, preferred future, concept, proposed response, form.”) The bottom row corresponds to tangible conditions in the world that we can observe and interact with, while the top row refers to abstract models of those things. The design process goes from the lower left quadrant (a solid understanding of conditions in the “real” world) through abstraction, towards a tangible construct that represents a possible future:

Bridge model
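In case the diagram above doesn’t come through, here’s a rough sketch of the four quadrants and the path the model traces through them. The labels are my paraphrase, not the authors’ exact terms.

```python
# The four quadrants of the Analysis-Synthesis Bridge Model (paraphrased labels).
# Columns: current state vs. future state; rows: concrete vs. abstract.
quadrants = {
    ("concrete", "current"): "what is: observations, research",
    ("abstract", "current"): "model of what is: analysis, interpretation",
    ("abstract", "future"): "model of what could be: concepts, frameworks",
    ("concrete", "future"): "what could be: prototypes, the designed form",
}

# The process bridges analysis and synthesis by passing through abstraction:
path = [
    ("concrete", "current"),  # understand conditions in the "real" world
    ("abstract", "current"),  # model the current state
    ("abstract", "future"),   # model a preferred future state
    ("concrete", "future"),   # give that future a tangible form
]
```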

While it seems to imply a clear linear progression (something I’ve seldom experienced in real projects), this model corresponds closely to how I design — especially when dealing with complex domains. I can’t sketch tangible structures (e.g., wireframes, sitemaps) without having 1) a solid understanding of the domain and 2) models that describe it. This requires spending time with abstract models — and abstraction makes people uncomfortable. Clients want to get as quickly as possible to the lower right quadrant, where they can see and interact with things that look like the thing they’ve hired me to produce (e.g., prototypes).

But it’s important to acknowledge that when dealing with complex systems, you’re doing clients a disservice by jumping straight to screens. You really must figure out the structures that underlie the domain first, and that requires devising models — both of the current and future states. The bridge model is a useful tool to help explain how the process works and why abstraction is important to a successful outcome.

The Analysis-Synthesis Bridge Model

Karl Popper on Definitions

Although it’s less common today, my peer community has long engaged in what we call DTDT — “Defining the Damned Thing.” The term describes a discussion that devolves into the definition of terms. For example, a discussion about user experience design may lead someone to ask, “What do you mean by ‘experience’?” after which the conversation can go down a semantic rabbit hole.

Some folks have a strong aversion to DTDT. However, it’s crucial to ensure that we’re aligned on meaning — especially when we’re using relatively new terms. Despite its popularity among designers and techie folks, “user experience” is still a new term; I’d bet that most people don’t have a clear grasp of what it means. So there’s value in ensuring that everybody’s on the same footing with the language we’re using.

As my friend Andrew Hinton has eloquently written, definitions play an important role in a maturing discipline — and that necessitates these conversations. That said, there’s a flipside to DTDT: it can give the illusion that intelligent discussion is happening when, in fact, no progress is being made. I suspect this is what upsets most people who protest against DTDT.

I was reminded of this issue when I saw this clip from an interview with the philosopher Karl Popper:

In my opinion, it’s a task in life to train oneself to speak as clearly as possible. This isn’t achieved by paying special attention to words, but by clearly formulating theses, so formulated as to be criticizable. People who speak too much about words or concepts or definitions don’t actually bring anything forward that makes a claim to truth. So you can’t do anything against it. A definition is a pure conventional matter.

He goes on to expand on why he thinks definitions aren’t helpful to philosophy:

They only lead to a pretentious, false precision, to the impression that one is particularly precise. But it’s a sham precision, it isn’t genuine clarity. For that reason, I’m against the discussion of terms and definitions. I’m rather for plain, clear speaking.

That’s the goal: alignment through plain, clear speaking.

Karl Popper on Definitions (1974)