A Data Primer for Designers

My friend Tim Sheiner, writing for the Salesforce UX blog:

…demand is high for designers who can create experiences that display data in useful and interesting ways. In my personal experience this became much, much easier to do once I’d learned to speak the crisp, precise and slightly odd language used by technical people for talking about data.

What follows is a phenomenal post that clearly explains much of what you need to know to understand and speak competently about data. A must-read for anybody involved in designing for digital information environments.

Designer’s Field Guide to Data

Uses for YouTube

YouTube has long been in the “guilty pleasure” category for me: a source of vacuous entertainment. There’s the hit of nostalgia upon discovering old episodes of a show you enjoyed as a child, vicarious consumption through unboxing videos, the mildly voyeuristic thrill of peeking down other people’s rabbit holes. Enjoyable as these uses are, I’ve always felt somewhat guilty about them; they’ve been a (mostly) pleasant, if not entirely harmless, waste of time.

But something has changed recently: I’ve found myself getting real value from YouTube. Instead of (or rather, in addition to) turning to the platform for mindless distraction, I’m coming to it more for task-specific training. For example, yesterday I learned how to mend a pair of jeans that had a hole in them. I’ve also used YouTube to learn about the characteristics of different types of fountain pen inks, the proper form for a yoga pose I find particularly challenging, how to play one of my favorite songs (Rush’s Subdivisions) on the piano, and critical information that helped me with various work projects.

Which is to say, I’m increasingly using YouTube not just for entertainment, but also for education. Learning these things in video format has been much more efficient than doing so by other means. I can see what the other person is showing me, and I can rewind, pause, and replay to go at my own pace. There are often several options to choose from, with varying levels of skill: both skill at the activity I’m trying to learn and the presenter’s ability as an instructor.

Most of these educational videos aren’t slickly produced by professional educators; they’re made by individuals sharing their passions. What they lack in polish and structure, they often make up for in charm and enthusiasm. In short, they’re educational and entertaining. But it’s a new type of entertainment, very different from the prime time TV programming of old.

YouTube offers an ad-free tier called YouTube Premium. I’ve long resisted paying for it given how many other streaming entertainment channels I’m already paying for. But considering how I’m actually using these services, I’ve decided to give it a go. If I had to choose between two paid streaming services, would I pick the one that only shows me slickly produced movies and TV shows, or the one where I’m learning useful life skills?

(One complaint I have about YouTube Premium right now is that it seems to aspire to become another “just entertainment” medium. Rather than foist second-tier movies on me, I wish it’d be better at helping me discover new things to learn.)

Wikipedia as Information Infrastructure

Wikipedia is more than a publication. As I point out in Living in Information, Wikipedia is also the place where this publication is created. At its scale, it couldn’t happen otherwise. But Wikipedia is more than that: it’s also becoming a key part of our society’s information infrastructure. Other systems increasingly rely on it for the “authoritative” versions of particular concepts.

This works well most of the time. But it’s not perfect, and can lead to weird, unexpected consequences. For example, a Wikipedia entry is part of the reason why Google says I’m dead. More recently, a Wikipedia hack led to Siri showing a photo of a penis whenever a user asked about Donald Trump. While the former example is probably due to bad algorithms on Google’s part, the latter seems to be a fault with Wikipedia’s security mechanisms.

The people who manage Wikipedia are in an interesting situation. Over time they’ve created a fantastic system that allows for the efficient creation of organized content from the bottom-up at tremendous scale. They’ve been incredibly successful. Alas, with success comes visibility and influence. The more systems there are that depend on Wikipedia content, the more of a target it becomes for malicious actors.

This will require that the team re-think some of the openness and flexibility of the system in favor of more top-down control. How will this scale? Who will have a say on content decisions? How will Wikipedia’s governance structures evolve? These discussions are playing out right now. Wikipedia is a harbinger of future large-scale generative information environments, so it behooves us all to follow along.

The Eponymous Laws of Tech

Dave Rupert has a great compendium of “Laws” we frequently encounter when working in tech. This includes well-known concepts like Moore’s Law, Godwin’s Law, and Dunbar’s Number alongside some I hadn’t heard before, such as Tesler’s Law:

"Every application must have an inherent amount of irreducible complexity. The only question is who will have to deal with it."
Tesler’s Law, or the “Law of Conservation of Complexity,” explains that not every piece of complexity can be hidden or removed. Complexity doesn’t always disappear; it’s often just passed around. Businesses need to fix these complex designs, or that complexity is passed on to the user. Complex things are hard for users. 1 minute wasted by 1 million users is a lot of time, whereas it probably would have taken only a fraction of those minutes to fix the complexity. I cringe thinking about the parts of my products that waste users’ time by either being brokenly complex or by having unclear interactions.
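To put that figure in perspective, here’s a quick back-of-the-envelope calculation (mine, not Rupert’s):

    1,000,000 users × 1 minute = 1,000,000 minutes
                               ≈ 16,667 hours
                               ≈ 694 days, or nearly two years of cumulative user time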

Good to know!

The Eponymous Laws of Tech

Kranzberg’s Laws of Technology

Michael Sacasas explains Kranzberg’s Six Laws of Technology, “a series of truisms deriving from a longtime immersion in the study of the development of technology and its interactions with sociocultural change”:

  1. Technology is neither good nor bad; nor is it neutral.
  2. Invention is the mother of necessity.
  3. Technology comes in packages, big and small.
  4. Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.
  5. All history is relevant, but the history of technology is the most relevant.
  6. Technology is a very human activity—and so is the history of technology.

A nuanced take on technology’s role in shaping our lives and societies.

Kranzberg’s Six Laws of Technology, a Metaphor, and a Story

The Allure of Novelty

It’s that time of year again: Tech companies are announcing new products in preparation for the holiday season. Over the past month, a slate of new phones, tablets, computers, and accessories have been announced. You may be considering buying one or more of these new devices. It’s worth thinking about whether or not you really need them.

As an Apple customer (and something of a gadget junkie), I’ve been intrigued by the new Apple Watch and the new iPad Pro. I already own earlier editions of both devices and was perfectly happy with them just a few months ago. But now I’m not. Now, when I look at my Apple Watch, I wonder: what if I could use it to play podcasts when I go running? What if its battery lasted the whole day? What if it was a little bit faster? What if… ? I know about the newer model, and can’t help but think about all the neat things it can do that mine can’t.

The iPad is a different story. While the new one looks quite nice, it’s not as clear to me how it would make my life better in ways the one I own can’t. Most of the new models’ features seem to be cosmetic: larger screens, smaller bezels, slightly different form factors, etc. Perhaps the new models are also a bit faster, but not in ways that would make much difference; my current iPad is plenty fast. The new Apple Pencil—the accessory I use most with the iPad—also looks much nicer than the old one, but seems functionally similar to the one I already own.

Would it be cool to have new devices for the holidays? Sure, it’d be fun. But it’s worth considering the tradeoffs that come with them. The most obvious, of course, is money. These things aren’t cheap! But there’s also the time they require: Time to research what to buy, time to set things up/migrate from older devices, time dealing with support if things go wrong. (I purchased a MacBook Pro earlier this year, and it’s already been back to Apple for service four times!) New tech can be quite a time sink.

How do you determine if the tradeoffs are worth it? For me, it comes down to figuring out whether I really need a new device or not. These questions help:

  • Does the new model enable me to do something I currently can’t?
  • Does the new model enable me to do something I can do now, but significantly faster or more efficiently?
  • Is there something I already own (or have access to) that could help me accomplish similar results, even if a little less conveniently?
  • Do I have the money/time to mess around with this stuff now? Or are there other things that require my money/attention with more urgency?
  • What do the new devices do worse than the old ones? (For instance, there are a few things about the new iPads that work better in the model I currently own!)
  • Am I using my current devices to the fullest of their capacity?

Novelty can be very alluring, especially during this time of year when advertising is in full force. But when I reflect upon these questions, I often realize that I may be better served by keeping my current devices longer and investing my time and money in other things.

Mobile Computing at a Different Level

There are many ways in which people use computers. (I’m talking about all sorts of computers here, including smartphones and tablets.) Some people’s needs are very simple; they may use the machines merely to stay connected with their world. Other people have more complex needs. You can organize these various uses on a continuum that ranges from least to most powerful. Consider at least three levels:

  1. Accessing Content: The computer is used primarily to find information on the internet. Users at this level interact with others through email or social networks, but do so lightly. They spend the bulk of their on-device time accessing content created by others. Many casual users are on this level; it’s also where they have the least power.
  2. Creating Content: In addition to the previous level, the computer is also used as a content-creation device. While users at this level may spend a considerable amount of time accessing content created by others, they also produce lots of content of their own. Many people who use computers professionally are on this level.
  3. Tweaking Workflows: In addition to the previous two levels, the computer is also used to modify how the computer itself works. This includes enabling new workflows through programming or scripting. Of the three, this is the level that affords users the most power.

(There’s an even higher level, which is closer to the machine’s metal and affords a very small subset of people tremendous power. I’m not going to get into that here; I’m concerned with how most of us interact with these devices.)

Consider a transportation analogy. On level one, you are a passenger in public transport. On level two, you are driving a car. On level three, you are a mechanic, capable of making modifications to the vehicle to fix it or improve its performance. As with transportation, the higher the level, the more complexity the user must deal with.

Which level are you? If you’re like most people, you’re at either level 1 or 2. This is OK; very few people take advantage of level 3. Learning to program requires great effort, and for most uses the payoff may not seem worth the time required.

I was around eight years old when I first interacted with a computer: a TRS-80 Model I. As with most machines of this vintage (late 1970s), when you sat down in front of a Model I you were greeted by a command prompt:

Image: TRS-80 Model I command prompt (https://picclick.com/TRS-80-Radio-Shack-Tandy-Model-1-Video-Display-323191180180.html)

The computer could do very little on its own. You needed to give it commands, most often in the BASIC programming language (which, incidentally, just turned 50). So level 3 was the baseline for using computers at this time. We’ve come a long way since then. Most computers are now like appliances; you don’t need to know much about how they work under the hood in order to take advantage of them. However, knowing even a little bit about how they work can grant you superpowers.

Level 3 has come a long way from the days of the TRS-80. I’ve been messing around with the new Shortcuts functionality in iOS 12, and am very impressed with how easy it is to string together several apps to accomplish new things. For example, the Home ETA shortcut strings together the Apple Maps, Contacts, and Messages apps. When you install the shortcut, you configure it with your home’s street address and the contact information of the person you want notified. When you activate the shortcut (which you can do through various means, including an icon on your home screen), Apple Maps figures out your current location and uses it to calculate how far you are from home. Then it passes that information to Messages, which sends your estimated time of arrival to your selected contact.
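To make the chaining concrete, here’s a minimal sketch of the same idea in Python. The helper functions are hypothetical stand-ins for what Shortcuts delegates to Apple Maps and Messages; none of this is Apple’s actual API. It just illustrates how small, single-purpose pieces can be strung together into a new workflow.

    import math

    # Hypothetical stand-ins for the Maps and Messages steps; not real Apple APIs.
    HOME = (37.7749, -122.4194)      # assumed home coordinates (configured once)
    CONTACT = "+1-555-0100"          # assumed contact to notify (configured once)

    def current_location():
        """Stand-in for the location step: return (latitude, longitude)."""
        return (37.8044, -122.2712)  # pretend we're across the bay

    def estimate_eta_minutes(origin, destination, avg_speed_kmh=40.0):
        """Rough ETA: great-circle distance divided by an assumed average speed."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*origin, *destination))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        distance_km = 6371.0 * 2 * math.asin(math.sqrt(a))
        return distance_km / avg_speed_kmh * 60

    def send_message(recipient, text):
        """Stand-in for the Messages step: print instead of actually texting."""
        print(f"To {recipient}: {text}")

    def home_eta():
        """Chain the pieces together, the way the shortcut strings apps together."""
        eta = estimate_eta_minutes(current_location(), HOME)
        send_message(CONTACT, f"I should be home in about {eta:.0f} minutes.")

    home_eta()

Each function does one small thing; the value comes from the chain, which is the same point the shortcut makes.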

It’s not mind-blowing functionality, but the fact that iPhones and iPads can do this at all is impressive. iOS users can now create arbitrary connections between components of the system, opening up possibilities that were previously difficult or impossible. Shortcuts also promises to make these devices much better as productivity tools. It’s the old Unix “small pieces loosely joined” philosophy, applied to a platform designed to be less a computer than an appliance. It opens up level 3 possibilities for level 1 and 2 users, without asking that they become mechanics.

The Tricorder on Your Wrist

I bought the first generation Apple Watch (colloquially known as the “Series 0”) when it came out. Doing so was a measured leap of faith; it wasn’t entirely clear to me at the time what the Watch was for. Most of its features were things I could already do with my iPhone, albeit a bit less conveniently. Track my runs? Check. Show notifications? Check. Play music? Check. Tell the time? Check. Then there was the inconvenience of having another device to charge and the expense of periodic hardware upgrades.

Still, as a digital designer and strategist, it’s important for me to be up to date on form factors and technologies. I also trust Apple. So I bought the watch and went all in, using it daily to track my activity. Although I’ve grown to really like the Apple Watch, I haven’t seen it as an essential part of my everyday carry kit like the iPhone is. I can easily make it through a day without my watch, which is not something I can readily say about my phone.

To Apple’s credit, they’ve improved the product tremendously over the past three years. (Sometimes by making major changes to fundamental interactions in its operating system, which was somewhat awkward at launch.) Even though it’s rather slow now, and its battery doesn’t last as long as it used to, my Watch is better today than when I bought it. (A notable example: I use it dozens of times every day to automatically log into my Mac, a time saver.) Apple has also released subsequent iterations of the hardware that have added significant improvements such as GPS tracking and a cell radio. Still, I’ve resisted the impulse to upgrade. The Watch is not an inexpensive purchase (I prefer the stainless steel models), and as said above, I haven’t thought of it as indispensable.


If These Walls Had Ears

In early 1896, the Lumière brothers exhibited one of the first motion pictures ever made: THE ARRIVAL OF A TRAIN AT LA CIOTAT. With a run time of less than a minute, THE ARRIVAL OF A TRAIN isn’t long. It also has a straightforward premise: the movie consists of a single stationary shot of a steam train pulling into a station, and the subsequent disembarkment of passengers. The shot is composed so the camera points down the track, with the locomotive coming towards it.

THE ARRIVAL OF A TRAIN is famous not just because it was among the first movies shown in public; it’s also famous because of the legend that’s grown around it. Supposedly, the first showings caused audiences to panic, with some people scrambling to the exits. Unaccustomed to moving pictures, these early movie-goers somehow thought there was a real train barreling towards them, and ran for their lives.

Whether this happened exactly as described is inconsequential. The story speaks to the power of the motion picture medium to conjure illusions and has therefore become enshrined as the founding myth of cinema. It also speaks to how information can alter our sense of place, especially when we’re interacting with it in novel ways. As such, it’s a good analog for some uncanny experiences we are encountering today.

Recently, a Portland woman named Danielle received a call from one of her husband’s employees. “Unplug your Alexa devices right now,” this person said. “You’re being hacked.” The employee then described in detail a conversation that had happened earlier inside Danielle’s home. Apparently, the family’s Amazon Echo device was recording their conversations and sharing them with others.

In the subsequent investigation of the incident, Amazon’s engineers concluded that somebody had uttered a particular set of phonemes during the conversation that the Echo interpreted as its activation command, followed by a command to send a message to the person who then received the recordings. In other words, it wasn’t a hack; it was an unintentional triggering of one of the Echo’s features. (You can read about this story on The Verge.)

I can’t help but wonder how this incident has altered this family’s relationship with the physical environment of their home. When people first experienced THE ARRIVAL OF A TRAIN at the end of the 19th Century, they had never seen anything like it — except in “real life.” The first audiences were inexperienced with the new information delivery medium, so it’s understandable that they felt confused or even panicky. Whatever their reaction was, their sense of being in a particular place was undoubtedly transformed by the experience.

Even now, over 120 years later, film retains that power. Think about the last time you went to a movie theater. The experience of sitting in the theater is very different before the movie starts and once it’s playing. How long does it take for you to stop being conscious of the physical environment of the theater as you become engrossed by the film? (This is one of the reasons why contemporary movies are preceded by reminders to turn off your electronic devices; you’re there to have your attention drawn away from physical reality for a couple of hours, and you don’t want anything yanking it back.)

Always-on smart devices such as the Echo, Google Home, and Apple HomePod change the nature of our physical environments: They add an information interaction layer to the place that wasn’t there before you turned on the cylinder in the room. Unlike a movie, however, these devices aren’t designed to capture your attention. In fact, these devices are designed to be unobtrusive; you’re only meant to be aware of their presence when you summon them by issuing a verbal command.

One can only assume that the form of these things is a compromise with the constraints imposed by current technology and the laws of physics. The ideal form for this class of devices is completely invisible; we want them to be perceived not as devices at all, but as a feature of the environment. But is this really the ideal? Is it desirable for our physical environments to be always listening to us in the background?

Partly due to their design, we’re responding to these smart cylinders in a way that stands in stark contrast to how we received THE ARRIVAL OF A TRAIN. Instead of panicking and running out of the room, we’re placidly deploying these instruments of contextual collapse into our most intimate environments. What does the possibility of inadvertent broadcast do to our ability to speak frankly with each other, to rage with anger, to say sweet, corny things to each other, to share with our kids the naughty delight of “pull-my-finger” jokes?

Those panicky Parisians of 1896 would run out of the theater to a perfectly ordinary street, no threatening locomotive in sight. I bet they initially felt like fools. Soon enough, the novelty would pass; eventually, they’d be able to sit through — and enjoy — much longer, more exciting film entertainments. What about us? Is panic merited when we discover our rooms have ears and that others can listen to anything we say? Will we be able to run out of these rooms? How will we know?