Mobile Computing at a Different Level

There are many ways in which people use computers. (I’m talking about all sorts of computers here, including smartphones and tablets.) Some people’s needs are very simple; they may use the machines merely to stay connected with their world. Other people have more complex needs. You can organize these various uses on a continuum that ranges from least to most powerful. Consider at least three levels:

  1. Accessing Content: The computer is used primarily to find information on the internet. Users at this level interact with others through email or social networks, but do so lightly. They spend the bulk of their on-device time accessing content created by others. Many casual users are on this level; it’s also where they have the least power.
  2. Creating Content: In addition to the previous level, the computer is also used as a content-creation device. While users at this level may spend a considerable amount of time accessing content created by others, they also produce lots of content of their own. Many people who use computers professionally are on this level.
  3. Tweaking Workflows: In addition to the previous two levels, the computer is also used to modify how the computer works. This includes enabling new workflows through programming or scripting. This level affords users the most power.

(There’s an even higher level, which is closer to the machine’s metal and affords a very small subset of people tremendous power. I’m not going to get into that here; I’m concerned with how most of us interact with these devices.)

Consider a transportation analogy. On level one, you are a passenger on public transport. On level two, you are driving a car. On level three, you are a mechanic, capable of modifying the vehicle to fix it or improve its performance. As with transportation, the higher the level, the more complexity the user must deal with.

Which level are you? If you’re like most people, you’re at either level 1 or 2. This is OK; very few people take advantage of level 3. Learning to program requires great effort, and for most uses the payoff may not seem worth the investment of time required.

I was around eight years old when I first interacted with a computer: a TRS-80 Model I. As with most machines of this vintage (late 1970s), when you sat down in front of a Model I you were greeted by a command prompt:

Image: https://picclick.com/TRS-80-Radio-Shack-Tandy-Model-1-Video-Display-323191180180.html

The computer could do very little on its own. You needed to give it commands, most often in the BASIC programming language (which, incidentally, just turned 50). So level 3 was the baseline for using computers at the time. We’ve come a long way since then. Most computers are now like appliances; you don’t need to know much about how they work under the hood in order to take advantage of them. However, knowing even a little bit about how they work can grant you superpowers.

Level 3 has come a long way from the days of the TRS-80. I’ve been messing around with the new Shortcuts functionality in iOS 12, and am very impressed with how easy it is to string together several apps to accomplish new things. For example, the Home ETA shortcut strings together the Apple Maps, Contacts, and Messages apps. When you install the shortcut, you configure it with your home’s street address and the contact information of the person you want notified. When you activate the shortcut (which you can do through various means, including an icon on your home screen), Apple Maps figures out your current location and uses it to calculate how far you are from home. Then it passes that information to Messages, which then sends your estimated time of arrival to your selected contact.
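
To make the mechanics concrete, here’s a rough sketch in Swift of what a shortcut like Home ETA does under the hood. The function and its `sendMessage` step are hypothetical stand-ins; the real shortcut composes system actions visually rather than calling these APIs directly.

```swift
import CoreLocation
import MapKit

// A conceptual sketch of the Home ETA pipeline: current location -> route
// to a stored home address -> ETA handed off to a messaging step.
// `sendMessage` is a hypothetical stand-in for the Messages action.
func homeETA(home: CLLocationCoordinate2D,
             sendMessage: @escaping (String) -> Void) {
    let request = MKDirections.Request()
    request.source = MKMapItem.forCurrentLocation()      // step 1: where am I now?
    request.destination = MKMapItem(placemark: MKPlacemark(coordinate: home))
    request.transportType = .automobile

    MKDirections(request: request).calculateETA { response, _ in
        guard let eta = response?.expectedTravelTime else { return }
        let minutes = Int(eta / 60)                      // step 2: how far from home?
        sendMessage("Home in about \(minutes) minutes.") // step 3: notify my contact
    }
}
```

The point isn’t these particular APIs; it’s that Shortcuts lets non-programmers wire the same capabilities together without writing any of this.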

It’s not mind-blowing functionality, but the fact that iPhones and iPads can do this at all is impressive. iOS users can now create arbitrary connections between components of the system, opening up possibilities that were previously difficult or impossible. Shortcuts also promises to make these devices much better as productivity tools. It’s the old Unix “small pieces loosely joined” philosophy — but in a platform designed to be less of a computer than an appliance. It opens up level 3 possibilities for level 1 and 2 users, without asking that they become mechanics.

OneNote

You’re likely to run across lots of information during your day. Much of it is disposable, but some you’ll probably need to refer to in the future. A lot of it might be useful someday, but you just don’t know right now. Given how easy it is to search digital information, and how cheap storage is these days, you may as well keep it. I’ve long experimented with “digital junk drawer” applications for this use. I’ve tried Evernote, Yojimbo, Google Keep, Apple Notes, and Org Mode for Emacs, but my favorite thus far is Microsoft’s OneNote.

I keep a lot of stuff in OneNote: clips from web pages, quotes from famous people, impressions from books I’ve read, ideas for future presentations, meeting minutes, half-formed thoughts, etc. OneNote provides easy means to clip snippets of information from web pages and other apps whether I’m on my Mac, iPhone, or iPad. This makes it possible for me to keep a central repository of things I’m learning as I go about my day. It all syncs through Microsoft’s cloud, so all three devices have the latest information on them.

But OneNote is more than just a scrapbook for me: It’s also where I keep my projects organized. Whenever I start a new project, I open a new notebook in OneNote devoted exclusively to it. OneNote notebooks can have “sections” in them. Most of my projects have at least two sections: “Notes” (random notes, including scribbles to myself) and “Meetings,” where I record meeting minutes. Some notebooks also have other sections, such as “Admin” and “Research.” I aim for consistency with the naming and color schemes I use to differentiate these subjects. This allows me to quickly make sense of what I’m looking at when I switch projects.
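
The convention is simple enough to express as a tiny data model. Here’s a minimal sketch in Swift; the section names and colors are my own conventions, not anything OneNote prescribes, and the types are purely illustrative.

```swift
// A minimal, illustrative model of the notebook convention described above.
struct Section {
    let name: String
    let color: String   // consistent colors make sections recognizable at a glance
}

struct ProjectNotebook {
    let project: String
    var sections: [Section]
}

// Every new project starts from the same baseline sections...
let baseline = [Section(name: "Notes", color: "yellow"),
                Section(name: "Meetings", color: "blue")]

// ...and some projects add more, such as "Admin" or "Research".
var notebook = ProjectNotebook(project: "Example Project", sections: baseline)
notebook.sections.append(Section(name: "Research", color: "green"))
```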


Twitter and Third-party Apps

Yesterday, Twitter implemented significant changes to its APIs. As a result, the experience of accessing Twitter through third-party apps like Twitterrific and (my favorite) Tweetbot is now much worse. For example, one of my favorite Tweetbot features was its “Activity” tab, which gave me information about how people were interacting with me in Twitter. Now, it’s gone.

For me, this is not a trivial change. Twitter is my primary social network; I spend lots of time there. Or rather, I should say I spent time there. The change is making me rethink how much of my attention I apportion to this place. You see, it turns out I don’t like being in Twitter as much as I like being in Tweetbot. There are several reasons why.

To begin with, Tweetbot has native apps for both operating systems I use day-to-day (macOS and iOS). These apps are coherent (if not 100% consistent) across both platforms: I can easily move between one and the other. Twitter, on the other hand, has an iOS app but discontinued its first-party macOS app earlier this year. So accessing Twitter on the Mac means either using the twitter.com website or going through a third-party app like Tweetbot.

The timeline — the main component of the Twitter experience — is also significantly different between Tweetbot and Twitter. Whereas the former presents a simple chronological list of items, the latter scrambles the order of tweets based on what it deems to be interesting to me. Parsing out what I’m looking at (and why) is more work than I want to put into it.
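
The difference is easy to state in code. A toy sketch, with hypothetical types and a made-up relevance score, contrasting the two ordering models:

```swift
import Foundation

struct Tweet {
    let text: String
    let postedAt: Date
    let relevance: Double   // hypothetical score from an opaque ranking model
}

// Tweetbot's model: a simple reverse-chronological list.
func chronological(_ tweets: [Tweet]) -> [Tweet] {
    tweets.sorted { $0.postedAt > $1.postedAt }
}

// Twitter's model: order determined by whatever the system deems interesting.
func ranked(_ tweets: [Tweet]) -> [Tweet] {
    tweets.sorted { $0.relevance > $1.relevance }
}
```

In the first, position encodes time, so you always know why a tweet appears where it does; in the second, that explanation lives inside a model you can’t see.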

Another major difference between the two is that Tweetbot doesn’t show “promoted” tweets. (Read: ads.) That means that the posts I see are the ones I signed up for by following particular accounts, not ones that paid for the privilege of being brought to my attention. (I suspect that herein lies the primary driver behind the change to Twitter’s API; ads are how the company makes money.)

The bottom line: Twitter is a lot less compelling to me today than it was two days ago. I will probably be spending less time there. But where was it that I was spending my time? Am I a Twitter user or a Tweetbot user? The two have a lot in common, but they’re different information environments: the underlying information is the same, yet the experience of each environment is very different. I like being in Tweetbot, less so being in Twitter.

And let’s look at this from Twitter’s perspective: the company will probably notice that I’m spending less time there, but will this affect their revenue? After all, I didn’t see many ads while accessing their system through a third-party client. So I understand why they’d want alternate-reality versions of Twitter — like the one Tweetbot offered — to go away in the near term. But what does this mean for them in the long term, if it costs them loyal users like me?

Changing Your Personal Information Environment

Some people who do most of their work with computers also have some control over how that work is done. For example, as an independent information architect, I am my own IT department; I choose what tools I use. At this stage in my career, I’m proficient with most of them. Still, it’s important to occasionally look around for more efficient/effective ways of doing things.

Changing key components of your personal information environment is not something to undertake lightly. There are costs to doing so. The least onerous is the cost of the software itself; the big investment is in time spent learning new workflows and migrating to the new tool.

The various components of your personal information environment sit on a stack. At the bottom of the stack — the foundational layer — is your OS platform of choice. In my case, this is macOS. I’ve been using Macs for almost thirty years; changing to another platform (Windows, for example) would be tremendously costly.

Switching components higher up in the stack would be less onerous. For example, although I use Gmail for my email needs, I access it using Apple’s Mail.app. I could change mail clients fairly painlessly; I’d just need to point the new application to my Gmail accounts. Yes, I’d lose some functionality in the process (e.g., links to individual Mail.app messages from OmniFocus), but there’s not much work I’d need to do other than learn the new application. So if a new mail client comes along that is radically better than Mail.app, I’d be willing to give it a spin.

I’m currently testing an application that would replace one of the foundational layers of my information environment: OneNote. I’ve used OneNote as my note-taking and information-gathering system for many years. I have many dozens of notebooks in OneNote, and have internalized various workflows around this app. Changing this layer of my stack would come at a considerable cost.

Are big changes such as this one worth it? That depends on whether the new tool allows you to do important things that the old tool won’t, or allows you to do similar things significantly better/faster. For a switch from OneNote to be worth it, I’d need to see orders-of-magnitude improvements. Alas, it’s difficult to evaluate worthiness without extensive testing, and that in itself is a big time sink. That said, there are also significant opportunity costs to continuing to use a tool that may be less efficient/effective.

Making time to experiment with new components in your personal information environment can open up new possibilities; it can make you more efficient, and even give you new superpowers. But undertaking such changes is not something to be taken lightly, as it can come with significant costs. Sometimes, leaving well-enough alone is the wiser choice.

Design and Implementation Trade-offs

A couple of days ago I wrote about how important it is for designers to know their materials. The material for interaction designers is code, so a baseline understanding of what code can and can’t do is essential for designers to be effective.

I learned this principle in one of my favorite books: The Art of Computer Game Design, by Chris Crawford (Osborne/McGraw Hill, 1984). Crawford was one of the early Atari game designers/implementors. (I use the slash because the distinction wasn’t as clearly drawn then as it is now.) His book lists seven design precepts for computer games. The seventh of these is titled “Maintain Unity of Design Effort,” and includes the following passage:

Games must be designed, but computers are programmed. Both skills are rare and difficult to acquire, and their combination in one person is rarer still. For this reason, many people have attempted to form design teams consisting of a nontechnical game designer and a nonartistic programmer. This system would work if either programming or game design were a straightforward process requiring few judicious trade-offs. The fact is that both programming and game design are desperately difficult activities demanding many painful choices. Teaming the two experts is rather like handcuffing a pole-vaulter to a high jumper; the resultant disaster is the inevitable result of their conflicting styles.

More specifically, the designer/programmer team is bound to fail because the design will make unrealistic demands on the programmer while failing to recognize golden opportunities arising during programming.

Crawford illustrates this by using a couple of examples from his career. One that’s stuck with me comes from the development of the game EASTERN FRONT 1941, a war game for the early Atari 8-bit computers. While he was programming the game (which he’d also designed), Crawford spotted an opportunity: a simple addition to its calendar routines would allow color register values to change as game time progressed. This allowed the color of trees to change to reflect the seasons. A minor detail for sure, but one that added depth to the experience. (Keep in mind that programming for these early computers meant always optimizing for limited memory. This minor change came at the expense of only 24 bytes of computer memory; a “cost-effective improvement” in Crawford’s words.)
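
In modern terms, Crawford’s trick amounts to deriving a palette from the game calendar. Here’s a loose Swift sketch of the idea; the day boundaries and color values are illustrative guesses, and the original was a handful of color-register writes, not code like this.

```swift
// Map the game calendar to a season, then to a tree color.
// Day boundaries and RGB values are illustrative, not Crawford's.
enum Season { case winter, spring, summer, autumn }

func season(forDayOfYear day: Int) -> Season {
    switch day {
    case 80..<172:  return .spring
    case 172..<266: return .summer
    case 266..<355: return .autumn
    default:        return .winter
    }
}

func treeColor(for season: Season) -> (r: UInt8, g: UInt8, b: UInt8) {
    switch season {
    case .winter: return (205, 205, 215)  // bare and snowy
    case .spring: return (110, 185, 95)   // budding green
    case .summer: return (45, 140, 45)    // deep green
    case .autumn: return (190, 120, 45)   // browns and golds
    }
}
```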

Software development is much less painful today than it was in the late 1970s and early 1980s. Still, limited budgets and timeframes call for trade-offs. Knowing where the opportunities and constraints are helps when you’re called to decide what to include and exclude in the work.

Tools of the UX Trade

The tools we use when we design have an important influence on the work we produce. Conversely, sometimes the work we want to do can’t be carried out with the tools we have. This nudges us to either look to other fields for inspiration or invent new tools altogether.

As a child, the architect Frank Gehry was fascinated with fish. This fascination carried through to his work. In the 1980s, Gehry started producing fish-shaped lamps, and eventually won a contract to produce a large fish-shaped sculpture for the 1992 Olympic Games in Barcelona.

Sculpture by Frank Gehry, Barcelona (1992). Image by Till Niermann, CC BY-SA 3.0 via Wikimedia. (https://commons.m.wikimedia.org/wiki/File:Barcelona_Gehry_fish.jpg)

Gehry’s team needed to figure out how the fish would be built. Traditional architectural drawings are best when describing buildings composed of flat planes and volumes, but this structure’s undulating surfaces were anything but. The standard tools of the trade weren’t going to cut it.

One of Gehry’s collaborators suggested they look at a software tool called CATIA, which had been developed by the French aerospace firm Dassault Systèmes for designing aircraft. CATIA allowed Gehry’s team to delegate the complex calculations to computers, and made the fish structure a reality.

CATIA also opened new possibilities for the firm — and the field of architecture more broadly. Buildings such as the Bilbao Guggenheim Museum and the Walt Disney Concert Hall in Los Angeles wouldn’t have been possible to design and build using traditional tools. Introducing new tools into the mix made a new type of building possible, and the field of architecture hasn’t been the same since.

Walt Disney Concert Hall in L.A., by Frank Gehry. Image by Visitor7, CC BY-SA 3.0 via Wikimedia. (https://commons.m.wikimedia.org/wiki/File:Walt_Disney_Concert_Hall-1.jpg)

When I look at the tools UX designers use, I mostly see software aimed at designing screen-based user interfaces. Applications such as Photoshop, Illustrator, and Sketch are excellent at rendering forms on flat screens, but not much more than that. This constrains possibilities; as the cliché says, if all you have is a hammer, everything starts to look like a nail… and these apps are all hammers.

We also lack tools for exploring the semantic structures and relationships that underpin information environments. The closest we come is whiteboards and diagramming apps such as Visio and OmniGraffle. I’ve met many taxonomists whose primary tool is Excel: software designed for manipulating numbers!

There are clear gaps in this space. It’s surprising, given that the focus of UX design is often software itself. Why haven’t we produced tools suited to the needs of designing information environments? Is it a matter of the market not being big enough? Or do they exist and I’m just not aware of them? What tools from other fields could we adopt to meet our needs?

Screen Thinking

The tools you use influence how you think about your work. When all you have is a hammer, everything looks like a nail. Consider the tools available to designers of information environments. Here are four that are representative:

Creating a new artboard in Sketch:

New artboard in Sketch

Creating a new file in Illustrator:

New file in Illustrator

Creating a new file in Photoshop:

New file in Photoshop

Creating a new file in Adobe Comp:

New file in Comp

The assumption in these apps (and others like them) is that the object you’ll be working on is a screen. This is understandable; these are apps we use to work on the visual design of user interfaces. However, there’s much more to UX design than just what things will look like.

How do you express the connections between screens? Is it easy for you to explore alternative relationships between objects in the system? What tools do you use to work on the structural and conceptual models of information environments?