The iPad As a Travel Computer

Long flights are one of the few contexts where I’m disconnected from the internet for a long period. As a result, I’m often very productive in airplanes. Much of this work happens on my iPad Pro. The iPad is light and compact and has a long battery life. It’s a perfect computer for working on a seat tray. I’ve even grown to like typing on its keyboard cover. And once I’m done with work, the iPad also doubles as a great entertainment device. All told, it’s a great little travel computer.

However, there’s one caveat to working on the iPad while flying: it requires more planning than working on a regular laptop does. In particular, I must always remember to download everything I want to work on to the device before getting on the plane.

In some crucial ways, the iPad functions more like a phone than like a laptop. I have lots of files I can call up at any time on my laptop. If I’m working on a presentation and want to copy a slide from an older deck, I look for the document and open it. Not so on my iPad; older files are usually in one of the various cloud services (Dropbox, Google Drive, iCloud, etc.) rather than on the device itself. This isn’t a problem on the ground; my iPad has a cell radio that keeps me connected to the internet everywhere. Except for airplanes, of course.

On this particular flight, I was planning to work on the slides for my WIAD Switzerland workshop. After I’d boarded, I thought to double-check that I had all the files I needed, and — sure enough — I was missing three of them. These are relatively large files, with lots of images. I started downloading them as the airplane was taxiing. The process became a race against time: I watched the progress bars slowly near completion, download speeds fluctuating as the airplane moved around. The files finished downloading a few minutes before we took off; I got everything I needed and was able to work on the slides during the flight. Still, it was stressful.

There are many advantages to being device-independent. It’s great to be able to work anywhere using any one of various computers, phones, tablets, etc. If any one of them dies or is stolen, it won’t take my work with it. Being device-independent also means being able to work from the device that’s best suited to current conditions. That said, being device-independent also means being network-dependent. It’s easy to become complacent about network access when we’re in our home region. That dependency can impair our effectiveness when we don’t have good connectivity, such as when we travel.

Book Notes: “Creative Selection”

Creative Selection: Inside Apple’s Design Process During the Golden Age of Steve Jobs
By Ken Kocienda
St. Martin’s Press, 2018

Twenty-one years ago, Apple co-founder Steve Jobs returned to lead the company after over a decade of board-imposed exile. How he rescued Apple—which was ninety days away from bankruptcy at the time—has become the stuff of legend. The role of design in that resuscitation is central to the story. As a result, design has a much higher prominence in today’s business world than it did a couple of decades ago. Apple is very secretive about its internal processes; even a small glimpse into how the company goes about designing its products and services would be very valuable.

Creative Selection’s subtitle promises to reveal the company’s product design process. And not just any product, but the most important one in the company’s history: the iPhone. (The author is introduced on the cover as Former Principal Engineer of iPhone Software at Apple.)

Mr. Kocienda acknowledges early on that there is no codified approach to design inside Apple:

Continue reading

Managing Screen Time

One of the best features of the most recent version of iOS is called Screen Time. It allows you to monitor and control what you do with your mobile devices and when. For example, you can find out how much time you’re spending on social media apps and whether your usage is increasing or decreasing. You can also set limits for yourself on the device overall or on a per-app basis. And if you use multiple iOS devices (such as an iPad and an iPhone) you can configure Screen Time to show you your behavior across all of them.

To access Screen Time, you must open the device’s Settings app. (This feels a bit incongruous; although I understand it’s an OS-level feature, it seems like something that should stand apart from Settings. Anyway, I digress.) In the Settings app you’ll see an option for Screen Time:

If you tap on this menu item, you’ll be shown a screen that looks like this:

Continue reading

Developing a Mental Model of a System

In order to develop proficiency in a system, you must develop a mental model of how it works. This model must map to how the system is structured; you develop the model by interacting with the system. First impressions matter, but your understanding becomes more nuanced over time as you encounter different situations and conditions in the system. You also bring expectations to these interactions that influence your understanding. The degree to which your understanding becomes more accurate over time depends on how transparent the system is.

The Apple Watch serves as a good illustration. I’d never owned a smartwatch before buying mine, but I came to the experience of wearing a wrist-worn computer with expectations that were set by two devices that provided similar functionality: analog wristwatches and smartphones. From the former I brought assumptions about the Apple Watch’s timekeeping abilities and fit on the wrist, and from the latter expectations about a host of other features such as communication abilities, battery duration, legibility under various lighting conditions, how to access apps in the system, the fact there are apps at all, and so on.

In the first days after buying the Watch, I realized I had to adjust my model of how the device works. It wasn’t like my previous analog watch or my iPhone; some things were particular to this system that were very different from those other systems. For example, I had to learn a new way of launching apps. The starting point for most iPhone interactions is in a “home” screen that lists your apps. While the Watch also has a screen that lists your apps, that’s not where most interactions start; on the Watch, the starting point is your watch face. Watch faces can have “complications,” small widgets that show snippets of critical information. Tapping on a complication launches its related app. Thus, it makes sense to configure your favorite watch face with complications for the apps you use most frequently. This is a different conceptual model than the one offered by the analog watch or the smartphone.

After some time using the Apple Watch, I now understand how it is structured, and how it works — at least when it comes to telling time and using applications. But one aspect of the system still eludes me: which activities consume the most energy. For a small battery-powered computer like the Apple Watch, power management is crucial. Having your watch run out of power before your day is over can be annoying. This often happens to me, even after a few years of using this device. I’ve tried many things, but I still don’t know why some days end with 20% of battery left on the watch while others end with a dead watch before 5 pm. If the Apple Watch were more transparent in showing how it’s using power, I’d be better at managing its energy usage.

The tradeoff with making the system more transparent is that doing so can increase complexity for end users. I’m not sure I’d get more enjoyment from my Apple Watch if I knew how much energy each app was consuming. Designers abstract these things so that users don’t have to worry about them. As users, the best we can do is deduce causal relationships by trying different things. However, after three years of Apple Watch ownership, I still don’t understand how it manages power. The system is inscrutable to me. While this frustrates me, it’s not a deal breaker in the same way not grokking the system’s navigation would be. Not all parts of the system need to be understandable to the same degree.

The Allure of Novelty

It’s that time of year again: Tech companies are announcing new products in preparation for the holiday season. Over the past month, a slate of new phones, tablets, computers, and accessories have been announced. You may be considering buying one or more of these new devices. It’s worth thinking about whether or not you really need them.

As an Apple customer (and something of a gadget junkie), I’ve been intrigued by the new Apple Watch and the new iPad Pro. I already own earlier editions of both devices and was perfectly happy with them just a few months ago. But now I’m not. Now, when I look at my Apple Watch, I wonder: what if I could use it to play podcasts when I go running? What if its battery lasted the whole day? What if it was a little bit faster? What if… ? I know about the newer model, and can’t help but think about all the neat things it can do that mine can’t.

The iPad is a different story. While the new one looks quite nice, it’s not as clear to me how it would make my life better in ways the one I own can’t. Most of the new models’ features seem to be cosmetic: larger screens, smaller bezels, slightly different form factors, etc. Perhaps the new models are also a bit faster, but not in ways that would make much difference; my current iPad is plenty fast. The new Apple Pencil—the accessory I use most with the iPad—also looks much nicer than the old one, but seems functionally similar to the one I already own.

Would it be cool to have new devices for the holidays? Sure, it’d be fun. But it’s worth considering the tradeoffs that come with them. The most obvious, of course, is money. These things aren’t cheap! But there’s also the time they require: Time to research what to buy, time to set things up/migrate from older devices, time dealing with support if things go wrong. (I purchased a MacBook Pro earlier this year, and it’s already been back to Apple for service four times!) New tech can be quite a time sink.

How do you determine if the tradeoffs are worth it? For me, it comes down to figuring out whether I really need a new device or not. These questions help:

  • Does the new model enable me to do something I currently can’t?
  • Does the new model enable me to do something I can do now, but significantly faster or more efficiently?
  • Is there something I already own (or have access to) that could help me accomplish similar results, even if a little less conveniently?
  • Do I have the money/time to mess around with this stuff now? Or are there other things that require my money/attention with more urgency?
  • What do the new devices do worse than the old ones? (For example, there are a few things about the new iPads that work better in the model I currently own!)
  • Am I using my current devices to the fullest of their capacity?

Novelty can be very alluring, especially during this time of year when advertising is in full force. But when I reflect upon these questions, I often realize that I may be better served by keeping my current devices longer and investing my time and money in other things.

Does USB-C Turn iPad Pros Into “Real” Computers?

This week Apple announced updated versions of their MacBook Air, Mac mini, and iPad Pro products. As a Mac and iPad user, I’ve been following the news with interest. I’m particularly intrigued by how the iPad is starting to blur the line between mobile devices and computers.

Every year the iPad receives software and hardware updates that make it more capable, allowing it to take on many of the jobs previously performed by more traditional laptop computers. (I’ve written previously about this transition.) However, this year’s iPad models feature an interesting design choice that marks a milestone in this transition: they lose the Lightning port that’s been central to iOS devices (such as iPads and iPhones) for the past six years. The Lightning port is how you connect charging cables and other devices to iPads. In its stead, the new models feature a USB-C port like Mac laptops do.

I was excited when I read about this. My first thought was that I’d be able to pare down my dongle collection. I currently travel with an iPad Pro (which uses Lightning) and a MacBook Pro (which uses USB-C). I have various peripherals and charging cables for both devices. If both computers used the same port type, perhaps I’d be able to pare down my travel kit. More intriguingly, maybe USB-C would allow the iPad to connect to more peripherals such as external USB drives. The Lightning port on current iPads and iPhones is constrained in ways that close it off from using these types of devices.

However, the reports I’m seeing suggest Apple is constraining the new iPad Pro’s USB-C port in similar ways to the Lightning port. For example, while the new iPads will be able to use external displays more effectively than previous models, they still won’t be able to use external drives. So what’s the point of the switch? It’s not like Lightning is going away in the near-term; it’s still used on the other (non-pro) iPads, iPhones, and peripherals such as AirPods. Lightning is going to be around for a while. While it’s convenient to be able to share peripherals and charging cables with Macs, it may be even more convenient to do so with iPhones.

I sense that while there may indeed be practical engineering reasons for the change to USB-C on iPads, there’s also an ulterior motive: The new port helps set the iPad Pro apart from Apple’s other mobile devices as a “serious” computing device. It’s a sign that the iPad is no longer an oversized iPhone, but a device in a category of its own — one that’s getting ever closer to becoming a “real” productivity device for many users.

Mobile Computing at a Different Level

There are many ways in which people use computers. (I’m talking about all sorts of computers here, including smartphones and tablets.) Some people’s needs are very simple; they may use the machines merely to stay connected with their world. Other people have more complex needs. You can organize these various uses on a continuum that ranges from least to most powerful. Consider at least three levels:

  1. Accessing Content: Computer is used primarily to find information on the internet. Users at this level interact with others through email or social networks, but do so lightly. They spend the bulk of their on-device time accessing content created by others. Many casual users are on this level; it’s also where they have the least power.
  2. Creating Content: In addition to the previous level, computer is also used as a content-creation device. While users at this level may spend a considerable amount of time accessing content created by others, they also produce lots of content of their own. Many people who use computers professionally are on this level.
  3. Tweaking Workflows: In addition to the previous two levels, the computer is also used to modify how the computer works. This includes enabling new workflows through programming or scripting. This level affords most users the most power.
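To make level 3 concrete, here’s a small, hypothetical example of what “tweaking workflows” can look like in practice: a few lines of Python that batch-rename screenshots, automating a chore a level-1 or level-2 user would do by hand. The directory layout and naming scheme are invented for illustration.

```python
# A minimal level-3 sketch: a short script that automates a repetitive
# task. It renames every .png in a folder to include a chosen prefix.
# The folder and naming convention here are hypothetical.
import os

def add_prefix(directory, prefix):
    """Rename each .png file in `directory` to `prefix`_<old name>."""
    renamed = []
    for name in sorted(os.listdir(directory)):
        if name.endswith(".png"):
            new_name = f"{prefix}_{name}"
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new_name))
            renamed.append(new_name)
    return renamed
```

Nothing here is sophisticated; the point is that a few lines of scripting let you reshape how the machine works for you, rather than working only within what its apps offer.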

(There’s an even higher level, which is closer to the machine’s metal and affords a very small subset of people tremendous power. I’m not going to get into that here; I’m concerned with how most of us interact with these devices.)

Consider a transportation analogy. On level one, you are a passenger in public transport. On level two, you are driving a car. On level three, you are a mechanic, capable of making modifications to the vehicle to fix it or improve its performance. As with transportation, the higher the level, the more complexity the user must deal with.

Which level are you? If you’re like most people, you’re at either level 1 or 2. This is OK; very few people take advantage of level 3. Learning to program requires great effort, and for most uses the payoff may seem to not be worth the investment of time required.

I was around eight years old when I first interacted with a computer: a TRS-80 Model I. As with most machines of this vintage (late 1970s), when you sat down in front of a Model I you were greeted by a command prompt:

Image: https://picclick.com/TRS-80-Radio-Shack-Tandy-Model-1-Video-Display-323191180180.html

The computer could do very little on its own. You needed to give it commands, most often in the BASIC programming language. (Which incidentally just turned 50.) So level 3 was the baseline for using computers at this time. We’ve come a long way since then. Most computers are now like appliances; you don’t need to know much about how they work under the hood in order to take advantage of them. However, knowing even a little bit about how they work can grant you superpowers.

Level 3 has come a long way from the days of the TRS-80. I’ve been messing around with the new Shortcuts functionality in iOS 12, and am very impressed with how easy it is to string together several apps to accomplish new things. For example, the Home ETA shortcut strings together the Apple Maps, Contacts, and Messages apps. When you install the shortcut, you configure it with your home’s street address and the contact information of the person you want notified. When you activate the shortcut (which you can do through various means, including an icon on your home screen), Apple Maps figures out your current location and uses it to calculate how far you are from home. Then it passes that information to Messages, which then sends your estimated time of arrival to your selected contact.
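The Home ETA flow described above is essentially a pipeline: each step produces an output that becomes the next step’s input. Here’s an illustrative Python sketch of that shape — not Apple’s Shortcuts implementation. All function names and values are hypothetical stand-ins for the Maps, Contacts, and Messages actions.

```python
# Sketch of the Home ETA pipeline: the "shortcut" is just a composition
# of small steps. Every name and value below is a hypothetical stand-in.

def get_current_location():
    # Stand-in for the Maps "get current location" action.
    return (37.7749, -122.4194)

def estimate_travel_time(origin, home_address):
    # Stand-in for the Maps routing action; returns minutes to home.
    return 25

def send_message(contact, text):
    # Stand-in for the Messages "send message" action.
    return f"To {contact}: {text}"

def home_eta(home_address, contact):
    # The address and contact are configured once, at install time;
    # the steps below run each time the shortcut is activated.
    origin = get_current_location()
    minutes = estimate_travel_time(origin, home_address)
    return send_message(contact, f"I'm about {minutes} minutes from home.")
```

The interesting part isn’t any single step, but that the user — not a programmer — gets to decide how the steps are wired together.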

It’s not a mind-blowing functionality, but the fact that iPhones and iPads can do this at all is impressive. iOS users can now create arbitrary connections between components of the system, opening up possibilities that were previously difficult or impossible. Shortcuts promises to make these devices much better productivity tools. It’s the old Unix “small pieces loosely joined” philosophy — but in a platform designed to be less of a computer than an appliance. It opens up level 3 possibilities for level 1 and 2 users, without asking that they become mechanics.

Steve Jobs: The Lost Interview

Yesterday on a cross-country flight I had the opportunity to watch “Steve Jobs: The Lost Interview,” a documentary recorded in 1995 and released to theaters shortly after Jobs’s death in 2011. As its name implies, the film consists of an interview Robert X. Cringely conducted with Jobs for “The Triumph of the Nerds,” a PBS documentary about the development of the personal computer. Footage from the interview was lost for a while, but resurfaced after Jobs’s death.

The film shows Jobs at an interesting time in his life. This was before his triumphant return to Apple, which was then at its nadir. At this point, the company Jobs founded after leaving Apple (NeXT) had already transitioned from making computers to making software. It’s fascinating to see him frame this development; when talking about NeXT, he doesn’t mention the company’s computers at all. Instead, he talks about object-oriented programming as one of three major advances he witnessed in a visit to Xerox PARC in the late 1970s; the other two being ethernet networking and the graphical user interface. The latter of these, of course, is what led to the development of the Mac. In this way, Jobs ties his past success with his (then) current endeavor. Jobs is very clear on the lineage of these technologies; he doesn’t claim to have invented any of them. (At one point he even cites Picasso’s famous quote, “good artists copy; great artists steal.”)

Continue reading

It’s Not Complicated

When he introduced the iPhone 7 in 2016, Apple executive Phil Schiller described the company’s decision to remove the phone’s headphone jack as “courageous.” While some people mocked this assertion, Schiller’s point is valid: Apple often makes bold decisions and sticks by them even when they may be unpopular (as with the headphone jack.)

This courage doesn’t just come across in the design of Apple’s products and their features; it’s also sometimes evident in the language the company uses to describe them. Remember when the iPad was first announced? This was a time when Apple still had a product in their lineup called iPod; the name iPad lent itself to confusion. I remember stumbling at first when trying to talk about the device. Now the name iPad feels natural. Some people call all tablets iPads, even the ones produced by other manufacturers. It’s become the name of the form factor itself, not just the product. Apple pulled off a coup with that label, a testament to the power of their marketing.

More recently, the company has made another bold naming choice. I’m talking about the word they’ve chosen to describe how users add functionality to watch faces in the Apple Watch. You can’t just say “the space in the watch face where you can see the temperature.” Too clunky. At some point, a team at Apple had to discuss giving these things a name. The word they chose? Complications.

Continue reading