Managing Screen Time

One of the best features of the most recent version of iOS is called Screen Time. It allows you to monitor and control what you do with your mobile devices and when. For example, you can find out how much time you’re spending on social media apps and whether your usage is increasing or decreasing. You can also set limits for yourself on the device overall or on a per-app basis. And if you use multiple iOS devices (such as an iPad and an iPhone), you can configure Screen Time to show you your behavior across all of them.

To access Screen Time, you must open the device’s Settings app. (This placement feels a bit incongruous. Although I understand Screen Time is an OS-level feature, it seems like something that should stand apart from Settings. Anyway, I digress.) In the Settings app, you’ll see an option for Screen Time.

Tapping this menu item takes you to the feature’s main screen.

Developing a Mental Model of a System

To develop proficiency in a system, you must develop a mental model of how it works. This model must map to how the system is structured; you build the model by interacting with the system. First impressions matter, but your understanding becomes more nuanced over time as you encounter different situations and conditions in the system. You also bring expectations to these interactions that influence your understanding. The degree to which your understanding becomes more accurate over time depends on how transparent the system is.

The Apple Watch serves as a good illustration. I’d never owned a smartwatch before buying mine, but I came to the experience of wearing a wrist-worn computer with expectations set by two devices that provided similar functionality: analog wristwatches and smartphones. From the former I brought assumptions about the Apple Watch’s timekeeping abilities and fit on the wrist, and from the latter expectations about a host of other features: communication abilities, battery life, legibility under various lighting conditions, how to access apps in the system, the fact that there were apps at all, and so on.

In the first days after buying the Watch, I realized I had to adjust my model of how the device works. It wasn’t like my previous analog watch or my iPhone; some aspects of this system were very different from those other systems. For example, I had to learn a new way of launching apps. The starting point for most iPhone interactions is a “home” screen that lists your apps. While the Watch also has a screen that lists your apps, that’s not where most interactions start; on the Watch, the starting point is your watch face. Watch faces can have “complications,” small widgets that show snippets of critical information. Tapping on a complication launches its related app. Thus, it makes sense to configure your favorite watch face with complications for the apps you use most frequently. This is a different conceptual model than the one offered by the analog watch or the smartphone.

After some time with the Apple Watch, I now understand how it’s structured and how it works — at least when it comes to telling time and using applications. But one aspect of the system still eludes me: which activities consume the most energy. For a small battery-powered computer like the Apple Watch, power management is crucial. Having your watch run out of power before your day is over is annoying, and it often happens to me, even after a few years of using this device. I’ve tried many things, but I still don’t know why some days end with 20% of battery left on the watch while others end with a dead watch before 5 pm. If the Apple Watch were more transparent about how it uses power, I’d be better at managing its energy usage.

The tradeoff with making the system more transparent is that doing so can increase complexity for end users. I’m not sure I’d get more enjoyment from my Apple Watch if I knew how much energy each app was consuming. Designers abstract these things so that users don’t have to worry about them. As users, the best we can do is deduce causal relationships by trying different things. However, after three years of Apple Watch ownership, I still don’t understand how it manages power. The system is inscrutable to me. While this frustrates me, it’s not a deal breaker in the same way not grokking the system’s navigation would be. Not all parts of the system need to be understandable to the same degree.

The Allure of Novelty

It’s that time of year again: Tech companies are announcing new products in preparation for the holiday season. Over the past month, a slate of new phones, tablets, computers, and accessories has been announced. You may be considering buying one or more of these new devices. It’s worth thinking about whether you really need them.

As an Apple customer (and something of a gadget junkie), I’ve been intrigued by the new Apple Watch and the new iPad Pro. I already own earlier editions of both devices and was perfectly happy with them just a few months ago. But now I’m not. Now, when I look at my Apple Watch, I wonder: what if I could use it to play podcasts when I go running? What if its battery lasted the whole day? What if it were a little bit faster? What if… ? I know about the newer model and can’t help but think about all the neat things it can do that mine can’t.

The iPad is a different story. While the new one looks quite nice, it’s not as clear to me how it would make my life better in ways the one I own can’t. Most of the new models’ features seem to be cosmetic: larger screens, smaller bezels, slightly different form factors, etc. Perhaps the new models are also a bit faster, but not in ways that would make much difference; my current iPad is plenty fast. The new Apple Pencil—the accessory I use most with the iPad—also looks much nicer than the old one, but seems functionally similar to the one I already own.

Would it be cool to have new devices for the holidays? Sure, it’d be fun. But it’s worth considering the tradeoffs that come with them. The most obvious, of course, is money. These things aren’t cheap! But there’s also the time they require: Time to research what to buy, time to set things up/migrate from older devices, time dealing with support if things go wrong. (I purchased a MacBook Pro earlier this year, and it’s already been back to Apple for service four times!) New tech can be quite a time sink.

How do you determine if the tradeoffs are worth it? For me, it comes down to figuring out whether I really need a new device or not. These questions help:

  • Does the new model enable me to do something I currently can’t?
  • Does the new model enable me to do something I can do now, but significantly faster or more efficiently?
  • Is there something I already own (or have access to) that could help me accomplish similar results, even if a little less conveniently?
  • Do I have the money/time to mess around with this stuff now? Or are there other things that require my money/attention with more urgency?
  • What do the new devices do worse than the old ones? (For example, a few things about the new iPads work better in the model I currently own!)
  • Am I using my current devices to the fullest of their capacity?

Novelty can be very alluring, especially during this time of year when advertising is in full force. But when I reflect upon these questions, I often realize that I may be better served by keeping my current devices longer and investing my time and money in other things.

Does USB-C Turn iPad Pros Into “Real” Computers?

This week Apple announced updated versions of their MacBook Air, Mac mini, and iPad Pro products. As a Mac and iPad user, I’ve been following the news with interest. I’m particularly intrigued by how the iPad is starting to blur the line between mobile devices and computers.

Every year the iPad receives software and hardware updates that make it more capable, allowing it to take on many of the jobs previously performed by more traditional laptop computers. (I’ve written previously about this transition.) However, this year’s iPad models feature an interesting design choice that marks a milestone in this transition: they lose the Lightning port that’s been central to iOS devices (such as iPads and iPhones) for the past six years. The Lightning port is how you connect charging cables and other devices to iPads. In its stead, the new models sport a USB-C port, like Mac laptops do.

I was excited when I read about this. My first thought was that I’d be able to pare down my dongle collection. I currently travel with an iPad Pro (which uses Lightning) and a MacBook Pro (which uses USB-C). I have various peripherals and charging cables for both devices. If both computers used the same port type, perhaps I could simplify my travel kit. More intriguingly, maybe USB-C would allow the iPad to connect to more peripherals, such as external USB drives. The Lightning port on current iPads and iPhones is constrained in ways that close it off from using these types of devices.

However, the reports I’m seeing suggest Apple is constraining the new iPad Pro’s USB-C port in ways similar to the Lightning port. For example, while the new iPads will be able to use external displays more effectively than previous models, they still won’t be able to use external drives. So what’s the point of the switch? It’s not like Lightning is going away in the near term; it’s still used on the other (non-pro) iPads, iPhones, and peripherals such as AirPods. Lightning is going to be around for a while. While it’s convenient to be able to share peripherals and charging cables with Macs, it may be even more convenient to do so with iPhones.

I sense that while there may indeed be practical engineering reasons for the change to USB-C on iPads, there’s also an ulterior motive: The new port helps set the iPad Pro apart from Apple’s other mobile devices as a “serious” computing device. It’s a sign that the iPad is no longer an oversized iPhone, but a device in a category of its own — one that’s getting ever closer to becoming a “real” productivity device for many users.

Mobile Computing at a Different Level

There are many ways in which people use computers. (I’m talking about all sorts of computers here, including smartphones and tablets.) Some people’s needs are very simple; they may use the machines merely to stay connected with their world. Other people have more complex needs. You can organize these various uses on a continuum that ranges from least to most powerful. There are at least three levels:

  1. Accessing Content: The computer is used primarily to find information on the internet. Users at this level interact with others through email or social networks, but do so lightly. They spend the bulk of their on-device time accessing content created by others. Many casual users are on this level; it’s also where they have the least power.
  2. Creating Content: In addition to the uses above, the computer also serves as a content-creation device. While users at this level may spend a considerable amount of time accessing content created by others, they also produce lots of content of their own. Many people who use computers professionally are on this level.
  3. Tweaking Workflows: In addition to the previous two levels, the computer is also used to modify how the computer itself works. This includes enabling new workflows through programming or scripting. This level affords users the most power.

(There’s an even higher level, which is closer to the machine’s metal and affords a very small subset of people tremendous power. I’m not going to get into that here; I’m concerned with how most of us interact with these devices.)

Consider a transportation analogy. On level one, you are a passenger on public transport. On level two, you are driving a car. On level three, you are a mechanic, capable of making modifications to the vehicle to fix it or improve its performance. As with transportation, the higher the level, the more complexity the user must deal with.

Which level are you on? If you’re like most people, you’re at level 1 or 2. This is OK; very few people take advantage of level 3. Learning to program requires great effort, and for most uses the payoff may not seem worth the investment of time required.

I was around eight years old when I first interacted with a computer: a TRS-80 Model I. As with most machines of this vintage (late 1970s), when you sat down in front of a Model I you were greeted by a command prompt:

[Image: a TRS-80 Model I showing its command prompt (source: https://picclick.com/TRS-80-Radio-Shack-Tandy-Model-1-Video-Display-323191180180.html)]

The computer could do very little on its own. You needed to give it commands, most often in the BASIC programming language (which, incidentally, just turned 50). So level 3 was the baseline for using computers at the time. We’ve come a long way since then. Most computers are now like appliances; you don’t need to know much about how they work under the hood in order to take advantage of them. However, knowing even a little bit about how they work can grant you superpowers.

Level 3 has come a long way from the days of the TRS-80. I’ve been messing around with the new Shortcuts functionality in iOS 12, and I’m very impressed with how easy it is to string together several apps to accomplish new things. For example, the Home ETA shortcut strings together the Apple Maps, Contacts, and Messages apps. When you install the shortcut, you configure it with your home’s street address and the contact information of the person you want notified. When you activate the shortcut (which you can do through various means, including an icon on your home screen), Apple Maps figures out your current location and uses it to calculate how far you are from home. Then it passes that information to Messages, which sends your estimated time of arrival to your selected contact.
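
For a sense of what such a workflow does under the hood, here’s a minimal Swift sketch of the Home ETA idea using MapKit’s directions API. The home coordinates and the message text are hypothetical placeholders; the actual shortcut wires these steps together visually, without any code.

```swift
import MapKit

// A rough sketch of the "Home ETA" flow: estimate travel time from the
// user's current location to home, then compose a message with the ETA.
// The coordinates below are placeholders, not a real address.
func composeHomeETAMessage() {
    let request = MKDirections.Request()
    request.source = MKMapItem.forCurrentLocation() // requires location permission
    let home = CLLocationCoordinate2D(latitude: 37.7749, longitude: -122.4194)
    request.destination = MKMapItem(placemark: MKPlacemark(coordinate: home))
    request.transportType = .automobile

    // Ask MapKit for an estimated travel time to the destination.
    MKDirections(request: request).calculateETA { response, _ in
        guard let travelTime = response?.expectedTravelTime else { return }
        let arrival = Date().addingTimeInterval(travelTime)
        let formatter = DateFormatter()
        formatter.timeStyle = .short
        // The real shortcut hands a string like this to Messages;
        // here we simply construct it.
        print("On my way! I should be home around \(formatter.string(from: arrival)).")
    }
}
```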

It’s not mind-blowing functionality, but the fact that iPhones and iPads can do this at all is impressive. iOS users can now create arbitrary connections between components of the system, opening up possibilities that were previously difficult or impossible. Shortcuts also promises to make these devices much better as productivity tools. It’s the old Unix “small pieces loosely joined” philosophy — but in a platform designed to be less of a computer than an appliance. It opens up level 3 possibilities for level 1 and 2 users, without asking that they become mechanics.

Steve Jobs: The Lost Interview

Yesterday on a cross-country flight I had the opportunity to watch STEVE JOBS: THE LOST INTERVIEW, a documentary recorded in 1995 and released to theaters shortly after Jobs’s death in 2011. As its name implies, the film consists of an interview Robert X. Cringely conducted with Jobs for THE TRIUMPH OF THE NERDS, a PBS documentary about the development of the personal computer. Footage from the interview was lost for a while, but resurfaced after Jobs’s death.

The film shows Jobs at an interesting time in his life. This was before his triumphant return to Apple, which was then at its nadir. At this point, the company Jobs founded after leaving Apple (NeXT) had already transitioned from making computers to making software. It’s fascinating to see him frame this development; when talking about NeXT, he doesn’t mention the company’s computers at all. Instead, he talks about object-oriented programming as one of three major advances he witnessed in a visit to Xerox PARC in the late 1970s, the other two being Ethernet networking and the graphical user interface. The latter, of course, is what led to the development of the Mac. In this way, Jobs ties his past success to his (then) current endeavor. Jobs is very clear on the lineage of these technologies; he doesn’t claim to have invented any of them. (At one point he even cites Picasso’s famous quote: “good artists copy; great artists steal.”)

It’s Not Complicated

When he introduced the iPhone 7 in 2016, Apple executive Phil Schiller described the company’s decision to remove the phone’s headphone jack as “courageous.” While some people mocked this assertion, Schiller’s point is valid: Apple often makes bold decisions and sticks by them even when they may be unpopular (as with the headphone jack).

This courage doesn’t just come across in the design of Apple’s products and their features; it’s also sometimes evident in the language the company uses to describe them. Remember when the iPad was first announced? This was a time when Apple still had a product in their lineup called iPod; the name iPad lent itself to confusion. I remember stumbling at first when trying to talk about the device. Now the name iPad feels natural. Some people call all tablets iPads, even the ones produced by other manufacturers. It’s become the name of the form factor itself, not just the product. Apple pulled off a coup with that label, a testament to the power of their marketing.

More recently, the company has made another bold naming choice. I’m talking about the word they’ve chosen to describe how users add functionality to watch faces in the Apple Watch. You can’t just say “the space in the watch face where you can see the temperature.” Too clunky. At some point, a team at Apple had to discuss giving these things a name. The word they chose? Complications.

The Tricorder on Your Wrist

I bought the first generation Apple Watch (colloquially known as the “Series 0”) when it came out. Doing so was a measured leap of faith; it wasn’t entirely clear to me at the time what the Watch was for. Most of its features were things I could already do with my iPhone, albeit a bit less conveniently. Track my runs? Check. Show notifications? Check. Play music? Check. Tell the time? Check. Then there was the inconvenience of having another device to charge and the expense of periodic hardware upgrades.

Still, as a digital designer and strategist, I need to stay up to date on form factors and technologies. I also trust Apple. So I bought the watch and went all in, using it daily to track my activity. Although I’ve grown to really like the Apple Watch, I haven’t seen it as an essential part of my everyday carry kit like the iPhone is. I can easily make it through a day without my watch, which is not something I can readily say about my phone.

To Apple’s credit, they’ve improved the product tremendously over the past three years, sometimes by making major changes to fundamental interactions in its operating system, which was somewhat awkward at launch. Even though it’s rather slow now, and its battery doesn’t last as long as it used to, my Watch is better today than when I bought it. (A notable example: I use it dozens of times every day to automatically log into my Mac, a real time saver.) Apple has also released subsequent iterations of the hardware that have added significant improvements such as GPS tracking and a cellular radio. Still, I’ve resisted the impulse to upgrade. The Watch is not an inexpensive purchase (I prefer the stainless steel models), and as I said above, I haven’t thought of it as indispensable.

Controlling Screen Time

Yesterday Apple publicly presented the 2018 updates to its operating systems. As happens every year, we got a glimpse of many new software features coming to Macs, iPads, Apple Watches, Apple TVs, and iPhones. One feature coming to iOS — the system that runs iPhones and iPads — stands out not because of what it allows us to do with our devices, but because of what it doesn’t allow: consuming our time mindlessly with them.

The new feature, called Screen Time, allows users to examine the time they’ve spent using apps and websites, and to set constraints on that time. For example, somebody could decide she wants to spend a maximum of thirty minutes every day using the Instagram app on her phone. The phone would keep track of the time she spends on the app, notify her when she approaches her limit, and ultimately turn off access to the app altogether once she exceeds her allotted time. She could do this not just for herself, but also for her kids.
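
To make the mechanics concrete, here’s a toy Swift sketch of that kind of per-app daily allowance. The app name, allowance, and warning threshold are all illustrative; this is a model of the behavior described above, not Apple’s implementation.

```swift
import Foundation

// A toy per-app daily limit: track usage, warn when the limit is near,
// and block once the allowance is exhausted.
struct AppLimit {
    let appName: String
    let dailyAllowance: TimeInterval // seconds allowed per day
    var usedToday: TimeInterval = 0

    mutating func record(usage seconds: TimeInterval) -> String {
        usedToday += seconds
        let remaining = dailyAllowance - usedToday
        switch remaining {
        case ..<0:
            return "\(appName) is blocked for the rest of the day."
        case ..<(5 * 60): // warn when fewer than five minutes remain
            return "Less than 5 minutes left on \(appName) today."
        default:
            return "\(appName): \(Int(remaining / 60)) minutes remaining today."
        }
    }
}

var instagram = AppLimit(appName: "Instagram", dailyAllowance: 30 * 60)
print(instagram.record(usage: 26 * 60)) // "Less than 5 minutes left on Instagram today."
print(instagram.record(usage: 10 * 60)) // "Instagram is blocked for the rest of the day."
```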

Apple is not the first to do this; Google has announced similar features for Android as part of its Digital Wellbeing program, and there are also third-party apps that accomplish similar goals. That said, Apple’s announcement is significant because of the company’s cultural pull and the prominence they’re giving this feature in their flagship OS.

Three thoughts come to mind right away. The first is that the existence of this feature is an acknowledgment that something is not right with the way we’re currently using our devices. The time you spend engaged with information environments comes at the expense of the time you spend engaged in your physical environment. When companies compete with each other for your attention, and you have a device with you that gives you instant access to all of them at any time, a race ensues in which you and your loved ones lose. By establishing “digital wellbeing” and “digital health” (Apple’s phrase) programs, the operating system vendors are admitting that this has become a problem.

The second thought is that as platform vendors, neither Google nor Apple can directly control the form of the information environments their systems host; what they can control is the amount of time users can spend in those environments. You can think of the OS vendors as managing cities. Formerly, the city’s spaces — parks, buildings — were open 24×7, but now they can have operating hours. This is especially useful when some of the buildings contain casinos; some folks need a nudge to go home and sleep once in a while.

The third thought is that the OS vendors are giving users the tools to examine their behavior in these environments and the power to define their operating hours for themselves. This gives us as individuals the ability to engage more consciously with the information environments where we spend our time. I hope the push towards providing us more control over our attention will help steer companies away from business models that drive us towards continuous engagement.

I see the development of the platform vendors’ digital wellbeing initiatives as an encouraging sign. That said, it doesn’t relieve the organizations that design and produce websites and apps from the responsibility of ensuring those environments support the needs and aspirations of their users and society at large. Ideally the most addictive of these digital places will now look for ways to better align their business goals with the goals of their users.