Does USB-C Turn iPad Pros Into “Real” Computers?

This week Apple announced updated versions of their MacBook Air, Mac mini, and iPad Pro products. As a Mac and iPad user, I’ve been following the news with interest. I’m particularly intrigued by how the iPad is starting to blur the line between mobile devices and computers.

Every year the iPad receives software and hardware updates that make it more capable, allowing it to take on many of the jobs previously performed by more traditional laptop computers. (I’ve written previously about this transition.) However, this year’s iPad models feature an interesting design choice that marks a milestone in this transition: they lose the Lightning port that’s been central to iOS devices (such as iPads and iPhones) for the past six years. The Lightning port is how you connect charging cables and other devices to iPads. In its stead, the new models feature a USB-C port like Mac laptops do.

I was excited when I read about this. My first thought was that I’d be able to pare down my dongle collection. I currently travel with an iPad Pro (which uses Lightning) and a MacBook Pro (which uses USB-C), and I have various peripherals and charging cables for both devices. If both computers used the same port type, perhaps I’d be able to simplify my travel kit. More intriguingly, maybe USB-C would allow the iPad to connect to more peripherals, such as external USB drives. The Lightning port on current iPads and iPhones is constrained in ways that close it off from these types of devices.

However, the reports I’m seeing suggest Apple is constraining the new iPad Pro’s USB-C port in similar ways to the Lightning port. For example, while the new iPads will be able to use external displays more effectively than previous models, they still won’t be able to use external drives. So what’s the point of the switch? It’s not like Lightning is going away in the near term; it’s still used on the other (non-Pro) iPads, iPhones, and peripherals such as AirPods. Lightning is going to be around for a while. While it’s convenient to be able to share peripherals and charging cables with Macs, it may be even more convenient to do so with iPhones.

I sense that while there may indeed be practical engineering reasons for the change to USB-C on iPads, there’s also an ulterior motive: The new port helps set the iPad Pro apart from Apple’s other mobile devices as a “serious” computing device. It’s a sign that the iPad is no longer an oversized iPhone, but a device in a category of its own — one that’s getting ever closer to becoming a “real” productivity device for many users.

Innovation and the Test of Time

Innovations are important; they generate growth, increase productivity, and improve our lives. Many emergent technologies give us incredible new abilities. Because of this, we apportion much attention (and money!) to innovative products. However, this focus on innovation can lead us to discount things that work well and have done so for a long time. New things often come with tradeoffs — especially when a technology is very new. If we focus exclusively on the new abilities they give us, we’ll be more willing to overlook their downsides and discount the value of older things that do less — but often do it better.

For example, an Apple Watch does things a mechanical watch can’t: It can show you notifications, receive and make phone calls, keep track of workouts, unlock your computer, and alert emergency services if you fall. Those are all superpowers that can make your life better. However, they come with tradeoffs. For one thing, it’s not entirely clear some of them do make your life better. An always-on device that shows you notifications may keep you from focusing. For another, an Apple Watch isn’t really “always on”; the battery on mine barely makes it past 9 pm on most days. It’s another device to keep charged, another cable to keep track of.



One of the most frequent objections I hear about approaching design work more architecturally is that architecture is “top-down.” By this, my interlocutor usually means that architects come to problems with a prescribed solution that they impose onto the situation, in contrast to a solution that emerges more fluidly from an understanding of the context and people served by the thing being designed.

It’s understandable that they’d come to this conclusion since many of the famous architects people know about produce work that doesn’t look intuitive or contextually relevant. It’s hard to see, for example, how Frank Gehry’s Guggenheim Museum in Bilbao is the result of a user-centered design approach. The worst offender here is perhaps Le Corbusier, whose urban Plan Voisin for Paris would’ve razed large portions of the city in exchange for a de-humanizing grid of skyscrapers.


How to Compromise a Product Vision

Great products start with a vision. Somebody — perhaps a small group of people — has an idea to change how something works in the world. On its way to becoming a real thing, the team tweaks and adjusts the idea; they make small compromises to the laws of physics, market demands, manufacturing constraints, user feedback, and so on. In the process, the idea goes from a “perfect” imagining of the vision to a pretty good embodiment that can be used by people in the real world.

At least that’s the ideal. However, sometimes a product changes so much that its original vision becomes compromised. One of the best examples I’ve seen of this happened to one of the attractions in the Magic Kingdom theme park at Walt Disney World: Walt Disney’s Carousel of Progress. This is one of the few Disney attractions that have Walt’s name on them. There’s a good reason for this. The Carousel was the highest expression of his particular genius: using new technologies to convey big ideas to the masses in ways that they could connect to at an emotional level. Some people say it was his favorite attraction.


Making the Place Your Own

Think about the place where you live. Is it a house? An apartment? A room in a dormitory? Wherever it is, the odds are high that you live inside a structure designed by someone for that purpose. By “that purpose,” I mean being inhabited by people — generic people, not just you as an individual. (While some individuals can afford to have their living places designed just for them, from the ground up, this is not the norm. Most of us live in buildings that were designed for somebody else or nobody at all; for “people,” in general.)

These structures include distinct spaces. Some, like the toilets and kitchen, are prescriptive: they’re designed to accommodate specific uses. We may do other things in these spaces, but they were designed with a primary use in mind; they satisfy broad needs you share with other people in your culture. Other spaces in the house are more generic. For example, a garage can serve as storage for a car, a space for writing, or the birthplace of the world’s most valuable company.

When you move into a house or apartment, you begin a gradual process of making this generic environment your own. At first, the place is still unfamiliar. You may wonder, “Where was it that I put the cutlery?” You open several drawers… “Ah yes, there it is!” Little by little, you find places to store your stuff, bring in furniture and arrange it in ways that suit you, hang art on the walls, etc. You customize the environment, adapting its structures to your needs. Eventually, you don’t need to look for the cutlery — you just know where it is. The place becomes familiar, expectable, usable, perhaps even a little boring.


Not an Optimist

I’ve written earlier about the importance of being optimistic. I still think it’s important to have an optimistic outlook. However, I recently came across a description that better captures the position I aspire to. It’s in Hans Rosling’s great book Factfulness: Ten Reasons We’re Wrong About the World — and Why Things Are Better Than You Think:

“I’m not an optimist. That makes me sound naïve. I’m a very serious ‘possibilist.’ That’s something I made up. It means someone who neither hopes without reason, nor fears without reason, someone who constantly resists the overdramatic worldview. As a possibilist, I see all this progress, and it fills me with conviction and hope that further progress is possible. This is not optimistic. It is having a clear and reasonable idea about how things are. It is having a worldview that is constructive and useful.”

I like the distinction Rosling draws between optimism and “possibilism.” For many people, optimism implies keeping a sunny disposition in spite of (or in ignorance of) the facts. That’s not healthy. What we want is a “clear and reasonable idea of how things are.” Seeing clearly, free from distortions.

The progress Rosling is alluding to is the subject of the book: factual data that shows how, overall, things have been getting better for humanity and for the world over time. If this sounds counter-intuitive, it’s because of several cognitive biases that affect how we understand reality (and which Rosling skillfully dismantles).

Our effectiveness as designers (or citizens, or co-workers, or parents, or…) requires that we understand these biases and how they influence how we perceive things (and therefore, how we act). How can you propose any kind of intervention into a system or situation when you don’t yet have a “clear and reasonable” understanding of it?

That’s why research is so important to design. But research is not enough. If you’re trying to see something very small, very large, or very distant clearly, you must have the right instruments, and have them in proper working order. But much also depends on whether you know which instruments are called for to begin with, how to configure them, how to point them in the right direction, and what the “right” direction is. This requires that you be in proper working order — among other things, free from an “overdramatic worldview.”

Working With Ambiguity

Design requires comfort with ambiguity: making progress even when requirements are unclear, uncertain, or unspecified. Good designers are unfazed by lack of clarity, without being foolhardy. They understand that their job is to make the possible tangible. If possibilities were already evident, there would be no need for their help; others would simply make the thing.

But possibilities are never definite. Nobody has perfect clairvoyance. Stakeholders discuss the new thing conceptually, but what will it actually be? They don’t know. Yes, it’ll be a user interface for a new medical imaging system. But that statement is an abstraction. There are hundreds — if not thousands — of decisions to be made before such a thing is concrete enough to be built. Making those decisions is part of the designer’s remit.

Not that they’re solely the designer’s responsibility; stakeholders must ultimately decide whether or not the designer’s choices meet requirements. (The logo may indeed need to be bigger.) Articulating the concept with artifacts that help stakeholders understand what they’re actually talking about is, by definition, an act of reducing ambiguity.

Making sense of ambiguous situations requires having the right attitude. It calls for self-confidence, playfulness, and entrepreneurial drive. Although these traits can be improved, they come more naturally to some designers than others. Some folks are less willing than others to be made vulnerable.

That said, working successfully with ambiguity is not just about attitude; context also plays an important part. The problem with uncertainty is that you may get things wrong; the thing you produce may be partially (or wholly) inadequate. Time is lost. Money is lost. What then? What are the consequences?

Some project environments are more tolerant of mistakes than others. Because they’re the ones making things tangible and they often lack political power in their organizations, designers can easily become scapegoats for bad directions. Environments that punish mistakes will make exploration difficult.

Some problem domains also lend themselves more to making mistakes than others. The consequences of failing to capture the essence of a new brand are different than the consequences of failing to keep a bridge upright. It’s more challenging to deal with ambiguity when designing high-stakes systems, such as those that put lives at risk.

Ultimately, design calls for working with ambiguity. This requires a combination of the right attitude within the right context. When considering your work, how easy is it for you to deal with unclear or uncertain directions? What are the consequences of getting things wrong? And more importantly, what can you do about these things?

The End of Engagement

Mobile operating system vendors are starting to give us the ability to become more aware of (and limit) the time we spend using our devices. For example, the Screen Time feature in Apple’s iOS 12 will make it possible for users of iPhones and iPads to define how long they want to spend using specific apps or entire app categories.

If adopted widely, these capabilities will impact the way many information environments are designed. Today, many apps and websites are structured to increase the engagement of their users. This is especially true of environments that are supported by advertising, since more time spent in them translates directly into more exposure, and hence more money.

The novelty of always-connected supercomputers in our pockets has fostered a cavalier attitude towards how we apportion our attention when in the presence of these things. The time we spend online has more than doubled over the past decade.

As digital designers, we have the responsibility to question the desirability of using engagement as the primary measure of success for our information environments. While it may be appropriate for some cases, engagement is overused today. This is because engagement is easy to measure, easy to design for, and in many cases (such as advertising) it translates directly to higher revenues.

But the drive towards user engagement is a losing proposition. It’s a zero-sum game; you have a limited amount of time in the day — and ultimately, in your life as a whole. Whatever time you spend in one app will come at the expense of time spent engaging with other apps — or worse, spent engaging with other people in your life. Google and Apple’s “digital wellbeing” and “digital health” initiatives are an admission that this has become an issue for many people. With time, we will become more sophisticated about the tradeoffs we’re making when we enter these environments.

So if not engagement, what should we be designing for? My drive is towards designing for alignment between the goals of the user, the organization, and society. When your goals are aligned with the goals your environment is designed to support, you will be more willing to devote your precious time to it. You will enter the environment consciously, do what you need to do there, and then move on to something else. You’ll aim for “quality time” in the environment, rather than the information benders that are the norm today.

Designing for alignment is both subtler and more difficult than designing for engagement. It’s not as easy to measure progress or ROI on alignment. It also requires a deeper understanding of people’s motivations and having a clear perspective on how our business can contribute to social well-being. It’s a challenge that requires that we take design to another level at a time when design is just beginning to hit its stride within organizations. But we must do it. Only through alignment can we create the conditions that produce sustainable value for everyone in the long term.

Controlling Screen Time

Yesterday Apple presented in public the 2018 updates of its operating systems. As happens every year, we got a glimpse of many new software features coming to the Mac, iPads, Apple Watches, Apple TVs, and iPhones. One feature coming to iOS — the system that runs iPhones and iPads — stands out not because of what it allows us to do with our devices, but because of what it doesn’t allow: consuming our time mindlessly with them.

The new feature, called Screen Time, allows users to examine the time they’ve spent using apps and websites, and set constraints on that time. For example, somebody could decide she only wanted to spend a maximum of thirty minutes every day using the Instagram app on her phone. The phone would keep track of the time she spends on the app, notify her when she was approaching her limit, and ultimately turn off access to the app altogether when she exceeded her allotted time. She could do this not just for herself, but also for her kids.
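The track-warn-block behavior described above can be sketched in a few lines of code. This is purely a hypothetical illustration of the mechanics — the class, its names, and the 90% warning threshold are my own inventions, not Apple’s actual implementation or API:

```python
from dataclasses import dataclass

@dataclass
class AppLimiter:
    """Hypothetical sketch of a per-app daily time limit (not Apple's API)."""
    limit_minutes: float
    warn_at_fraction: float = 0.9  # assumed: notify at 90% of the limit
    used_minutes: float = 0.0

    def record_usage(self, minutes: float) -> str:
        """Accumulate usage and report the resulting state."""
        self.used_minutes += minutes
        if self.used_minutes >= self.limit_minutes:
            return "blocked"  # limit exceeded: access to the app is turned off
        if self.used_minutes >= self.limit_minutes * self.warn_at_fraction:
            return "warn"     # user is notified they're approaching the limit
        return "ok"

# Example: a 30-minute daily limit, as in the Instagram scenario above.
limiter = AppLimiter(limit_minutes=30)
print(limiter.record_usage(20))  # "ok" (20 of 30 minutes used)
print(limiter.record_usage(8))   # "warn" (28 of 30 minutes used)
print(limiter.record_usage(5))   # "blocked" (33 minutes exceeds the limit)
```

The real feature, of course, layers accounts, per-category limits, and parental controls on top of this basic accounting, but the core loop is just this: accumulate, compare against a budget, and change state at the thresholds.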

Apple is not the first to do this; Google has announced similar features for Android as part of its Digital Wellbeing program, and there are also third-party apps that accomplish similar goals. That said, Apple’s announcement is significant because of the company’s cultural pull and the prominence they’re giving this feature in their flagship OS.

Three thoughts come to mind right away. The first is that the existence of this feature is an acknowledgment that something is not right with the way we’re currently using our devices. The time you spend engaged with information environments comes at the expense of the time you spend engaged in your physical environment. When companies compete with each other for your attention, and you have a device with you that gives you instant access to all of them at any time, a race ensues in which you and your loved ones lose. By establishing “digital wellbeing” and “digital health” (Apple’s phrase) programs, the operating system vendors are admitting that this has become a problem.

The second thought is that as platform vendors, neither Google nor Apple can directly control the form of the information environments their systems host; what they can control is the amount of time users can spend in those environments. You can think of the OS vendors as managing cities. Formerly, the city’s spaces — parks, buildings — were open 24×7, but now they can have operating hours. This is especially useful when some of the buildings contain casinos; some folks need a nudge to go home and sleep once in a while.

The third thought is that the OS vendors are giving users the tools to examine their behavior in these environments and the power to define their operating hours for themselves. This gives us as individuals the ability to engage more consciously with the information environments where we spend our time. I hope the push towards providing us more control over our attention will help steer companies away from business models that drive us towards continuous engagement.

I see the development of the platform vendors’ digital wellbeing initiatives as an encouraging sign. That said, it doesn’t relieve the organizations that design and produce websites and apps from the responsibility of ensuring those environments support the needs and aspirations of their users and society at large. Ideally the most addictive of these digital places will now look for ways to better align their business goals with the goals of their users.