Alternatives to 32-bit Apps on macOS Catalina

When a new version of macOS comes out, I usually upgrade my computer relatively soon. I like having access to the latest features, and significant macOS release upgrades are generally trouble-free. That hasn’t been the case with the newest version, Catalina. The trouble stems from the fact that Catalina doesn’t run 32-bit applications. While most major software is now 64-bit, there are still some stragglers — especially legacy apps and drivers that haven’t been (and likely won’t be) upgraded.

That’s why I waited longer than usual before upgrading to Catalina: there was one application on my system that was still 32-bit, the driver for my Fujitsu ScanSnap S300M scanner. I knew this driver was incompatible because every time I launched it (under Mojave, the previous version of macOS), I’d get a warning saying that the app would not run in the future. (Here’s a way to learn which apps won’t work: under the Apple menu, go to About This Mac > System Report… > Legacy Software.)
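If you’d rather poke at a specific app directly, you can also classify its executable by hand. Here’s a minimal Python sketch (my own illustration, not an Apple tool) that reads a binary’s Mach-O magic number; “fat” multi-architecture binaries would need further parsing to see which slices they actually contain:

```python
import struct

# Mach-O magic numbers (the first 4 bytes of the executable file),
# covering both byte orders as they appear on disk.
MACHO_32 = {0xFEEDFACE, 0xCEFAEDFE}  # 32-bit
MACHO_64 = {0xFEEDFACF, 0xCFFAEDFE}  # 64-bit
FAT = {0xCAFEBABE, 0xBEBAFECA}       # multi-architecture ("fat") binary

def macho_kind(path):
    """Classify a binary by its Mach-O magic number.
    Fat binaries may hold both 32- and 64-bit slices; inspecting
    their contents takes more parsing than this sketch attempts."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack(">I", f.read(4))
    if magic in MACHO_32:
        return "32-bit"
    if magic in MACHO_64:
        return "64-bit"
    if magic in FAT:
        return "fat (multiple architectures)"
    return "not a Mach-O binary"

# Example (hypothetical path):
# macho_kind("/Applications/Example.app/Contents/MacOS/Example")
```

The Legacy Software report is the authoritative list; this is just a quick way to check one file.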

Without this driver, the scanner is useless — even though the hardware is perfectly functional. This device is an important part of my workflow; I use it every other week to digitize most of my paper documents and correspondence. Fujitsu no longer sells this model and has no plans to release 64-bit drivers. So I was stuck. I had two choices: I could hold off on upgrading the operating system (for a while), or I could buy a new scanner. I didn’t like either option. Sooner or later, I’d have to upgrade the OS. And as I said, the scanner itself was in perfect condition; I didn’t need a new one. What to do?

It turns out there was a third option: look for an alternative driver. I found a third-party application called VueScan that works with a range of scanners, including the S300M. It’s been working well for me; the only downside is that it’s a bit slower than Fujitsu’s driver. But given my use of the scanner, it’s not slow enough to merit buying a new device.

Thus far, Catalina has been great. I’m especially enjoying the new Sidecar feature, which allows me to use my iPad as a second screen when I’m on the go. So far, everything is working for me — including my old scanner. The lesson: if you’re contemplating upgrading to Catalina, but are holding back because of legacy software on your system, consider looking for alternatives.

On Google Reader

Yesterday, I tweeted about missing Google Reader.

The tweet touched a nerve; lots of folks have chimed in, mostly agreeing with the sentiment or recommending substitutes.

To be clear, I still read RSS feeds every day. (I use Reeder on the Mac and iOS and sync my feeds using Feedly.) Although I’m open to exploring alternatives, I’m not unsatisfied with my current arrangement. (Ringing endorsement!) So I’m mostly not lamenting the loss of Google Reader’s functionality. Instead, I miss what Google Reader represented: a major technology company supporting a truly decentralized publishing platform.

Google’s brand imparted some degree of credibility to an emergent ecosystem. I suspect a nontrivial number of people must’ve tried RSS feeds because Google provided a tool to read them. It’s great that tools like Feedly, Reeder, Feedbin, NetNewsWire, etc. exist, but none of them have the broad appeal or brand power that Google does.

I said I’m “mostly” not lamenting the loss of Google Reader’s functionality. This is because while current RSS readers offer the basics, Reader was a natural, cohesive component of my personal information ecosystem. Unsurprisingly, it looked and felt like (and integrated with) other Google tools like Gmail and Google Calendar, which I was using extensively at the time. As befit a Google product, Reader also offered excellent search capabilities. None of the RSS readers I’ve tried since offer the same level of coherence and integration that I experienced with Google Reader.

I sense Google Reader was a casualty of Google’s primary business model: selling its users’ attention to the highest bidder. I doubt RSS provided the scale or control required to run a mass advertising business. IMO it’s no coincidence that Google pulled the plug on Reader at a time when centralized social networks (Facebook, Twitter) were gaining traction in the mainstream. (Google+, which the company had launched a couple of years earlier, failed to take off. I wonder if they saw Reader as competition for G+?)

Six years after Google Reader’s disappearance, we’re wiser to the limits of centralized control over news aggregation. Subjectively, I sense many people are rediscovering the joys of blogging. (And, like me, using the social networks mostly as a way to publicize our blog posts.) Podcasts — which are based on syndicated feeds — seem to be more popular every year. While I miss Google Reader, I believe decentralized syndication is an essential part of the web’s future — not just its past. Is the time right for Google (or any of the other major tech platform companies) to embrace the platform again?

Collaborating by Default

Writing in his blog, Benedict Evans highlights the new wave of startups focused on personal productivity, “dozens of companies that remix some combination of lists, tables, charts, tasks, notes, light-weight databases, forms, and some kind of collaboration, chat or information-sharing.”

The cycle of bundling and unbundling functionality isn’t new:

There’s an old joke that every Unix function became an internet company – now every Craigslist section, or LinkedIn category, or Excel template, becomes a company as well. Depending on the problem, that might be a new collaboration canvas, or a new networked app, or a new network or marketplace, and you might switch from one form to the other. Github is a developer tool that also became a network – it became LinkedIn for developers.

What is new is the social nature of the experience. Old-school computing was lonely: the user interacted with his or her computer alone. Even if the system included communications software, such as email, interactions with other people were confined to that software. Today, we expect web-based applications to be collaborative by default.

We experience software differently when we assume other people will be sharing the place with us. As I’ve written before, we may ultimately discover that the purpose of social media was to teach us how to collaborate with people in information environments.

Mobile Computing at a Different Level

There are many ways in which people use computers. (I’m talking about all sorts of computers here, including smartphones and tablets.) Some people’s needs are very simple; they may use the machines merely to stay connected with their world. Other people have more complex needs. You can organize these various uses on a continuum that ranges from least to most powerful. Consider at least three levels:

  1. Accessing Content: Computer is used primarily to find information on the internet. Users at this level interact with others through email or social networks, but do so lightly. They spend the bulk of their on-device time accessing content created by others. Many casual users are on this level; it’s also where they have the least power.
  2. Creating Content: In addition to the previous level, computer is also used as a content-creation device. While users at this level may spend a considerable amount of time accessing content created by others, they also produce lots of content of their own. Many people who use computers professionally are on this level.
  3. Tweaking Workflows: In addition to the previous two levels, the computer is also used to modify how the computer works. This includes enabling new workflows through programming or scripting. This level affords users the most power.

(There’s an even higher level, which is closer to the machine’s metal and affords a very small subset of people tremendous power. I’m not going to get into that here; I’m concerned with how most of us interact with these devices.)

Consider a transportation analogy. On level one, you are a passenger in public transport. On level two, you are driving a car. On level three, you are a mechanic, capable of making modifications to the vehicle to fix it or improve its performance. As with transportation, the higher the level, the more complexity the user must deal with.

Which level are you? If you’re like most people, you’re at either level 1 or 2. This is OK; very few people take advantage of level 3. Learning to program requires great effort, and for most uses the payoff may not seem worth the time invested.

I was around eight years old when I first interacted with a computer: a TRS-80 Model I. As with most machines of that vintage (late 1970s), when you sat down in front of a Model I you were greeted by a bare command prompt.

The computer could do very little on its own. You needed to give it commands, most often in the BASIC programming language (which, incidentally, just turned 50). So level 3 was the baseline for using computers at the time. We’ve come a long way since then. Most computers are now like appliances; you don’t need to know much about how they work under the hood in order to take advantage of them. However, knowing even a little bit about how they work can grant you superpowers.

Level 3 has come a long way from the days of the TRS-80. I’ve been messing around with the new Shortcuts functionality in iOS 12, and am very impressed with how easy it is to string together several apps to accomplish new things. For example, the Home ETA shortcut strings together the Apple Maps, Contacts, and Messages apps. When you install the shortcut, you configure it with your home’s street address and the contact information of the person you want notified. When you activate the shortcut (which you can do through various means, including an icon on your home screen), Apple Maps figures out your current location and uses it to calculate how far you are from home. Then it passes that information to Messages, which then sends your estimated time of arrival to your selected contact.

It’s not mind-blowing functionality, but the fact that iPhones and iPads can do this at all is impressive. iOS users can now create arbitrary connections between components of the system, opening up possibilities that were previously difficult or impossible. Shortcuts also promises to make these devices much better as productivity tools. It’s the old Unix “small pieces loosely joined” philosophy — but in a platform designed to be less of a computer than an appliance. It opens up level 3 possibilities for level 1 and 2 users, without asking that they become mechanics.
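The “small pieces loosely joined” idea behind the Home ETA shortcut can be sketched as a chain of tiny functions. To be clear, everything below is a hypothetical stand-in of my own, not the actual Shortcuts actions or any Apple API:

```python
# Toy sketch of the Home ETA pipeline: each step is a small function,
# and the shortcut simply chains their outputs. All names here are
# hypothetical illustrations, not real Maps/Contacts/Messages calls.

def current_location():
    # In the real shortcut, Apple Maps supplies the device's location.
    return (37.7749, -122.4194)  # example coordinates

def eta_to(destination, origin):
    # Stand-in for Maps' travel-time calculation.
    return "25 minutes"

def send_message(contact, text):
    # Stand-in for the Messages action; here we just return the message.
    return f"To {contact}: {text}"

def home_eta(home_address, contact):
    # The whole shortcut: location -> travel time -> message.
    origin = current_location()
    eta = eta_to(home_address, origin)
    return send_message(contact, f"I'll be home in about {eta}.")

print(home_eta("123 Example St", "Alex"))
```

The point isn’t any single step; it’s that each piece knows nothing about the others, and the value comes from the chaining.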


You’re likely to run across lots of information during your day. Much of it is disposable, but some you’ll probably need to refer to in the future. A lot of it might be useful someday, but you just don’t know right now. Given how easy it is to search digital information, and how cheap storage is these days, you may as well keep it. I’ve long experimented with “digital junk drawer” applications for this use. I’ve tried Evernote, Yojimbo, Google Keep, Apple Notes, and Org Mode for Emacs, but my favorite thus far is Microsoft’s OneNote.

I keep a lot of stuff in OneNote: clips from web pages, quotes from famous people, impressions from books I’ve read, ideas for future presentations, meeting minutes, half-formed thoughts, etc. OneNote provides easy means to clip snippets of information from web pages and other apps whether I’m on my Mac, iPhone, or iPad. This makes it possible for me to keep a central repository of things I’m learning as I go about my day. It all syncs through Microsoft’s cloud, so all three devices have the latest information on them.

But OneNote is more than just a scrapbook for me: It’s also where I keep my projects organized. Whenever I start a new project, I open a new notebook in OneNote devoted exclusively to it. OneNote notebooks can have “sections” in them. Most of my projects have at least two sections: “Notes” (random notes, including scribbles to myself) and “Meetings,” where I record meeting minutes. Some notebooks also have other sections, such as “Admin” and “Research.” I aim for consistency with the naming and color schemes I use to differentiate these subjects. This allows me to quickly make sense of what I’m looking at when I switch projects.
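The scheme is simple enough to sketch as a data structure. These notebook and section names are just my own conventions (and the pages here are made up); OneNote doesn’t prescribe any of this:

```python
# A sketch of how my OneNote project notebooks are organized: one
# notebook per project, with a consistent set of named sections.
project_notebook = {
    "name": "Example Client Project",  # hypothetical project
    "sections": {
        "Notes": ["Random notes", "Scribbles to self"],
        "Meetings": ["Kickoff minutes", "Status meeting minutes"],
        # Some notebooks add further sections:
        "Admin": ["Contract", "Invoices"],
        "Research": ["Background reading"],
    },
}

# Consistency is the payoff: every notebook has at least these sections,
# so switching projects never requires reorientation.
standard_sections = {"Notes", "Meetings"}
assert standard_sections <= set(project_notebook["sections"])
```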

Twitter and Third-party Apps

Yesterday, Twitter implemented significant changes to its APIs. As a result, accessing Twitter through third-party apps like Twitterrific and (my favorite) Tweetbot is now much worse. For example, one of my favorite Tweetbot features was its “Activity” tab, which gave me information about how people were interacting with me in Twitter. Now, it’s gone.

For me, this is not a trivial change. Twitter is my primary social network; I spend lots of time there. Or rather, I should say I spent time there. The change is making me rethink how much of my attention I apportion to this place. You see, it turns out I don’t like being in Twitter as much as I like being in Tweetbot. There are several reasons why.

To begin with, Tweetbot has native apps for both operating systems I use day-to-day (macOS and iOS). These apps are coherent (if not 100% consistent) across both platforms: I can easily move between one and the other. Twitter, on the other hand, has an iOS app but discontinued its first-party macOS app earlier this year. So accessing Twitter on the Mac means either using the website or a third-party app like Tweetbot.

The timeline — the main component of the Twitter experience — is also significantly different between Tweetbot and Twitter. Whereas the former presents a simple chronological list of items, the latter scrambles the order of tweets based on what it deems to be interesting to me. Parsing out what I’m looking at (and why) is more work than I want to put into it.

Another major difference between the two is that Tweetbot doesn’t show “promoted” tweets. (Read: ads.) That means the posts I see are the ones I signed up for by following particular accounts, not ones that paid for the privilege of being brought to my attention. (I suspect herein lies the primary driver behind the change to Twitter’s API; ads are how the company makes money.)

The bottom line: Twitter is a lot less compelling to me today than it was two days ago. I will probably be spending less time there. But where was it that I was spending my time? Am I a Twitter user or a Tweetbot user? The two have a lot in common, but they’re different information environments. While the underlying information is the same, the experience of the environments is very different. I like being in Tweetbot, less so being in Twitter.

And let’s look at this from Twitter’s perspective: the company will probably notice that I’m spending less time there, but will this affect their revenue? After all, I didn’t see many ads while accessing their system through a third-party client. So I understand why they’d want alternate-reality versions of Twitter — like the one Tweetbot offered — to go away in the near-term. But what does this mean for them in the long term, if it costs them loyal users like me?

Changing Your Personal Information Environment

Some people who do most of their work with computers also have some control over how that work is done. For example, as an independent information architect, I am my own IT department; I choose what tools I use. At this stage in my career, I’m proficient with most of them. Still, it’s important to occasionally look around for more efficient/effective ways of doing things.

Changing key components of your personal information environment is not something to undertake lightly. There are costs to doing so. The least onerous is the cost of the software itself; the big investment is in time spent learning new workflows and migrating to the new tool.

The various components of your personal information environment sit on a stack. At the bottom of the stack — the foundational layer — is your OS platform of choice. In my case, this is macOS. I’ve been using Macs for almost thirty years; changing to another platform (Windows, for example) would be tremendously costly.

Switching components higher up in the stack would be less onerous. For example, although I use Gmail for my email needs, I access it using Apple’s Mail app. I could change mail clients fairly painlessly; I’d just need to point the new application to my Gmail accounts. Yes, I’d lose some functionality in the process (e.g., links to individual messages from OmniFocus), but there’s not much work I’d need to do other than learn the new application. So if a new mail client comes along that is radically better than Mail, I’d be willing to give it a spin.

I’m currently testing an application that would replace one of the foundational layers of my information environment: OneNote. I’ve used OneNote as my note-taking and information-gathering system for many years. I have many dozens of notebooks in OneNote, and have internalized various workflows around this app. Changing this layer of my stack would come at a considerable cost.

Are big changes such as this one worth it? That depends on whether the new tool allows you to do important things that the old tool won’t, or allows you to do similar things significantly better/faster. To be worth it for me to switch from OneNote, I’d need to see orders-of-magnitude improvements. Alas, it’s difficult to evaluate worthiness without extensive testing, and that in itself is a big time sink. That said, there are also significant opportunity costs to continuing to use a tool that may be less efficient/effective.

Making time to experiment with new components in your personal information environment can open up new possibilities; it can make you more efficient, and even give you new superpowers. But undertaking such changes is not something to be taken lightly, as it can come with significant costs. Sometimes, leaving well-enough alone is the wiser choice.

Design and Implementation Trade-offs

A couple of days ago I wrote about how important it is for designers to know their materials. The material for interaction designers is code, so a baseline understanding of what code can and can’t do is essential for designers to be effective.

I learned this principle in one of my favorite books: The Art of Computer Game Design, by Chris Crawford (Osborne/McGraw Hill, 1984). Crawford was one of the early Atari game designers/implementors. (I use the slash because the distinction wasn’t as clearly drawn then as it is now.) His book lists seven design precepts for computer games. The seventh of these is titled “Maintain Unity of Design Effort,” and includes the following passage:

Games must be designed, but computers are programmed. Both skills are rare and difficult to acquire, and their combination in one person is rarer still. For this reason, many people have attempted to form design teams consisting of a nontechnical game designer and a nonartistic programmer. This system would work if either programming or game design were a straightforward process requiring few judicious trade-offs. The fact is that both programming and game design are desperately difficult activities demanding many painful choices. Teaming the two experts is rather like handcuffing a pole-vaulter to a high jumper; the resultant disaster is the inevitable result of their conflicting styles.

More specifically, the designer/programmer team is bound to fail because the design will make unrealistic demands on the programmer while failing to recognize golden opportunities arising during programming.

Crawford illustrates this by using a couple of examples from his career. One that’s stuck with me comes from the development of the game EASTERN FRONT 1941, a war game for the early Atari 8-bit computers. While he was programming the game (which he’d also designed), Crawford spotted an opportunity: a simple addition to its calendar routines would allow color register values to change as game time progressed. This allowed the color of trees to change to reflect the seasons. A minor detail for sure, but one that added depth to the experience. (Keep in mind that programming for these early computers meant always optimizing for limited memory. This minor change came at the expense of only 24 bytes of computer memory; a “cost-effective improvement” in Crawford’s words.)
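The spirit of that tweak is easy to sketch in modern terms: the calendar routine already tracks the date, so one extra lookup picks a palette entry for the current season. This is a loose illustration of mine, not Crawford’s actual Atari code, and the color values and date cutoffs are made up:

```python
# Loose sketch of tying display color to the game calendar, in the
# spirit of Crawford's EASTERN FRONT 1941 tweak. The hex values are
# illustrative placeholders, not actual Atari color-register values.

SEASON_TREE_COLORS = {
    "spring": 0xC8,  # light green (illustrative)
    "summer": 0xC6,  # darker green
    "autumn": 0x28,  # orange-brown
    "winter": 0x0E,  # near-white
}

def season_for(day_of_year):
    # Rough seasonal boundaries (Northern Hemisphere, approximate).
    if day_of_year < 80 or day_of_year >= 355:
        return "winter"
    if day_of_year < 172:
        return "spring"
    if day_of_year < 266:
        return "summer"
    return "autumn"

def tree_color(day_of_year):
    # The calendar already advances day_of_year each turn; this one
    # lookup is the entire "cost" of seasonal tree colors.
    return SEASON_TREE_COLORS[season_for(day_of_year)]
```

A table lookup keyed off state the program already maintains is exactly the kind of cheap, opportunistic improvement Crawford describes.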

Software development is much less painful today than it was in the late 1970s and early 1980s. Still, limited budgets and timeframes call for trade-offs. Knowing where the opportunities and constraints are helps when you’re called to decide what to include and exclude in the work.

Tools of the UX Trade

The tools we use when we design have an important influence on the work we produce. Conversely, sometimes the work we want to do can’t be carried out with the tools we have. This nudges us to either look to other fields for inspiration or invent new tools altogether.

As a child, the architect Frank Gehry was fascinated with fish. This fascination carried through to his work. In the 1980s, Gehry started producing fish-shaped lamps, and eventually won a contract to produce a large fish-shaped sculpture for the 1992 Olympic Games in Barcelona.

Sculpture by Frank Gehry, Barcelona (1992). Image by Till Niermann, CC BY-SA 3.0 via Wikimedia.

Gehry’s team needed to figure out how the fish would be built. Traditional architectural drawings are best at describing buildings composed of flat planes and volumes, but this structure’s undulating surfaces were anything but. The standard tools of the trade weren’t going to cut it.

One of Gehry’s collaborators suggested they look at a software tool called CATIA, which had been developed by the French aerospace firm Dassault Systèmes for designing aircraft. CATIA allowed Gehry’s team to delegate the complex calculations to computers, and made the fish structure a reality.

CATIA also opened new possibilities for the firm — and the field of architecture more broadly. Buildings such as the Bilbao Guggenheim Museum and the Walt Disney Concert Hall in Los Angeles wouldn’t have been possible to design and build using traditional tools. Introducing new tools into the mix made a new type of building possible, and the field of architecture hasn’t been the same since.

Walt Disney Concert Hall in L.A., by Frank Gehry. Image by Visitor7, CC BY-SA 3.0 via Wikimedia.

When I look at the tools UX designers use, I mostly see software aimed at designing screen-based user interfaces. Applications such as Photoshop, Illustrator, and Sketch are excellent at rendering forms on flat screens, but not much more than that. This constrains possibilities; as the cliché says, if all you have is a hammer, everything starts to look like a nail… and these apps are all hammers.

We also lack tools for exploring the semantic structures and relationships that underpin information environments. The closest we come is whiteboards and diagramming apps such as Visio and OmniGraffle. I’ve met many taxonomists whose primary tool is Excel; software designed for manipulating numbers!

There are clear gaps in this space. It’s surprising, given that the focus of UX design is often software itself. Why haven’t we produced tools suited to the needs of designing information environments? Is it a matter of the market not being big enough? Or do they exist and I’m just not aware of them? What tools from other fields could we adopt to meet our needs?