Mark Wilson, writing in Fast Company:
According to a new research paper published by the analyst firm Forrester—for which researchers interviewed nearly 200 design teams and dozens of frontline workers in fields like retail—the enterprise software we use at work is slowing us down, and for all sorts of reasons, from individual components of the UI to the workflows that take us from one piece of software to another.
We connected with Andrew Hogan, Forrester’s principal analyst specializing in design, who led the research. He points out some of the biggest problems he sees in these tools and offers critical insight on how some companies are fixing enterprise UX.
Hogan discusses several issues with enterprise software, including slowness, unclear workflows, and — my favorite — bad labels. The article also covers some reasons why enterprise software tends to suck.
Gilad Edelman, writing in Wired:
The task of regulating an increasingly out of control digital environment often looks like a multifront war against various enemies: privacy breaches, hate speech, disinformation, and more. What if we had a weapon that could bring all those armies to their knees?
The article highlights a “nascent movement” of people who believe the business model underlying these environments — targeted, personalized advertising — is the main problem. Rather than focusing on front-end efforts to legislate what happens in these places, a more impactful approach would be to make the business model itself illegal.
If you’ve read Living in Information, you won’t be surprised to know I agree with the assessment that business models are critical. That said, I don’t think there’s a one-size-fits-all approach to this issue. I can easily imagine targeted advertising making some information environments more useful while also supporting user goals.
As a user myself, I’m willing to give up some of my privacy if I get something tangible in return and clearly understand who’s using my information and for what purposes. For example, I don’t mind if the place where I do my shopping shows me ads that meet my needs. I’m there to buy stuff, after all, and the place knows who I am, my preferences, and my shopping patterns. Knowing those things, it can tell me about new products that will make my life better. That has value to me.
I don’t feel the same way about places where I meet with family and friends or have civic conversations with my neighbors. The general idea behind targeted advertising — that the system will learn my preferences so it can better persuade me — is profoundly at odds with my goals in those environments.
Why Don’t We Just Ban Targeted Advertising?
Must-read post by Om Malik:
“Content” is the black hole of the Internet. Incredibly well-produced videos, all sorts of songs, and articulate blog posts — they are all “content.” Are short stories “content”? I hope not, since that is one of the most soul-destroying of words, used to strip a creation of its creative effort.
The World Wide Web is the most powerful medium for learning, sharing, and understanding our species has created. Our descendants will judge us harshly on the first thing we tried to do with it: commoditize our attention by packaging our insights and humanity into transactional units.
(The optimist’s take: It’s still early days; we haven’t yet tapped the web’s full potential.)
The Problem With “Content” — On my Om
Max Read, writing for New York magazine:
How much of the internet is fake? Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”
And it’s not just traffic. The article highlights other aspects of online life that aren’t what they appear to be, from businesses and content to the people behind them. As participants in digital information environments, we must increasingly grapple with thorny philosophical questions: What is real? Who is a person? What’s trustworthy?
This situation isn’t inherent to digital information environments. It’s the result of bad incentive structures. Trafficking in advertising — the buying and selling of human attention — has had pernicious effects on the internet. It’s created an economy of deception in one of the most beautiful systems our species has created.
How Much of the Internet Is Fake?