Fixing Tech’s Design Problem

UX designers, engineers, and the iPhone's co-designer are reassessing the priorities behind the ways we interact with apps and devices.

Artwork by Linn Fritz: an array of oversized devices, buttons, and interfaces manipulated by various individuals.

In early 2017, a group of Silicon Valley technologists gathered for an industry dinner in the Bay Area. Around the table were CEOs, founders, venture capitalists, product managers, and designers. Bethany Bongiorno, who had recently resigned from her role as director of software engineering at Apple, addressed the group. “These things that we’re very involved in creating, do they make us happier?” she asked. “Are we ultimately happier, or are we not, as a society?”

“There was a lot of divisiveness around the table,” recalls Bongiorno, recounting the response that evening. Imran Chaudhri, Apple’s outgoing director of design and a co-designer of the iPhone, was also in attendance. He and Bongiorno had resigned from Apple together in 2016, staying on for a few months into the new year to finish off projects. They would soon marry and launch a new technology venture called Humane, aiming to develop a product that would leave users feeling, well, happier. But this was still to come. On that evening in early 2017, the idea that tech platforms and the devices we use every day aren’t designed with the user’s well-being primarily in mind was “very polarizing,” according to Chaudhri. “To some extent, it still is,” adds Bongiorno. “It was something that we, along with everybody else, have been thinking about for the past few years.”

Since 2016, the industry has been the focus of a growing public reckoning around the effects and implications of technology and data. The Cambridge Analytica scandal was implicated in that year’s Brexit referendum and US presidential election, and Facebook CEO Mark Zuckerberg soon found himself testifying about Russian troll farms and electoral meddling before Congress. “It’s clear now that we didn’t do enough to prevent these tools from being used for harm,” he remarked. 

But it was increasingly clear to others in tech that platforms and products weren’t simply neutral entities suddenly beset by bad actors. They had features designed into them — features so seamless that the user hardly registered them as being designed at all — that were accelerating and enabling privacy breaches, polarization, hate speech, endless distraction, and so on. Dark patterns, a term coined by Harry Brignull to describe the intentional manipulation of user behavior through UX design (there are also less polite terms for it), seemed to be rife, in more or less extreme forms, throughout our devices. Silicon Valley UX designers and mid-level engineers, in particular, began to speak out — and eventually walk out.

Bongiorno and Chaudhri do not regard themselves as part of this wave. (They left “before it became in vogue to talk about it,” says Chaudhri.) They are proud of the work they did for Apple, which includes, for Bongiorno, helping produce the first iPad and leading the software program for iOS and macOS. For Chaudhri, well, the list is long. As Apple’s UX design whiz, he was responsible for many of the frictionless features that made Mac OS X intuitive to use, laying the groundwork for multiple products to come: the iPhone, the iPad, the Apple Watch. The way windows fluidly minimize into the Dock, so that users can see where they go? That’s Chaudhri and his team. The spinning wheel of death that kicks in when an application stalls? That’s him, too. By the time the iPhone launched in 2007, he had designed much of its interface. Five years later, Chaudhri designed the product’s “Do Not Disturb” function. But after 21 years at Apple, it was time to move on.

“Inside, getting people to understand that [distraction] was going to be an issue was difficult,” Chaudhri remarked to Fast Company in 2018. “Steve [Jobs] understood it… internally though, I think there was always a struggle as to how much control do we want people to have over their devices. When I and a few other people were advocating for more control, that level of control was actually pushed back by marketing. We would hear things like, ‘you can’t do that because then the device will become uncool.’”

At the time, questionable design features were being scrutinized. Tristan Harris, a former Google product manager and design ethicist who left the company in 2015, called out Big Tech for monopolizing users’ attention in a series of TED talks and interviews. He would eventually launch the Time Well Spent movement and later the Center for Humane Technology. Harris called smartphones and mobile apps “the slot machine in your pocket,” and pointed out the obvious: tech companies whose business model relies on data brokerage have a financial imperative to keep users on their devices or platforms for as long as possible. Design will follow.

“How do you design products that are fundamentally designed for people to take full advantage of technology, not for business models and companies to take full advantage of people?”

By 2017, Harris’s arguments seemed to be vindicated by the likes of Loren Brichter, who had designed the pull-to-refresh feature implemented in early versions of Twitter. “Smartphones are useful tools,” Brichter told the Guardian. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things.”

The Facebook engineer who created the “like” button, the developers of push notifications, a programmer behind YouTube’s recommendations algorithm — all spoke out about the harmful effects of their creations. Even Nir Eyal, the author of the 2014 bestseller Hooked: How to Build Habit-Forming Products, had pivoted, authoring the self-help book Indistractable: How to Control Your Attention and Choose Your Life, seemingly without any sense of irony.

Underpinning this Frankenstein moment was the apparent belief that UX design, in and of itself, was capable of causing all manner of ills, particularly addiction. Eyal’s “hook model,” drawing loosely on the work of Stanford University’s Persuasive Technology Lab (which is now called the Behavior Design Lab), did a lot to popularize the notion that design features such as pull-to-refresh — a type of “variable reward,” a term borrowed from the gambling industry — could have neurological effects on users.

“Research shows that levels of the neurotransmitter dopamine surge when the brain is expecting a reward,” he wrote. “Introducing variability multiplies the effect, creating a focused state, which suppresses the areas of the brain associated with judgment and reason while activating the parts associated with wanting and desire.”

Harris and others seized on this idea, making “dopamine addiction” a core element of their tech critique. The idea that our brains could be programmed, hijacked, or controlled by our devices is perhaps best exemplified by The Social Dilemma, an alarmist Netflix documentary made with the Center for Humane Technology that presents a fictional teenage boy as the zombified plaything of three personified algorithms-turned-puppet-masters. (The Center for Humane Technology turned down an interview request for this article.)

Such casual references to neuroscience should be treated warily. In her book Don’t Be Evil: How Big Tech Betrayed Its Founding Principles — and All of Us, the Financial Times tech columnist Rana Foroohar claims that “our devices and the things we do on them are just as addictive as nicotine, food, drugs, or alcohol, and there is a trove of research that proves it.” But the research she cites is a Goldman Sachs report, not an academic study, showing that the average user “spends 50 minutes per day on Facebook, 30 minutes on Snapchat, and 21 minutes on Instagram.” Leaving aside whether those numbers are particularly excessive, the report says nothing about whether user behavior on these apps can be classified as addictive.

The truth is that research into the relationship between the use of technological devices and addiction is largely inconclusive — addiction, of course, is a complex phenomenon with multiple factors. Where screen time interferes with sleep and socializing, there are measurable health outcomes. But what exactly happens in a user’s synapses when a push notification pings on their phone remains unclear. In his recent book, The Idea of the Brain, the biologist and historian Matthew Cobb dismisses talk of addictive dopamine hits from social media as “nonsense” and “neurobollocks.”

That’s not to say that tech doesn’t have a design problem, only that framing it as a matter of neurological hijacking might be inaccurate and unhelpful. But whether we have the hard science to quantify it or not, it is evident that plenty of consumers — and indeed plenty of designers — experience digital products as distracting, harmful, and data-exploitative. So how should users and conscientious designers approach the problem? 

On the user’s part, withdrawal seems like an obvious knee-jerk reaction; the rise of “digital detox” camps and self-help books over the past decade is testament to this. Sensing a change in public perception, large tech companies have in recent years launched various tools to help users reduce their screen time — for example, Apple’s Screen Time feature and Google’s Digital Wellbeing Experiments.

But the fact remains that users putting their devices down is not in the interest of these businesses. Features and controls that enable users to change data privacy and usage settings are rarely, if ever, enabled by default, and are typically difficult to locate.

“The controls exist for you,” Chaudhri said in 2018. “Yet it’s incredibly hard to know how to use them and to manage them. You literally have to spend many days to go through and really understand what’s bombarding you and then turn those things off in a singular fashion. So for the people who understand the system really well, they can take advantage of it, but the people that don’t — the people that don’t even change their ringtone, who don’t even change their wallpaper — those are the real people that suffer from this sort of thing.”

Should designers just do better, then? Mike Monteiro thinks so, arguing that UX designers should make a point of designing ethically. In his 2017 text, “A Designer’s Code of Ethics,” Monteiro (who would later write the book Ruined by Design) urged his fellow designers to take individual responsibility for their creations. “By choosing to be a designer you are choosing to impact the people who come in contact with your work, you can either help or hurt them with your actions,” he wrote. “Have you veered off course? Correct it. Is your workplace an unethical hellmouth? Get another one.” Easy, right?

Not so fast. While it’s obviously good to aspire to do good, that is no guarantee that an individual designer’s work will have positive effects across a platform or device with billions of users. The inventors of Facebook’s “like” button, for example, originally intended it to “send little bits of positivity” to its users — which sounds lovely. As Harris and others have argued, it’s not that UX designers in Silicon Valley have conspired to create products that make everyone miserable. They are simply following the financial imperatives of a business model that monetizes clicks and scrolls and taps and shares. The individual designer’s conscience is neither here nor there compared to the powerful economic incentives that govern the businesses they work for.

For Bongiorno and Chaudhri, it made no sense to leave Apple and join another giant tech company. They decided to build from scratch with Humane, a secretive venture that is developing a product that they hope will rival the iPhone. “We wanted the name to be about how you feel when you work here, and ultimately, how we wanted people to feel when they use our products,” says Bongiorno. They can’t say much about the product or its business model, although they insist they want to remain “self-funded as long as we can.” As such, it is anyone’s guess how Humane will respond to what its creators call “a trust crisis” in tech. But they agree that fundamental changes need to take place in the industry.

“The focus needs to get back to people,” Chaudhri insists. “How do you design products that are fundamentally designed for people to take full advantage of technology, not for business models and companies to take full advantage of people?” As self-proclaimed tech optimists, Bongiorno and Chaudhri believe such a recalibration is possible — but only by starting their own business bottom up. “We’re obviously not going to be able to solve even a fraction of [the trust crisis],” says Bongiorno. “But what we can do, and what we need to do, is just build better.”

Chaudhri is famous for his knack for creating intuitive and frictionless interfaces, and Humane’s product will be no different, say its creators. The product is “mind-blowing,” claims Bongiorno. “It’s so obvious. Of course it should be like that. That’s Imran’s superpower.” Creating seamless, friction-free user experiences has been the stated design ideal of Silicon Valley for decades — a school of thought championed, early on, by Steve Krug in his influential 2000 book Don’t Make Me Think. But that ideal is coming under increasing scrutiny: this design philosophy, critics argue, treads a fine line between convenience and deception. There’s a convenient button to share an article you haven’t read on Twitter, but should it be quite so convenient? It’s convenient to click an already-highlighted consent button before installing an app, but does this disguise the complexity of the transaction? It’s convenient to have all of your files automatically synced via iCloud, but shouldn’t you be asked first?

Not everyone is in a position to build a new iPhone-like product from scratch. John Fallot and Joel Putnam created the Prosocial Design Network in 2020 after they met at a talk-back session for the Center for Humane Technology’s Your Undivided Attention podcast. Their design approach centers on smaller fixes to existing platforms. “I really like what [the Center for Humane Technology] does in terms of flagging possible harms that are prevalent through the current business model of large tech companies,” says Putnam. “But we’re specifically interested in what it is that causes human-to-human interaction to go sideways where it wouldn’t in person.”

Putnam and Fallot have compiled a growing library of design interventions that have been shown to improve discourse on social media. (There is a whole grading system, from “tentative” to “validated,” indicating how much evidence there is for the success of each feature.) One example is “disallow sharing of unclicked links,” which is currently being trialed by both Twitter and Facebook. (A 2016 Columbia University study estimated that 59 percent of links posted on Twitter had not been opened prior to sharing.) Others include “messages prebunking misinformation on controversial topics,” and, in a separate tab for untested interventions, “delay-by-default posting,” which gives users a moment’s pause before publishing a post or comment. “There are some situations where it makes sense to add some friction, especially in the case of social interactions,” says Fallot, who is a certified UX designer.
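
As a rough sketch of what such interventions might look like in practice, the snippet below (TypeScript, with hypothetical names that are not drawn from any platform’s actual codebase or the Prosocial Design Network’s library) adds two small points of friction: a confirmation prompt before sharing an unclicked link, and a short, cancellable delay before a post goes out.

```typescript
// Illustrative sketch only: all types and function names here are hypothetical,
// not taken from Twitter's, Facebook's, or any real client's implementation.

interface ShareRequest {
  url: string;
  openedByUser: boolean; // has this user actually clicked through to the link?
}

// Friction pattern 1: a soft version of "disallow sharing of unclicked links".
// Rather than blocking the share outright, ask the user to confirm it first.
async function confirmShare(
  req: ShareRequest,
  ask: (message: string) => Promise<boolean>
): Promise<boolean> {
  if (req.openedByUser) return true;
  return ask("You haven't opened this link yet. Share it anyway?");
}

// Friction pattern 2: "delay-by-default posting". The post is held briefly,
// giving the author a window to cancel or edit before it is published.
function delayedPost(
  publish: () => void,
  registerCancel: (cancel: () => void) => void,
  delayMs = 10_000
): void {
  const timer = setTimeout(publish, delayMs);
  registerCancel(() => clearTimeout(timer)); // surfaced to the user as an "Undo" button
}
```

Both patterns share the same intent: a moment of pause at exactly the step where sharing would otherwise be instantaneous.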

“A lot of this is about creating the space to have ethical conversations in work, and that’s really hard when all of the incentives and ways of working are about shipping products at speed.”

Fallot and Putnam acknowledge that introducing such friction is often not in the short-term interest of tech corporations, but they also suggest that designers are in a position to push back against unhealthy models by making a business argument. “People who use a site that feels good to be a part of are more likely to be long-term users, feel better about their use of it, and encourage their friends to join it,” says Putnam. “I think [the tech industry] is noticing that and starting to make a case for it.”

While this may be the case for some companies, there are still shocking instances of business interests trumping prosocial design interventions. Putnam and Fallot point to the example, in the lead-up to the 2020 US presidential election, of Facebook tweaking its recommendation algorithm to dial down the vitriol in favor of more reliable news sources. After a few weeks, the company’s engineers wanted to make the “nicer” algorithm permanent, but it was ultimately rolled back. “They ended up switching it off,” explains Putnam, “because, while people were happier with it, they didn’t spend as much time scrolling through Facebook.”

Design decisions can potentially wield enormous power. Consider the recent kerfuffle between Apple and Facebook over data privacy permissions, and how the introduction of a clear opt-out choice reveals the data brokerage model to be exceedingly brittle. But in order for interventions such as those proposed by the Prosocial Design Network to be introduced, designers need agency and support within the businesses for which they work. Here, Fallot is optimistic about what he describes as an ethical turn in Silicon Valley investment. “Impact investing in corporations and companies that pursue an additional ethical return on investment,” he says. “Those offer an additional financial return on [prosocial design].”

Sarah T. Gold, the founder of London-based design consultancy Projects by IF, focuses on the designer’s role within the broader workplace. She has also cataloged progressive design interventions, but primarily emphasizes so-called responsible data patterns. “One of the most powerful moments was giving the patterns names,” says Gold. “We hear from product teams that just naming patterns gives teams an idea to center on.” Patterns cataloged on IF’s website include “just-in-time consent,” “short-distance sharing,” and “setting permissions up-front” — all different options for handling and accessing data, as well as for presenting those choices to the user on-screen. Benefits, limitations, and examples of each pattern are provided, with the acknowledgment that not every pattern will fit every business model.
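
To make one of those patterns concrete, here is a minimal sketch of “just-in-time consent” (in TypeScript, with hypothetical names; this is not code from IF’s catalog or any real SDK): the application asks for a specific piece of data at the moment a feature needs it, and explains why, rather than bundling every request into an up-front permissions screen.

```typescript
// Illustrative sketch of "just-in-time consent"; all names are hypothetical.

type DataUse = "location" | "contacts" | "photos";

interface ConsentPrompt {
  // Shows the user a purpose-specific request and resolves with their decision.
  ask(use: DataUse, reason: string): Promise<boolean>;
}

class ConsentManager {
  private granted = new Set<DataUse>();

  constructor(private prompt: ConsentPrompt) {}

  // Ask only when a feature actually needs the data, stating the specific purpose.
  async requireConsent(use: DataUse, reason: string): Promise<boolean> {
    if (this.granted.has(use)) return true;
    const ok = await this.prompt.ask(use, reason);
    if (ok) this.granted.add(use);
    return ok;
  }
}

// Usage: a map feature requests location only when the user taps "Find nearby",
// e.g. `await consent.requireConsent("location", "to show cafés near you")`.
```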

“There’s a big data approach that organizations take when they think about data, because it’s the predominant business model and it’s something that’s grown out of Silicon Valley,” Gold says. “How can we move to a different mindset? How do we move from the return-on-engagement model to return-on-trust?”

UX designers can support such a shift. “Designers are really great facilitators, in that we’re able to hold the space for conversations across different disciplines,” Gold says. But she also stresses the importance of company structure. “It can be one of the biggest barriers for change if you don’t have a leadership cover for this kind of work. A lot of this is about creating the space to have ethical conversations in work, and that’s really hard when all of the incentives and ways of working are about shipping products at speed. So to say that we need to add friction into our product process, that takes bravery and leadership cover.”

Such leadership cover might in part be found in the role of chief data officer (CDO), suggests Gold. The CDO decides what data an organization captures, how it collects it, and to what end. Two years ago, 67.9 percent of large companies reported having a CDO on staff, up from only 12 percent in 2012. Of course, having a CDO on board is no guarantee that these corporations are handling data any more conscientiously. But Gold views the emergence of a dedicated role for data handling with cautious optimism: “The emergence of the CDO role is really interesting because it is leading a different kind of transformation in business.”

Tech’s design problem cannot be viewed in isolation from the business models that govern its biggest players. Users can go some way toward deactivating features that they find offensive, distracting, or otherwise harmful. Individual designers can reflect critically about their work. But the real responsibility resides at the dinner table at which we began: with the investors, the CEOs, the heads of monetization, the engineers, and yes, the data officers and designers too. Is it naive to think that such a group might have a conversation, one day, in which concern for the user — their privacy, their attention, their happiness — is a given, rather than a provocation? 
