“So, you want to join the league of evil designers? Come in, sit down. Is the chair uncomfortable? Good. Let’s go through your application.
Ah, 2 years in online casinos? Not bad. Nice touch with the ol’ button switcharoo for users trying to cancel a subscription service! And what else… Non-consensual microphone activation for targeted ads? Very impressive. You’ll fit right in!
When can you start? We need all the help we can get with mass collecting personal data. Well, that, and removing all visual indications that something is interactive. Isn’t it the best when people get all confused and frustrated? Oh by the way, we get together on Saturdays to watch them fail.”
Over the last few years, we’ve become disillusioned with services we once touted as revolutionary.
Like French peasants watching Robespierre’s rise to power, we realised that the movement that gave us so much freedom also spawned terrible oppressors. Social media platforms became battlegrounds for international warfare. We realised we were trading intimate secrets for cat GIFs, memes, and ‘what-food-are-you’ quizzes. Unrelenting ads stalked us across digital plains. The internet fell from innocence into a dirty and complex reality.
For those of us who create things for this brave new world, it’s been a time of introspection. Who are we, and what was our role in this mess? Many of us answered:
“Must be all those evil designers, not me.”
It’s convenient to imagine that there is some great evil conspiracy out there, orchestrating sinister plans for world domination. Shifting blame is easy.
The truth is hard. The truth is that we’re all part of the problem.
Every single one of us could easily make a decision that ruins thousands of people’s lives.
Hanlon’s razor states: “Never attribute to malice that which is adequately explained by stupidity.”
This applies to design as much as any other field. A designer’s job is to consider many sides of the same scenario, but this becomes more difficult as the number of scenarios grows. Even the best of us make terrible decisions because we forget, don’t understand, or aren’t aware of the consequences. This is especially true when we’re designing products at a massive scale, for a diverse set of users.
For instance, a straight designer may not consider how group privacy settings could involuntarily out a user’s sexual preferences. A European designer might forget that the concept of “first and last name” varies across countries and cultures. A designer working on the latest iPhone may insist on high-quality images, unintentionally making the web unaffordable and difficult to use for those on expensive data plans.
These “edge cases” are where we risk doing the most damage. Affecting even a few people is bad enough, but, as Mike Monteiro put it, when 1% of your user base means 20 million humans, failing to address them or turning a blind eye has severe consequences for real people.
Additionally, large user bases can have contradictory realities, which forces you as a designer to make a choice. For example, there isn’t a neutral answer to the question “How should the map of Israel be drawn?”.
The road to hell is paved with good intentions.
Most of us believe in the companies we work for. We believe that it’s a good thing they exist, and that they do good things for people. So when we are asked to work towards a goal like “more users should be more engaged on our product”, we do our best to make sure that happens. We look at the least engaged users and identify where they’re struggling. We set a metric we want to move by solving their problems, so that we can check if our solutions are effective. So far, so good, right?
The trouble is, the things that really matter to humans — safety, happiness, belonging, and love — are almost impossible to measure. So, you operate on the assumption that the metric you choose (registration, engagement, retention) is a shorthand for those feelings. Your goal is a number, your problem becomes a number that should move, and your solutions move it. But sometimes we forget to question if it’s the right number, what it really means, and if it’s right to move it.
You want people to stay in touch with their friends, but should that mean getting them to spend more time on their phones?
You want your algorithms to help people find content they’ll engage with, but what if people are more likely to engage with content that makes them angry?
The bigger a company gets, the more specific these numbers get. Somewhere along the line you get a problem statement like “help more Chrome users log in to their browser”, and you arrive at a solution of automatically logging people in when they confirm their details for another Google service. There’s no doubt that a lot of good intentions went into thinking through this problem and its solution. The metrics probably looked great, but the numbers couldn’t show how users saw this as an unforgivable breach of trust.
It’s easy to be swept up by the narrative that these things are orchestrated by some evil, genius, money-grabbing CEO. Sure, there are people who work on things with clear ethical implications. But to think they are the root of all bad design decisions lulls us into the belief that we’re in the clear. That as long as we are ethical humans working for companies we believe in, we are incapable of “doing evil”. That’s when we’re at the greatest risk of forgetting an edge case, relying too much on our metrics, not questioning whether we’re really doing the right thing. Of making an off-the-cuff call on something we consider minor, forgetting how many people interact with it. Yes, you should be worried. We should all be.
It turns out staying out of the league of evil designers is harder than getting in. How do we stay out of their ranks?
- By being aware of your power to harm users. No matter how diligent or careful you are, there is still a chance you will fuck things up. Don’t stop paying attention once something is shipped, and act when you see that something turned out wrong.
- By asking “who’s missing?” when going through your solutions. Have you thought about users who are LGBTQ, differently abled, elderly, from other countries, politically exposed, parents, economically challenged, in abusive relationships, deeply religious, suffering from eating disorders, depressed, sick, or injured? What’s the worst thing your design decision could do to them? Run your solutions by a diverse set of people. PR and legal teams are some of the best at catching this type of thing.
- By making user emotions as important as your other metrics. Champion their importance, and find a way to verify that you’re achieving your emotion-related goals.
- By choosing the right emotions to focus on. Maslow’s hierarchy of needs is helpful here: if you’re designing for ride-sharing drivers, there’s no point focusing on pride and esteem if they can’t get enough rest or don’t feel safe.
- By always asking yourself “is this really the right thing to do?”, even if you believe in the company you work for. Never stop asking. A goal that’s been passed down might have lost its initial user focus. Bring it up with superiors you trust to have your back. Chances are, pointing it out is not a spanner in the works of some evil machinery, but a welcome sanity check.
Does this sound like a lot of work? Well, I might just have an uncomfortable chair for you.