Few people think much about web accessibility. Even fewer people understand it, and without understanding there won’t be empathy or change. However, a string of high-profile class action lawsuits in 2019, like the one against Beyoncé’s management company, brought the issue to light — and I expect we’ll see a lot more companies prioritizing web accessibility in 2020.

At the same time, the US and many European countries are more rigorously enforcing their laws on web content accessibility. Companies will no longer be able to afford to ignore this issue.

It’s no surprise that visually impaired people are demanding better access. Populated with Instagram ‘stories’ and online stores that display 360-degree, high-definition product images and video, the web has become a visual public space. That puts many people at an unnecessary disadvantage.

It’s not only lawsuits driving demand for greater visual access, but the growing problem of poor vision. According to the World Health Organization (WHO), around 1.3 billion people have some visual impairment, such as low vision, color blindness, and (partial) blindness. That’s nearly 20 percent of the global population — including your website visitors — who struggle with accessibility.

Adjust your mindset

I’ll be the first to accept that the US is more lawsuit-happy than most. Maybe you live in another country, with a different legal culture and don’t think you need to worry about accessibility. That’s the wrong mindset.

As a frontend developer, I do care about web accessibility; it’s my responsibility. I don’t want visitors just to have ‘access.’ I want them to have the best possible experience. Let’s take a closer look now at some of the different types of visual impairments people have and how to address them on a site.

Light sensitivity

Light sensitivity is a very common issue, especially for people who sit in front of computer screens all day. Light-sensitive people can find it hard, painful, or even impossible to read and concentrate under bright lights, on bright screens, or on web pages where bright colors are combined.

This is why most developers like me switch to dark themes in their dev tools, IDE, or their OS (if it has one). It’s also why popular apps like Twitter, Google, Facebook Messenger, and recently iOS, provide “Dark Mode.”

One difficulty is that there’s no one standard for light sensitivity. It varies by person and setting, so it’s impossible to devise one configuration set that works for all light-sensitive people.

Solution: Light sensitivity themes

Offer “Dark Mode” or a “Light Theme” for your users and allow them to set the brightness, essentially letting them decide for themselves.

There are several approaches to achieve this, depending on your technology stack and browser support. A straightforward way is to combine a CSS custom property with the CSS invert filter: filter: invert().

By setting invert(1), the browser inverts every color in your app to its exact opposite.

/* Define a CSS variable for dark/light mode */

:root {
  --nightMode: 0;
}

/* Apply the mode change to the whole app */

body {
  filter: invert(var(--nightMode));
}
This filter effect also applies to all images within the app. You might want to add some code to make sure colors are preserved even in inverted mode (dark or light).
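One common way to preserve colors, building on the variable above, is to apply the same invert filter to media elements a second time: two inversions cancel out, so photos keep their natural colors. A minimal sketch:

```css
/* Re-invert images and video so they keep their original colors
   in dark mode; invert() applied twice cancels out. */
img,
video {
  filter: invert(var(--nightMode));
}
```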

Warning: filter is still not supported in IE. If IE support is critical for your app, consider another approach, such as CSS-in-JS (e.g., styled-components for React or Vue).
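To let users flip the mode themselves, a small script can toggle the variable; a sketch (the element ID is illustrative, not from the original article):

```html
<button id="mode-toggle">Toggle dark mode</button>
<script>
  // Flip --nightMode between 0 and 1; the CSS filter does the rest.
  const root = document.documentElement;
  document.getElementById('mode-toggle').addEventListener('click', () => {
    const current = getComputedStyle(root).getPropertyValue('--nightMode').trim();
    root.style.setProperty('--nightMode', current === '1' ? '0' : '1');
  });
</script>
```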

Contrast sensitivity

Contrast sensitivity issues arise when people struggle to read text against its background: white text on a light background, black text on a dark background, or text placed over a visually ‘busy’ image or video.

Solution: Tools and resources

Unlike light sensitivity, contrast sensitivity issues are easy to identify. Popular browsers including Chrome and Firefox now include a ‘contrast score check’ in their dev tools, which flag any page sections that aren’t visible enough. You shouldn’t rely solely on these tools, however, because the automatic scores are not always 100 percent accurate.

To address contrast sensitivity fully, refer to the Web Content Accessibility Guidelines (WCAG). They state that text, or images of text, must have a contrast ratio of at least 4.5:1. Exceptions include large text (where 3:1 is sufficient), incidental or purely decorative text, and logotypes, where the text is part of a brand name.

To summarize some of the main points:

  • Addressing contrast sensitivity shouldn’t be confused with changing color schemes. It’s all about ensuring that people can read webpage text by providing the optimal contrast between text and its background.
  • Larger text has a lower contrast criterion. This is because larger text is, by nature, easier to read. Text of at least 18 points, or 14 points bold, is considered “large text.”
  • Contrast requirements also apply to images of text, not just fonts — for example, a JPG image of a brand logo.

To ensure your website passes the contrast test, check out this free Contrast Checker tool by WebAIM.
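The 4.5:1 threshold can also be checked programmatically. The sketch below implements the WCAG 2.0 contrast-ratio formula (relative luminance of each color, then (L1 + 0.05) / (L2 + 0.05)); the function names are mine, not WebAIM’s:

```javascript
// Relative luminance of a "#rrggbb" color, per WCAG 2.0.
function relativeLuminance(hex) {
  const [r, g, b] = [1, 3, 5]
    .map(i => parseInt(hex.slice(i, i + 2), 16) / 255)
    .map(c => (c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)));
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between two colors (always >= 1).
function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// White on black is the maximum possible ratio, 21:1.
console.log(contrastRatio('#ffffff', '#000000').toFixed(1)); // → "21.0"
```

For example, mid-gray #777777 on white comes out just under 4.5:1 and fails the AA criterion, while the slightly darker #767676 passes.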

Color blindness

Color blindness (or color vision deficiency) makes it difficult (or impossible) for affected people to identify or distinguish between specific colors.

Imagine a colorblind person visits an online store to buy a red t-shirt but, because of red-green color blindness, can’t tell the red shirts from the green ones. How would this visitor know which one to buy?

Solution: Translate colors to text

In some cases it will be impossible to adapt an image to appear correctly for someone with color vision deficiency. For these, the options are to provide chat/live support, text prompts, or, ideally, both.

To provide text prompts, we add the name of the color as text to images using the alt attribute. So instead of saying that an image is a ‘t-shirt,’ we would explicitly state that it is a red t-shirt. The more specific, the better. Yellow is somewhat helpful, but ‘mustard yellow’ is much more descriptive.
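In HTML, that just means writing the color into the alt attribute (the filename here is illustrative):

```html
<!-- Color stated explicitly in the alternative text -->
<img src="products/tshirt-123.jpg" alt="Mustard yellow cotton t-shirt">
```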

This will involve some light coding, or you can use one of the image management tools on the market that help automate this process.

Solution: Pattern your colors

Another option is to provide a unique pattern to represent each different color on your webpage. The standard approaches are not straightforward: designers either need to code something manually or use image-editing software like Photoshop or GIMP to create an extra asset for each type of color blindness.

The free version of my company’s own software includes a colorblind-assist transformation (e_assist_colorblind) to make this process easier. For example, you can add stripes to highlight the difference between hot (red) and cold (green) colors.

Hopefully this has given you some ideas for how to address the most common issues around visual accessibility. By focusing on this important issue, you’ll not just avoid litigation, but you’ll attract more site visitors, raise engagement, and potentially boost revenue as well. 

Published January 3, 2020 — 08:00 UTC


Accessibility is essential for developers and businesses who want to build a high-quality website and reach as many people as possible. It is often referred to as a11y, an abbreviation of the word accessibility itself:

  • A is the beginning of the word,
  • 11 represents the eleven letters in the middle, and
  • y is the end of the word.

When you work with accessibility in mind, it will improve the experience of everyone who uses your website.

A great website should be accessible to as many people as possible, regardless of their hardware, software, language, location, or ability. Many companies and developers are not sure why they should invest their time in making their website and digital content accessible. But it is important, and it can even be a competitive advantage for their business. Here are some of the reasons:

  1. Leads to good practices

    Accessible development results in semantic, better-structured markup, which also improves SEO.
  2. Helps to avoid legal concerns

    In 2000, the Sydney Organizing Committee for the Olympic Games had to pay AUD 20,000 because its official website failed to provide adequate accessibility to a blind user. Read the full story here.
  3. Enhance your brand

    Implementing accessibility improves a company’s public image through its diversity and inclusion efforts.
  4. Extend market reach

    The 1.3 billion people (15% of the world’s population) with a disability, as well as people with low-bandwidth connections, older technologies, or temporary impairments (like losing their glasses), will be able to use your website.

Disabilities can affect people in different ways. In this article, we will discuss web accessibility for three types of disability: vision disability, hearing impairment, and mobility and cognition disability.

Vision disability


Most web content and navigation is designed for visual interaction, and I’m sure your website already has a beautiful and functional design. However, to offer accessibility to people with various types of visual disabilities, you need to take a couple of things into consideration.

  1. Color blindness

    The most common form is red-green color blindness; there are also cases of monochrome vision. Check the contrast of your design by desaturating it.
  2. Poor eyesight

    You may often see people with poor eyesight squinting while reading an article on their phone or computer. To give them a better experience, avoid fixed text sizes: let the size adjust to the screen, and give users the option to resize the text as well.
  3. Low vision

    Low-vision conditions like cataracts and glaucoma can cause blurry vision. Make sure your website works with screen-reader apps, for instance by using captions or the alternative text (alt) attribute on your HTML image elements. To simulate low vision, add a blur filter to your website content; at that point, you won’t be able to see the content of your website. Now try to navigate your site with a screen reader: hit Cmd+F5 to enable the built-in VoiceOver tool on a Mac, or download the NVDA screen reader for Windows.
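Two of the tips above map directly onto a few lines of CSS: relative units for resizable text, and a temporary blur filter to simulate low vision while testing (a sketch for testing purposes, not production code; the class name is mine):

```css
/* Relative units: text scales with the user's browser/OS settings */
body {
  font-size: 1rem;   /* not a fixed px value */
  line-height: 1.5;
}

/* Low-vision simulation, for testing only: blur the whole page,
   then try navigating with a screen reader. */
body.simulate-low-vision {
  filter: blur(4px);
}
```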

Other things you can do to improve the visual accessibility of your website are using real text instead of images of text, and avoiding scrolling/marquee and flashing text.

Hearing impairment


If you have audio-visual content on your website, try to add a visual aid. Consider the following:

  • For podcast content, you can add a transcript.
  • For video content, you can add both subtitles and a transcript.
  • You should also consider setting the audio levels carefully and consistently.
  • Allow for mono audio, if possible, to support users with a hearing aid device.
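For video, the native HTML track element handles subtitles; a minimal sketch (the file names are illustrative):

```html
<video controls src="talk.mp4">
  <!-- WebVTT subtitle track, rendered by the browser's built-in UI -->
  <track kind="subtitles" src="talk.en.vtt" srclang="en" label="English" default>
</video>
```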

Mobility and cognition disability


There are several types of mobility and cognition disabilities, such as cerebral palsy, stroke, and Parkinson’s disease. To make your website accessible to people with these disabilities, you should:

  • Provide keyboard-friendly content. Make sure users can access the entire content of your website using a keyboard. If possible, support an alternative to drag-and-drop or any gestural input.
  • Provide enough time for users to complete an action (read or click) on slider/carousel content.
  • Minimize the number of steps needed to accomplish a task, for example by adding keyboard shortcuts and simplifying the UI.
  • Provide a large clickable area.
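Native elements like button are keyboard-operable and focusable for free; if you must use a custom control, you have to restore that behavior yourself. A sketch (the ID and class name are mine, not from any standard):

```html
<div role="button" tabindex="0" id="save-button" class="large-target">Save</div>
<script>
  // Native buttons respond to Enter and Space; a div does not,
  // so we add that handling back manually.
  const saveButton = document.getElementById('save-button');
  saveButton.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      saveButton.click();
    }
  });
</script>
<style>
  /* A generously sized clickable area */
  .large-target { min-width: 44px; min-height: 44px; padding: 12px; }
</style>
```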

Depending on the target audiences of your website and where you live, there will be a couple of different guidelines you can adopt.

  1. WCAG (Web Content Accessibility Guidelines)

    WCAG is the international accessibility guidelines from W3C WAI (Web Accessibility Initiative).
  2. US Rule Section 508 standards

    If you work with a US federal agency, you should follow these standards as well.

WCAG 2.0 has been gaining traction globally as the go-to set of web accessibility requirements, most likely because it was created by the world’s experts, the documents are well organized, and it covers a wide range of disabilities.

Note: There is a newer version, WCAG 2.1, published on 5 June 2018. It extends WCAG 2.0 to 13 guidelines and 78 success criteria. I will update the list to WCAG 2.1 later.

WCAG 2.0 was published on 11 December 2008. It has four principles as the foundation for web accessibility:

  1. Perceivable

    Users must be able to perceive the information being presented.
  2. Operable

    Users must be able to operate the interface.
  3. Understandable

    Users must be able to understand the information as well as the operation of the user interface.
  4. Robust

    Users must be able to access the content as technologies advance.

Across these principles, there are 12 guidelines and 61 testable success criteria provided to allow WCAG 2.0 to be used where requirements and conformance testing are necessary, such as in design specification, purchasing, regulation, and contractual agreements. In order to meet the needs of different groups and different situations, three levels of conformance are defined:

  1. Level A

    Most basic web accessibility features
  2. Level AA

    Addresses the biggest and most common barriers for users
  3. Level AAA

    The highest level of accessibility

My recommendation for the conformance level you should meet on your website is Level AA, which satisfies all the Level A and Level AA success criteria. The checklist below is designed to fulfill this level.

You can see the checklist here.

If you already have a Notion account, you can duplicate the page to check the list and adjust it to your needs.

For the official list of WCAG 2.0 requirements, click here.


Computer games are a big deal: they are part of our culture, they provide and can promote social inclusion, they can educate (from encouraging and supporting player creativity through making modifications and new game levels, to being used as teaching aids) and they are a hugely popular means of recreation.

Just as with other walks of life, accessibility in computer games — and, importantly, their surrounding communities of online play, modifications and level design — is something for which we should strive, and many developers are. Historically, games have been regarded as a very hard accessibility problem to solve. It’s true there are some compelling challenges, but huge strides have been made, with the potential for game accessibility to become the norm.

In the UK, the games industry is significantly bigger than the film and music industries combined. In the US, the games industry is as big as Hollywood, but it doesn’t necessarily take huge resources to provide accessibility, and a significant number of indie developers and studios have been working on various accessibility features, from remappable controls, to variable font size and subtitles, with great success.

What are accessible games?

Some of the earliest computer games were fairly accessible by nature: interactive fiction takes advantage of the greatest rendering hardware known — the human brain — to create immersive and compelling worlds in which players can explore, vanquish monsters, maybe nick a bit of treasure, and generally save the day. However, it’s important to consider that these games do require good reading skills, and that their players be able to type!

Text-based games reached their peak of popularity from the 1970s to the 1980s, though online variants, such as multi-user dungeons (MUDs), were popular into the 1990s. By this time, however, graphics had pervaded every genre, including adventures. Most games had graphical user interfaces (GUIs), and it was increasingly common for them to require precise timing and deft cognitive and motor skills from their players. Although graphical games had only reached popularity in the late 1970s, these features and expectations were now the only mainstream option. Barriers to entry were created, whether due to a disability or a situational impairment of the otherwise-gamer. Games became challenging, even in ways they weren’t intended to be.

But, as game accessibility consultant Ian Hamilton points out, games have to be challenging in order to be rewarding experiences for their players. So, does this mean accessible games are a pipe dream? Of course not! There are many reasons why people may be unable to play games, but there are also many ways to present and interact with game worlds.

From the early days of computer gaming, there have been efforts to make games accessible, including specialist controller hardware such as sip/puff devices (some made by mainstream games companies) and assistive hardware and software features such as speech synthesis to assist blind people. You can find some excellent examples documented on the Accessible Gaming History Exhibit page at OneSwitch (specifically, check out the Accessible Gaming Displays PDF).

Some of the early specifically-made accessible games fall under the broader category of “Audiogames” (as opposed to video games) — these are games specifically designed with sound as the main means of expressing their world to the player. Some very imaginative, immersive and well-respected games were created by small studios, even one-person companies, many specifically for gamers who are blind (e.g. Monkey Business, Grizzly Gulch, and Chillingham). Others, such as Papa Sangre and The Nightjar were designed for all, with the high-tech audio engines and lack of video adding to the atmosphere of the game, and achieving significant mainstream attention at the time. These are great games, and fill a vital role. In parallel, however, the goal of accessibility in the majority of, if not all, games is an alluring one.

In recent years we’ve seen the benefits of much hard awareness-raising work by organisations such as the IGDA Game Accessibility Special Interest Group and many others, as well as corresponding effort from developers to make their games more accessible.

Likewise, an increasing number of games, from indie titles to triple-A blockbusters, are being released with at least some accessibility features, such as making important game areas easier to distinguish (as in FTL: Faster Than Light), a variable font size for their user interfaces, or controller button remapping (as in Uncharted 4).

But accessibility features don’t only help people who experience a permanent disability; some accessibility features are more commonly used than you might think. In Assassin’s Creed Origins, 60% of players turned subtitles on, so in the following game Ubisoft enabled them by default, and 95% of Assassin’s Creed Odyssey players left subtitles on.

So-called situational disabilities may be particularly prevalent for mobile/casual gaming, where players may be in a bright, busy or noisy environment, or not be able to use both hands to interact with the game.

Content accessibility

A great deal can be achieved with careful attention to content design. Games, more than web sites or apps, are all about their content. Simple but fundamental techniques, like ensuring that information conveyed using colour is also conveyed by other means, such as shape, can have a profound impact for many people, even those who don’t regard themselves as having a disability. (Some great examples of using more than colour can be found on the Game Accessibility Guidelines site.)

For example, the following two symbols differ in both shape and colour, thus providing two ways to tell them apart. In a puzzle game, this can empower and include significantly more players than if colour alone had been used as the differentiator.
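In markup terms, this might look like two status markers that differ in glyph as well as colour (the class names and colours here are illustrative):

```html
<style>
  .gem-red   { color: #c62828; }
  .gem-green { color: #2e7d32; }
</style>
<!-- Shape AND colour differ, so colourblind players can still tell them apart -->
<span class="gem-red" aria-label="red gem">■</span>
<span class="gem-green" aria-label="green gem">●</span>
```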

The key thing here is that this is accessible without the user having to turn on an accessibility setting — thus promoting inclusion out-of-the-box.

Spatial audio can provide surprisingly rich information about the environment the player is in (a giant echoing chasm, or tight quarters on a spaceship), and about where they should explore. Attentive audio design really can afford accessibility: SightlessKombat is a Killer Instinct player who rose to the top tier of players, despite not being able to see. However, access to games’ user interfaces can still pose problems — most games are unable to interface with assistive technologies such as screen-readers. In many cases, blind gamers have to learn the buttons to press in order to navigate through the user interface to get into the game.

Of course, using spatial audio is an enabler for some people, but others (whether they be in a noisy environment, or perhaps have trouble hearing the game) may struggle to get the most out of it. Visual cues can also be used to convey information that is also provided through sound. Examples of this can be found in Half-Life, which uses visual effects to indicate the direction from which the player is taking damage, and “Everybody’s gone to the rapture”, which can visually highlight objects emitting relevant audio.

Everybody’s gone to the rapture can use visual patterns (in this case concentric circles) to highlight objects in the scene
(Example from the Game Accessibility Guidelines)

There are several sets of guidelines to which game developers can refer for help and advice on content design decisions that can afford accessibility to various different groups of people — check out the references section at the end for more info.

What’s missing?

Sometimes it’s not possible to provide content to cater for all situations. (In fact this is partly why closed captions were introduced to the Source engine: to allow the games to be marketed in areas where the developers didn’t have the resources to provide full localised character voice recordings, as recounted in the GamesCC Interview with Marc Laidlaw from Valve.)

Even if we wanted to support all possible choices users might have for reading the game’s user interface and using their preferred input devices, then we’d have to provide the following content…

  • Audio for every UI element…
  • …in several languages
  • …at several speeds
  • Make sure it’s navigable with a keyboard…
  • …and a mouse
  • …and works with a controller
  • …and with a single switch

There’s also the fact that sometimes, content comes not from the game’s developers directly, but from other players. This could include communications from other players (or maybe even procedurally-generated content from the game, for which pre-recording isn’t possible). It’s vital that people with disabilities are able to take part in such communication, and it is also now a legal requirement in the US that communications functions in games (including the user interfaces necessary to reach them) are accessible (Ian Hamilton’s blog has more info on the 21st Century Communications and Video Accessibility Act (CVAA)).

Whilst content is essential to the game experience and ensuring it’s compelling and enjoyable, it can only take us so far. The player still needs to be able to navigate that content, particularly the game’s user interface, and understand and interact with it in a way that works for them. With websites and apps, users have access to various tools that can provide them with such access…

Assistive technologies and user interface accessibility

Screen-readers, screen-magnifiers, alternative modes of visual and auditory presentation and the ability to use different input devices are common in the mainstream world of desktop and mobile websites and apps. These assistive technologies are able to interpret information coded into websites and apps, and expose that to their users, e.g. by speaking that content, or expressing it as Braille, or in larger text, for example.

When native apps use standard widgets/controls provided by the Operating System (OS), they’re accessible because those controls automatically come with the required accessibility information. The screen-reader (for example) sits between the OS and apps, and can query for this information.


Native app
Accessibility layer → Assistive technology
Operating system

But games, in order to present a distinct and self-contained identity, and to promote immersion and entertainment, almost always use custom user interface elements that are entirely graphical in nature (even words on the screen are rendered as pixels ultimately, and the underlying text is not exposed to the Operating System).

Some consoles and gaming platforms are beginning to provide accessibility APIs such as text-to-speech (e.g. Xbox Narrator), which is excellent, though it’s still early days for such features, and they’re not available on all platforms.

The Unreal and Unity engines are those on which the majority of new games are based, and Unreal has recently started providing preliminary screen-reader support directly, with support from Unity expected to follow. This is excellent news for the industry, and is the path most games will likely take towards improved accessibility in future. In parallel, I have been wondering whether all the existing infrastructure we have in browsers might help us bridge the user interface gap mentioned above, and support accessibility when games are delivered via the web, by any engine…

The web and user interface accessibility

Assistive technologies work in a similar way with web sites and web apps as with native apps. The browser provides an accessibility tree that exposes various properties of the elements in the page’s Document Object Model (DOM) tree (such as the types of controls they represent, or their text content), mirroring its structure. The accessibility tree is then picked up by assistive technologies.

Web site/app
Accessibility tree → Assistive technology
Browser


The best (simplest) way to provide accessibility on the web is to use the standard HTML elements for the job. Using the standard HTML elements automatically brings the needed accessibility information (the purpose of the element; its content; its label, state and value, if it’s a form control). The “native” HTML elements also provide accessible keyboard handling by default, such as Space to activate a button and Enter to follow a link.

The following code shows a standard link, a standard button, and a “landmark region” demarcating the primary content of the page (which makes it easy for screen-reader users to find).

<main>
  <a href="…">Sazerac recipe</a>
  <button>This is important.</button>
</main>

However, if we’d used elements with no inherent meaning, no accessibility information would be there to convey. This sometimes happens when web developers make custom controls, instead of using the native elements. However, we could add the semantics using ARIA attributes. This fills in the gaps in the accessibility tree for assistive technologies. (Though that’s all it does, so keyboard handling code would need to be added manually to fill in that which would’ve been provided by the native elements above.)

The following code is semantically equivalent to the native HTML elements given above.

<div role="main">
  <div role="link" tabindex="0">Sazerac recipe</div>
  <div role="button" tabindex="0">This is important.</div>
</div>

Assistive technologies (e.g. screen-readers) can pick up on these cues, but how is this relevant to games?

Web game user interface accessibility proof-of-concept

Somewhere in the code behind the game, the intent of various user interface controls, and the text displayed, is present in a form that could be made accessible. The challenge is how to bridge from this information to players’ assistive technologies.

Many games these days, especially educational games, are developed for the web or web-like platforms (see the Wikipedia article on web games). In addition, the technology exists to compile native code into a format that runs efficiently in the browser: WebAssembly. This technology can be used to achieve near-native speeds — in fact, one of the early prototypes was the then-latest Unreal game engine running in the browser! (Unreal Engine 3 in-browser; Unreal Engine 4 in-browser).

Instead of compiling a game to run natively on the computer’s hardware…

Drawn diagram of .C code passing through a compiler to a chip
Source code, e.g. in C, is compiled directly to a form that will run on the computer’s hardware

The native code can be compiled to WebAssembly binary format (a “.wasm” file) and run in a browser alongside existing JavaScript code…

Drawn diagram of code passing through multiple steps to a browser with javascript
Source code, e.g. in C or Rust, is compiled to WebAssembly and then run in a browser alongside the general accessibility library (JavaScript code that creates and manages the proxy UI elements)

The browser gives us a ready-made opportunity to expose the accessibility information. It works as follows…

  • A library of JavaScript code sits in the browser and provides a simple API to create HTML elements that match the visual-but-not-semantic user interface controls in the game.
  • We add a small amount of accessibility code to the existing native game’s source code, which is included only when compiling to WebAssembly. This provides the information to the JavaScript in the browser to create the proxy UI elements.
  • The JavaScript accessibility layer code moves focus around the proxy objects, in sync with the focus management that the game is doing visually in response to the player’s inputs. This causes the player’s screen-reader to announce the proxy widgets at the same time the in-game widgets are displayed on-screen.
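A sketch of what the browser side of such a layer might look like (all names here are hypothetical, not taken from the actual proof-of-concept):

```html
<!-- Visually hidden container holding the proxy UI elements -->
<div id="proxy-ui" class="visually-hidden"></div>
<script>
  const proxyRoot = document.getElementById('proxy-ui');

  // Called (e.g. from the WebAssembly side) when the game shows a menu item.
  function addProxyButton(id, label) {
    const button = document.createElement('button');
    button.id = 'proxy-' + id;
    button.textContent = label;
    proxyRoot.appendChild(button);
  }

  // Called when the game moves its visual highlight, so the
  // player's screen-reader announces the matching proxy element.
  function syncFocus(id) {
    document.getElementById('proxy-' + id).focus();
  }
</script>
```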

Thus, it seems like the game’s user interface has been made accessible, as it is conveyed by the user’s screen-reader. Here are some screengrabs of the system in action, with the rendered game on the left and the proxy UI elements (which would normally be visually hidden) on the right…

Screenshot of a game menu next to an unstyled HTML rendering of the same menu.
Example game main menu, featuring links to “New Game”, “Options” and “Help” menus, and an “Exit” button, with the first item in both the rendered game menu and the proxy UI area focused. All of the options are grouped in a fieldset element with a legend of “Main Menu”.

When the user presses the down arrow, the next menu item is highlighted in-game, and the next proxy button element is focused behind the scenes, causing the player’s screen-reader to announce the change.

A screenshot of the same game menu and unstyled HTML fields
The same image as above, with the second item, “Options”, focused

In theory, issues such as focus handling and keyboard interaction should be fairly easy to solve, even if they are not accessible out-of-the-box, as the game UI has to be operable as-is, and usually this is supported by the keyboard (or a game controller, which could be emulated by a keyboard within the host OS). The main goal of the in-browser accessibility layer is to create the proxy objects for the UI that the user’s assistive technologies can understand.

The figures above show the use of links in the proxy UI area to represent sub-menus and a button to represent an immediate action. Input controls are required for a fully interactive UI too. The following figures demonstrate custom rendered controls that map to textboxes and sliders, with their labels appropriately associated.

Screenshot of another styled game menu next to the same controls in unstyled HTML
Player options screen showing textboxes for specifying the player’s name and team name
Screenshot of a third styled game menu next to the same controls in unstyled HTML
Sound options screen showing slider/range controls proxying the custom volume controls used in-game

Potential efficiency improvements

With this approach, real DOM elements must be created on the host web page. Whilst they can be hidden, we know they’re there. Also, the code used to create them is JavaScript, which is slower than the WebAssembly code and the browser’s internal APIs that are called to actually create and expose these proxy elements to assistive technologies.

There’s an upcoming standard called the Accessibility Object Model (AOM) that provides much more fine-grained control over the accessibility tree exposed by the browser. In fact, it removes the need for real elements in the DOM, so the proxy elements could be bypassed entirely. What’s more, the Accessibility Object Model APIs are implemented in the browser’s native code, and are thus more efficient. On top of all of this, there’s a new mechanism for WebAssembly code to call directly into the APIs provided by the browser, bypassing the need for JavaScript completely. This could make the whole UI-proxying process vastly more efficient.

There is a consequence of this, however: the code to manage the Accessibility Object Model would have to be moved into the WebAssembly sphere, meaning that it would have to be provided by the game — or, most likely, the engine/middleware being used. This is not really a problem, as it makes sense for an engine to provide this code in a real-world scenario. Effectively decreasing the distance between the game developer and the accessibility code means there’d likely be opportunities to make the authoring tools more supportive of creating accessible user interfaces. For example, a lot could potentially be automated.

Next steps

These explorations demonstrate that, for games compiled to the web, it’s possible to make use of people’s existing assistive technologies such as screen-readers to expose any accessibility information that the game might provide.

Native games should use the accessibility support forming in the main game engines, but in the cases where games are delivered via the web, or are based on different technology stacks, this approach may be of help. It certainly demonstrates just how far browsers and assistive technologies have come in terms of performance and capability in recent years. There are two areas I’m continuing to explore…

  • Adapting existing in-game GUIs and games to use this approach. By doing this, a standard and minimal “accessibility layer” could be created that could be adopted to convey UI semantics to the browser, with minimal intervention from the developer.
  • Investigating how use of the upcoming Accessibility Object Model (AOM) standard might make things more efficient, and any other possibilities it might open up.

I’ll be talking about the latest developments in this work at the CSUN Accessibility conference in March 2020 — if you’re going to be in town, pop in and say hello, and if you can’t make it, then check out The Paciello Group ’blog, where we’ll post the slides afterwards.


Thanks to The Paciello Group for supporting my W3C membership and attendance of the W3C Workshop on Web Games. Thanks also to the Active Game Accessibility research project and to the W3C Accessible Platform Architectures Working Group for contributing to the position paper we submitted at that workshop, and thanks to the workshop attendees for their interest and advice.



It’s awesome to build sites that are inclusive and accessible to everyone. There are at least six key areas of disability we can optimize for: visual, hearing, mobility, cognition, speech and neural. Many tools and resources can help here, even if you’re totally new to web accessibility.

Over 1 billion people live with some form of disability. You might have been in a loud room at some point trying to hear the conversation around you or in a low-lighting condition trying to find something in the dark. Do you remember the frustration you felt with that circumstance? Now, imagine if that temporary condition was permanent. How different would your experience on the web be?

To be accessible, sites need to work across multiple devices with varying screen sizes and different kinds of input, such as screen readers. Moreover, sites should be usable by the broadest group of users, including those with disabilities. Here is a sample of just a few disabilities your users may have:

  • Vision: low vision, blindness, color blindness, cataracts, sun glare

  • Hearing: hard of hearing, deafness, noise, ear infection

  • Mobility: broken arm, spinal cord injury, limited dexterity, hands full

  • Cognitive: learning disabilities, sleepiness or distraction, migraine, autism, seizure

  • Speech: ambient noise, sore throat, speech impediment, inability to speak

  • Neural: depression, bipolar disorder, anxiety

Visual issues range from an inability to distinguish colors to no vision at all.

  • Ensure a minimum contrast ratio threshold is met for text content.

  • Avoid communicating information using solely color and ensure that all text is resizable.

  • Ensure all user interface components can be used with assistive technologies such as screen readers, magnifiers and braille displays. This entails ensuring that UI components are marked up such that accessibility APIs can programmatically determine the role, state, value and title of any element.
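The “minimum contrast ratio threshold” above is defined numerically by WCAG: 4.5:1 for normal-size text and 3:1 for large text, computed from the relative luminance of the two colors. A small sketch of that computation:

```javascript
// Compute the WCAG contrast ratio between two sRGB colors given as
// [r, g, b] arrays of 0-255 values. Each channel is linearized, the
// channels are combined into relative luminance, and the ratio is
// (lighter + 0.05) / (darker + 0.05), ranging from 1:1 up to 21:1.
function luminance([r, g, b]) {
  const lin = (c) => {
    c /= 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

Black on white scores the maximum 21:1, while mid-grey #777 on white lands just below 4.5:1 and so fails for normal-size text.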

Tip: the element inspector in Chrome, Edge and Firefox DevTools displays a tooltip of CSS properties that includes a quick check of the color contrast ratio.

Hovering over an element you are inspecting will display a summary including color contrast ratio

I personally live with low vision, and am embarrassed to say I’m that person who always zooms in on sites, their DevTools and the terminal. While supporting zoom is almost never at the top of anyone’s list, optimizing for low-vision users is always appreciated.

Hearing impairments mean a user may have trouble hearing sound emitted from a page.

  • Make the content understandable using text alternatives for all content that is not strictly text.

  • Ensure you test that your UI components are still functional without sound.

VoiceOver for Mac being used to navigate in Safari

Mobility issues can include the inability to operate a mouse, a keyboard or touch-screen.

  • Make your UI components functionally accessible from the keyboard for any action one would otherwise perform with a mouse.

  • Ensure pages are correctly marked up for assistive technologies; these users may use technologies such as voice control software and physical switch controls, which tend to use the same APIs as other assistive technologies like screen readers.
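For example, a control built from a non-interactive element needs a role, a tabindex and key handling before keyboard and assistive-technology users can operate it. The helper below is an illustrative sketch; prefer a native button element whenever the markup allows it.

```javascript
// Native buttons respond to both Enter and Space; a button-like custom
// control should do the same.
function isActivationKey(key) {
  return key === 'Enter' || key === ' ';
}

// Make a generic element behave like a button for keyboard and
// assistive-technology users.
function makeButtonLike(el, onActivate) {
  el.setAttribute('role', 'button'); // announced as a button
  el.tabIndex = 0;                   // include it in the keyboard tab order
  el.addEventListener('click', onActivate);
  el.addEventListener('keydown', (event) => {
    if (isActivationKey(event.key)) {
      event.preventDefault(); // stop Space from scrolling the page
      onActivate(event);
    }
  });
}
```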

Cognitive issues mean a user may require assistive technologies to help them with reading text, so it’s important to ensure text alternatives exist.

  • Be mindful when using animations. Avoid visual presentation that is repetitive or flashing, as this can cause issues for some users.

The prefers-reduced-motion CSS media query allows you to limit animations and autoplaying videos for users who prefer reduced motion.

If the user has expressed a preference for reduced motion, don't animate buttons:

@media (prefers-reduced-motion: reduce) {
  button {
    animation: none;
  }
}

  • Avoid interactions that are timing-based.
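The same preference can be read from JavaScript via window.matchMedia, which is useful for script-driven animations; the duration-picking helper here is my own illustration:

```javascript
// Choose an animation duration (in ms) based on the user's motion
// preference: zero when reduced motion is requested.
function animationDuration(prefersReduced, normal = 200) {
  return prefersReduced ? 0 : normal;
}

// In a browser, read the preference with matchMedia (skipped under Node).
if (typeof window !== 'undefined' && window.matchMedia) {
  const query = window.matchMedia('(prefers-reduced-motion: reduce)');
  let duration = animationDuration(query.matches);
  // React if the user changes the OS-level setting while the page is open.
  query.addEventListener('change', (event) => {
    duration = animationDuration(event.matches);
  });
}
```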

This may seem like a lot of bases to cover, but we’ll walk through the process for assessing and then improving the accessibility of your UI components.

Tip: Some great accessibility do’s and don’ts digital posters are available by the accessibility team for spreading awareness of best practices in your team:

Thumbnail of accessibility posters

Are your UI components accessible?

Summary (tl;dr)

When auditing your page’s UI components for accessibility, ask yourself:

  • Can you use your UI component with the keyboard only? Does it manage to focus and avoid focus traps? Can it respond to the appropriate keyboard events?

  • Can you use your UI component with a screen reader? Have you provided text alternatives for any information which is presented visually? Have you added semantic information using ARIA?

  • Can your UI component work without sound? Turn off your speakers and go through your use cases.

  • Can it work without color? Ensure your UI component can be used by someone who cannot see colors. A helpful tool for simulating color blindness is a Chrome extension called SEE (try all four forms of color blindness simulation available). You may also be interested in the similarly useful Daltonize extension.

  • Can your UI component work with high-contrast mode enabled? All modern operating systems support a high-contrast mode. The High Contrast Chrome extension can help here.
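High-contrast preferences can also be detected in CSS: forced-color palettes (such as Windows High Contrast Mode) are exposed via the forced-colors media feature. A sketch using system color keywords; the .card selector is hypothetical:

```css
/* In forced-colors mode the browser overrides author colors, which can
   remove borders and backgrounds; restore boundaries with system colors. */
@media (forced-colors: active) {
  .card {
    border: 1px solid CanvasText;
  }
}
```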

Native controls (such as