About The Author

Rachel Andrew is not only Editor in Chief of Smashing Magazine, but also a web developer, writer and speaker. She is the author of a number of books, including …

In a new series, Rachel Andrew breaks down the CSS Grid Layout specification. This time, we take a detailed look at what happens when you create a grid container and the various properties that can be applied to the container to shape your grid.

This is the start of a new series here at Smashing Magazine concentrating on CSS Grid Layout. While Grid has been available in browsers since 2017, many developers won’t have had a chance to use it on a project yet. There seem to be a lot of new properties and values associated with CSS Grid Layout. This can make it seem overwhelming. However, quite a lot of the specification details alternate ways to do things, meaning that you don’t have to learn the entire spec to get started. This series aims to take you from grid novice to expert — with lots of practical usage tips along the way.

This initial article will cover what happens when you create a grid container and the various properties that you can use on the parent element to control that grid. You will discover that there are several use cases that are fulfilled only with the properties that you apply to the grid container.

In this article, we will cover:

  • Creating a grid container with display: grid or display: inline-grid,
  • Setting up columns and rows with grid-template-columns and grid-template-rows,
  • Controlling the size of implicit tracks with grid-auto-columns and grid-auto-rows.


Creating A Grid Container

Grid, like Flexbox, is a value of the CSS display property. Therefore, to tell the browser that you want to use grid layout, you use display: grid. Having done this, the browser will give you a block-level box on the element with display: grid, and any direct children will start to participate in a grid formatting context. This means they behave like grid items, rather than normal block and inline elements.

However, you may not immediately see a difference on your page. As you haven’t created any rows or columns, you have a one-column grid. Enough rows are being generated to hold all of your direct children, and they are displaying one after the other in that single column. Visually they look just like block elements.
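As a minimal sketch, all it takes to create that single-column grid is applying the property to a parent element (the class name here is illustrative):

```css
/* Any element can become a grid container.
   With no columns or rows defined, you get one column
   and as many implicit rows as there are grid items. */
.container {
  display: grid;
}
```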

You will see a difference if you have a string of text, not wrapped in an element, as a direct child of the grid container: the string will be wrapped in an anonymous element and become a grid item. Any element which is normally an inline element, such as a span, will also become a grid item once its parent is a grid container.

The example below has two block-level elements, plus a string of text with a span in the middle of the string. We end up with five grid items:

  • The two div elements,
  • The string of text before the span,
  • The span,
  • The string of text after the span.

See the Pen Grid Container: Direct children and strings of text become grid items by Rachel Andrew (@rachelandrew) on CodePen.


If you inspect the grid using the Firefox Grid Inspector, you can see the five row tracks that have been created for the items.

A single column grid with five rows
The Grid Inspector is useful to help you see how many rows have been created

You can also create an inline grid by using display: inline-grid; in this case, your grid container becomes an inline-level box. However, the direct children are still grid items and behave in the same way as grid items inside a block-level box (it is only the outer display type that changes). This affects how the grid container itself behaves alongside other boxes on the page.

This next example has a grid followed by a string of text; as this is an inline-level grid, the text can display alongside it. Inline-level things do not stretch to take up all the space in the inline dimension in the way that block-level things do.

See the Pen Grid Container: inline-grid by Rachel Andrew (@rachelandrew) on CodePen.


Note: In the future, we will be able to better describe our layout by using display: block grid in order to create our block-level container, and display: inline grid to create an inline-level container. You can read about this change to the display specification in my article, “Digging Into The Display Property: The Two Values Of Display”.

Columns And Rows

To get something that looks like a grid, we will need to add columns and rows. These are created using the grid-template-columns and grid-template-rows properties. These properties are defined in the spec as accepting a value called a track-list.

These properties specify, as a space-separated track list, the line names and track sizing functions of the grid. The grid-template-columns property specifies the track list for the grid’s columns, while grid-template-rows specifies the track list for the grid’s rows.

Some valid track-list values are as follows:

  • grid-template-columns: 100px 100px 200px; creates a three-column grid: the first column is 100px, the second 100px, the third 200px.
  • grid-template-columns: min-content max-content fit-content(10em); creates a three-column grid: the first column is the min-content size for that track, the second the max-content size. The third acts like max-content unless the content is larger than 10em, in which case it is clamped to 10em.
  • grid-template-columns: 1fr 1fr 1fr; creates a three-column grid using the fr unit. The available space in the grid container is divided into three and shared between the three columns.
  • grid-template-columns: repeat(2, 10em 1fr); creates a four-column grid with a repeating pattern of 10em 1fr 10em 1fr, as the track-list in the repeat statement is repeated twice.
  • grid-template-columns: repeat(auto-fill, 200px); fills the container with as many 200px columns as will fit, leaving a gap at the end if there is spare space.
  • grid-template-columns: repeat(auto-fill, minmax(200px, 1fr)); fills the container with as many 200px columns as will fit, then distributes the remaining space equally between the created columns.
  • grid-template-columns: [full-start] 1fr [content-start] 3fr [content-end] 1fr [full-end]; creates a three-column grid: the first and third columns have one part each of the available space, while the middle column has three parts. The lines are named by putting line names in square brackets.

As you can see there are many ways to create a track listing! Let’s have a look at exactly how these all work, with a few tips in terms of why you might use each one.

Using Length Units

You can use any length units, or a percentage to create your tracks. If the size of the tracks adds up to less than is available in the grid container, then by default the tracks will line up at the start of the container and the spare space will go to the end. This is because the default value of align-content and justify-content is start. You can space out the grid tracks, or move them to the end of the container using the alignment properties, which I explain in detail in my article “How To Align Things In CSS”.

See the Pen Grid Container: length units by Rachel Andrew (@rachelandrew) on CodePen.


You can also use the keywords min-content, max-content and fit-content(). Using min-content will give you a track that is as small as it can be without causing overflow. Therefore, when used as a column size, the content will softly wrap wherever possible, and the track becomes the size of the longest word in the column or the largest fixed-size element.

Using max-content will cause the content not to do any soft-wrapping at all. In a column, any string of text will display unwrapped, which may cause overflow.

The fit-content() function can only be used by passing in a value. That value becomes the maximum that the track will grow to. Therefore, the track will act like max-content, with the content unwrapping and stretching out until it hits the value you passed in. At that point, it will start wrapping as normal. So your track may be smaller than the value you pass in, but never larger.
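Putting the three content-based sizes side by side (the class name is illustrative):

```css
/* Three content-sized columns: as narrow as possible without
   overflow, fully unwrapped, and unwrapped but capped at 10em. */
.content-sized {
  display: grid;
  grid-template-columns: min-content max-content fit-content(10em);
}
```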

See the Pen Grid Container: min-content, max-content, fit-content() by Rachel Andrew (@rachelandrew) on CodePen.


You can find out more about sizing in Grid and other layout methods in my article “How Big Is That Box? Understanding Sizing In CSS Layout”.

If you end up with tracks that take up more space than you have in your container, they will overflow. If you use percentages then, as with percentage-based float or flex layouts, you will need to take care that the total percentage is not more than 100% if you want to avoid overflow.

The fr Unit

Grid Layout includes a method that can save you calculating percentages for yourself — track sizing with the fr unit. This unit isn’t a length, and therefore can’t be combined with calc(); it is a flex unit and represents the available space in the grid container.

This means that with a track-list of 1fr 1fr 1fr; the available space is divided into three and shared evenly between the tracks. With a track-list of 2fr 1fr 1fr, the available space is divided into four; two parts are given to track one, and one part each to tracks two and three.
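As a rough sketch of that arithmetic in JavaScript (ignoring gaps, fixed tracks and content-based minimums, which the real algorithm accounts for), each track receives its flex factor's share of the available space:

```javascript
// Distribute available space among flexible tracks, in the spirit of
// the fr unit: each track gets (its factor / sum of factors) * space.
// Simplified: ignores gaps, fixed tracks and content-based minimums.
function distributeFr(availableSpace, frValues) {
  const total = frValues.reduce((sum, fr) => sum + fr, 0);
  return frValues.map(fr => (availableSpace * fr) / total);
}

// A 600px-wide container with 2fr 1fr 1fr:
console.log(distributeFr(600, [2, 1, 1])); // [ 300, 150, 150 ]
```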

See the Pen Grid Container: fr by Rachel Andrew (@rachelandrew) on CodePen.


Something to watch out for is that what is being shared out by default is available space which is not the total space in the container. If any of your tracks contain a fixed-size element or a long word that can’t be wrapped, this will be laid out before the space is shared out.

In the next example, I removed the spaces between the words of ItemThree. This made a long unbreakable string so space distribution happens after the layout of that item has been accounted for.

See the Pen Grid Container: fr with larger content by Rachel Andrew (@rachelandrew) on CodePen.


You can mix the fr unit with fixed length tracks, and this is where it becomes very useful. For example, you could have a component with two fixed-sized columns and a center area that stretches:

See the Pen Grid Container: mixing fr units and fixed-size tracks by Rachel Andrew (@rachelandrew) on CodePen.
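A sketch of such a component (the class name is illustrative): two fixed side columns with a stretching center:

```css
.panel {
  display: grid;
  /* Fixed-size side tracks; the middle track absorbs
     whatever space remains in the container. */
  grid-template-columns: 200px 1fr 200px;
}
```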


You can have a component with one track set to fit-content(300px) and the other to 1fr. This makes for a component that can have something smaller than 300px in the first track, in which case it only takes the space it needs and the fr unit expands to take up the rest of the space.

If you add something larger (such as an image with max-width: 100%), the first track will stop growing at 300px and the fr unit takes the rest of the space. Mixing the fr unit with fit-content is a way to make some very flexible components for your site.
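Such a component could be sketched like this (the class name is illustrative):

```css
.media-object {
  display: grid;
  /* The first track sizes to its content, up to a maximum of
     300px; the second track takes whatever space remains. */
  grid-template-columns: fit-content(300px) 1fr;
}
```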

See the Pen Grid Container: mixing fr and fit-content() by Rachel Andrew (@rachelandrew) on CodePen.


The repeat() Function

Using repeat() in your track-list can save typing out the same value or values over and over again. For example, the following two lines are the same:

grid-template-columns: 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr;
grid-template-columns: repeat(12, 1fr);

When using repeat(), the value before the comma is the number of times to repeat the track-list that comes after the comma. That track-list can be multiple values. This means you can repeat a pattern of tracks.

You can use the repeat() function for part of a track-list. For example, the following line would give you a 1fr track, three 200px tracks, and a final 1fr track.

grid-template-columns: 1fr repeat(3, 200px) 1fr;

In addition to a number before the comma to indicate a fixed number of times to repeat the pattern, you can also use the keywords auto-fill or auto-fit. Using one of these keywords means that instead of a fixed number of tracks, your grid container will be filled with as many tracks as will fit.

See the Pen Grid Container: auto-fill by Rachel Andrew (@rachelandrew) on CodePen.


Using a fixed-length unit means that, unless the container is able to be exactly divided by that size, you will end up with some spare space remaining. In the example above my container is 500px wide, so I get two 200px tracks plus space at the end.

We can use another grid function to make the value a minimum, with any spare space distributed across all of the tracks. The minmax() function takes a minimum and a maximum size. With a minimum of 200px and a max of 1fr, we get as many 200px tracks as will fit and because the max is 1fr, which we already know will share out the space evenly, the extra is distributed across the tracks.
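That pattern can be sketched as follows (the class name is illustrative):

```css
.card-grid {
  display: grid;
  /* As many tracks as will fit at a 200px minimum; spare space
     is shared between them because the maximum is 1fr. */
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
}
```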

See the Pen Grid Container: auto-fill and minmax() by Rachel Andrew (@rachelandrew) on CodePen.


I mentioned there are two possible keywords: auto-fill and auto-fit. If you have enough content to fill the first row of cells, then these will behave in exactly the same way. If, however, you do not (e.g. if we remove all but one item inside the container above), then they behave differently.

Using auto-fill will maintain the available track sizing even if there is no content to go into it.

See the Pen Grid Container: auto-fill and minmax() with one item by Rachel Andrew (@rachelandrew) on CodePen.


If, instead, you use auto-fit, the empty tracks will be collapsed:

See the Pen Grid Container: auto-fit and minmax() with one item by Rachel Andrew (@rachelandrew) on CodePen.


By using the Firefox Grid Inspector, you can see that the tracks are still there, but have been collapsed to zero. The end line of our grid is still line 3 as we can fit two tracks.

A single grid item fills the container, the grid inspector highlights the column lines
The track is still there but collapsed

Named Lines

My final example above used the named lines approach. When using Grid, you always have line numbers; however, you can also name the lines. Lines are named inside square brackets. You can have multiple names for one line; in that case, a space separates them. For example, in the following track-list, all of my lines have two names.

grid-template-columns: [main-start sidebar-start] 1fr [sidebar-end content-start] 4fr [content-end main-end];

You can name your lines anything that you like, except the word span as that is a reserved word due to being used when placing items on the grid.
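As a small preview of why the names are useful (the selector is illustrative), an item can later be placed between two named lines:

```css
/* Place an item from the line named content-start
   to the line named content-end. */
.content {
  grid-column: content-start / content-end;
}
```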

Note: In the next article in this series, I’ll be talking more about line-based placement and how named lines are used. In the meantime, read my article on “Naming Things in CSS Grid Layout” to help you learn more on the topic.

The Explicit vs The Implicit Grid

When creating a grid using grid-template-columns and grid-template-rows with a track-list, you are creating what is referred to as the explicit grid. This is the grid you have defined which has the sizing you have chosen for each track.

If you have more items than will fit, or place an item so it falls outside of the bounds of the grid you have created, Grid will create tracks in the implicit grid. These implicit tracks will be auto-sized by default. We saw this implicit grid in action when I declared display: grid on the parent element and grid created rows, one for each item. I didn’t define these rows, but as there were grid items, the row tracks were created to give them somewhere to go.

You can set a size for implicit rows or columns by using the grid-auto-rows or grid-auto-columns properties. These properties take a track-list, so if you want all implicit rows to be at least 200 pixels tall but grow if there is more content, you could use the following:

grid-auto-rows: minmax(200px, auto)

If you want the first implicit row to be auto-sized, the second to be 100px tall, and so on (alternating until all of the grid items have been accommodated), you can pass in multiple values:

grid-auto-rows: auto 100px
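Putting that together as a complete rule set (the class name is illustrative):

```css
.container {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  /* Implicit rows alternate: auto-sized, then 100px, repeating
     for as long as new rows are created. */
  grid-auto-rows: auto 100px;
}
```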

See the Pen Grid Container: grid-auto-rows by Rachel Andrew (@rachelandrew) on CodePen.


Using A Grid With Auto-Placement

Creating a grid (and allowing the browser to auto-place items) gets you a long way in terms of the useful patterns you can achieve. We have not yet looked at placing items on the grid, but many layouts that make use of Grid don’t do any placement. They simply rely on placing the items in source order — one in each grid cell.

If you are new to CSS Grid, then playing with different track sizes and seeing how the items place themselves into the cells you create is a great way to start.



About The Author

Tris is a Creative Technologist at Greenwood Campbell, an Award Winning Digital Agency. He specializes in innovation in the technology sector, to connect …

Voice assistants are hopping out of emerging tech and into everyday life. As a front end developer, you already have the skills to build one, so let’s dive into the platforms.

Over the past decade, there has been a seismic shift towards conversational interfaces, as people reach ‘peak screen’ and even begin to scale back their device usage, with digital wellbeing features being baked into most operating systems.

To combat screen fatigue, voice assistants have entered the market to become a preferred option for quickly retrieving information. A well-repeated stat states that 50% of searches will be done by voice by the year 2020. As adoption rises, it’s up to developers to add “Conversational Interfaces” and “Voice Assistants” to their tool belt.


What Is A Conversational Interface?

A Conversational Interface (sometimes shortened to CUI) is any interface in a human language. It is tipped to be a more natural interface for the general public than the Graphical User Interface (GUI), which front-end developers are accustomed to building. A GUI requires humans to learn the specific syntax of the interface (think buttons, sliders, and drop-downs).

This key difference in using human language makes CUI more natural for people; it requires little knowledge and puts the burden of understanding on the device.

Commonly, CUIs come in two guises: chatbots and voice assistants. Both have seen a massive rise in uptake over the last decade thanks to advances in Natural Language Processing (NLP).

Understanding Voice Jargon

  • Skill/Action: A voice application, which can fulfill a series of intents.
  • Intent: The intended action for the skill to fulfill; what the user wants the skill to do in response to what they say.
  • Utterance: The sentence a user says, or utters.
  • Wake Word: The word or phrase used to start a voice assistant listening, e.g. ‘Hey Google’, ‘Alexa’ or ‘Hey Siri’.
  • Context: The pieces of contextual information within an utterance that help the skill fulfill an intent, e.g. ‘today’, ‘now’, ‘when I get home’.

What Is A Voice Assistant?

A voice assistant is a piece of software capable of NLP (Natural Language Processing). It receives a voice command and returns an answer in audio format. In recent years the scope of how you can engage with an assistant is expanding and evolving, but the crux of the technology is natural language in, lots of computation, natural language out.

For those looking for a bit more detail:

  1. The software receives an audio request from a user, and processes the sound into phonemes, the building blocks of language.
  2. By the magic of AI (specifically Speech-To-Text), these phonemes are converted into a string approximating the request, which is kept within a JSON payload that also contains extra information about the user, the request, and the session.
  3. The JSON is then processed (usually in the cloud) to work out the context and intent of the request.
  4. Based on the intent, a response is returned, again within a larger JSON response, either as a string or as SSML (more on that later).
  5. The response is processed back using AI (naturally the reverse, Text-To-Speech), which is then returned to the user.
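To make the shape of steps 2 and 3 concrete, here is a simplified, purely illustrative request object and intent router. The field names are not the exact schema of any real platform; each vendor has its own richer format:

```javascript
// Purely illustrative request shape — real platforms use richer schemas.
const request = {
  session: { userId: "abc123", sessionId: "xyz789" },
  request: {
    type: "IntentRequest",
    intent: { name: "WeatherIntent" },
    utterance: "what is the weather like today",
  },
};

// Fulfillment code typically routes on the resolved intent name.
function routeIntent(req) {
  return req.request.intent.name;
}

console.log(routeIntent(request)); // "WeatherIntent"
```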

There’s a lot going on there, most of which doesn’t require a second thought. However, each platform does this differently, and it’s the nuances of the platform that require a bit more understanding.


Voice-Enabled Devices

The requirements for a device to have a voice assistant baked in are pretty low: a microphone, an internet connection, and a speaker. Smart speakers like the Nest Mini and Echo Dot provide this kind of low-fi voice control.

Next up in the ranks is voice with a screen. These are known as ‘multimodal’ devices (more on these later), and include the Nest Hub and the Echo Show. As smartphones have this functionality, they can also be considered a type of multimodal voice-enabled device.

Voice Skills

First off, every platform has a different name for its voice applications. Amazon goes with ‘skills’, which I will be sticking with as a universally understood term; Google opts for ‘Actions’, and Samsung goes for ‘capsules’.

Each platform has its own baked-in skills, like asking the time, weather and sports games. Developer-made (third-party) skills can be invoked with a specific phrase, or, if the platform likes it, can be implicitly invoked, without a key phrase.

Explicit Invocation: ”Hey Google, Talk to .”

It is explicitly stated which skill is being asked for.

Implicit Invocation: ”Hey Google, what is the weather like today?”

It is implied by the context of the request what service the user wants.

What Voice Assistants Are There?

In the western market, voice assistants are very much a three-horse race. Apple, Google and Amazon have very different approaches to their assistants, and as such, appeal to different types of developers and customers.

Apple’s Siri

Device Names: ”HomePod, iPhone, iPad”

Wake Phrase: ”Hey Siri”

Siri has over 375 million active users but, for the sake of brevity, I am not going into too much detail for Siri. While it may be globally well adopted, and baked into most Apple devices, it requires developers to already have an app on one of Apple’s platforms and is written in Swift (whereas the others can be written in everyone’s favorite: JavaScript). Unless you are an app developer who wants to expand their app’s offering, you can currently skip past Apple until they open up their platform.

Google Assistant

Device Names: ”Google Home, Nest”

Wake Phrase: ”Hey Google”

Google has the most devices of the big three, with over 1 billion worldwide. This is mostly due to the mass of Android devices that have Google Assistant baked in; with regards to their dedicated smart speakers, the numbers are a little smaller. Google’s overall mission with its assistant is to delight users, and they have always been very good at providing light and intuitive interfaces.

Their primary aim for the platform is everyday use, with the idea of becoming a regular part of customers’ daily routine. As such, they primarily focus on utility, family fun, and delightful experiences.

Skills built for Google are best when they are engagement pieces and games, focusing primarily on family-friendly fun. Their recent addition of Interactive Canvas for games is a testament to this approach. The Google platform is much stricter about submissions of skills, and as such, their directory is a lot smaller.

Amazon Alexa

Device Names: “Amazon Fire, Amazon Echo”

Wake Phrase: “Alexa”

Amazon surpassed 100 million devices in 2019; this predominantly comes from sales of their smart speakers and smart displays, as well as their ‘Fire’ range of tablets and streaming devices.

Skills built for Amazon tend to be aimed at in-skill purchasing (ISP). If you are looking for a platform to expand your e-commerce/service, or to offer a subscription, then Amazon is for you. That being said, ISP isn’t a requirement for Alexa skills; they support all sorts of uses, and are much more open to submissions.

The Others

There are even more Voice assistants out there, such as Samsung’s Bixby, Microsoft’s Cortana, and the popular open-source voice assistant Mycroft. All three have a reasonable following, but are still in the minority compared to the three Goliaths of Amazon, Google and Apple.

Building On Amazon Alexa

Amazon’s ecosystem for voice has evolved to allow developers to build all of their skills within the Alexa console, so, as a simple example, I am going to use its built-in features.


Alexa deals with the Natural Language Processing and then finds an appropriate Intent, which is passed to our Lambda function to deal with the logic. This returns some conversational bits (SSML, text, cards, and so on) to Alexa, which converts those bits to audio and visuals to show on the device.

Working on Amazon is relatively simple, as they allow you to create all parts of your skill within the Alexa Developer Console. The flexibility is there to use AWS or an HTTPS endpoint, but for simple skills, running everything within the Dev console should be sufficient.

Let’s Build A Simple Alexa Skill

  1. Head over to the Amazon Alexa console, create an account if you don’t have one, and log in.
  2. Click Create Skill, then give it a name.
  3. Choose Custom as your model.
  4. Choose Alexa-Hosted (Node.js) for your backend resource.

Once it is done provisioning, you will have a basic Alexa skill. It will have your intent built for you, and some backend code to get you started.

If you click on the HelloWorldIntent in your intents, you will see some sample utterances already set up for you. Let’s add a new one at the top. Our skill is called Hello World, so add Hello World as a sample utterance. The idea is to capture anything the user might say to trigger this intent. This could be “Hi World”, “Howdy World”, and so on.

What’s Happening In The Fulfillment JS?

So what is the code doing? Here is the default code:

const HelloWorldIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'HelloWorldIntent';
    },
    handle(handlerInput) {
        const speakOutput = 'Hello World!';
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};
This is utilizing the ask-sdk-core and is essentially building JSON for us. canHandle lets the SDK know this handler can deal with a particular request, specifically the ‘HelloWorldIntent’. handle takes the input and builds the response. What this generates looks like this:


    "body": {

        "version": "1.0",

        "response": {

            "outputSpeech": {

                "type": "SSML",

                "ssml": "Hello World!"


            "type": "_DEFAULT_RESPONSE"


        "sessionAttributes": {},

        "userAgent": "ask-node/2.3.0 Node/v8.10.0"



We can see that speak outputs the SSML in our JSON, which is what the user will hear spoken by Alexa.

Building For Google Assistant


The simplest way to build Actions on Google is to use their AoG console in combination with Dialogflow. You can extend your skills with Firebase but, as with the Amazon Alexa tutorial, let’s keep things simple.

Google Assistant uses three primary parts: AoG, which deals with the NLP; Dialogflow, which works out your intents; and Firebase, which fulfills the request and produces the response that will be sent back to AoG.

Just like with Alexa, Dialogflow allows you to build your functions directly within the platform.

Let’s Build An Action On Google

There are three platforms to juggle at once with Google’s solution, which are accessed by three different consoles, so tab up!

Setting Up Dialogflow

Let’s start by logging into the Dialogflow console. Once you have logged in, create a new agent from the dropdown just below the Dialogflow logo.

Give your agent a name, and in the ‘Google Project’ dropdown, keep “Create a new Google project” selected.

Click the create button and let it do its magic. It will take a little bit of time to set up the agent, so be patient.

Setting Up Firebase Functions

Right, now we can start to plug in the Fulfillment logic.

Head on over to the Fulfillment tab, tick to enable the inline editor, and use the JS snippets below (the first goes in index.js, the second in package.json):


'use strict';

// So that you have access to the dialogflow and conversation object
const {  dialogflow } = require('actions-on-google'); 

// So you have access to the request response stuff >> functions.https.onRequest(app)
const functions = require('firebase-functions');

// Create an instance of dialogflow for your app
const app = dialogflow({debug: true});

// Build an intent to be fulfilled by firebase, 
// the name is the name of the intent that dialogflow passes over
app.intent('Default Welcome Intent', (conv) => {
  // Any extra logic goes here for the intent, before returning a response for firebase to deal with
  return conv.ask(`Welcome to a firebase fulfillment`);
});
// Finally we export as dialogflowFirebaseFulfillment so the inline editor knows to use it
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);


  "name": "functions",
  "description": "Cloud Functions for Firebase",
  "scripts": {
    "lint": "eslint .",
    "serve": "firebase serve --only functions",
    "shell": "firebase functions:shell",
    "start": "npm run shell",
    "deploy": "firebase deploy --only functions",
    "logs": "firebase functions:log"
  "engines": {
    "node": "10"
  "dependencies": {
    "actions-on-google": "^2.12.0",
    "firebase-admin": "~7.0.0",
    "firebase-functions": "^3.3.0"
  "devDependencies": {
    "eslint": "^5.12.0",
    "eslint-plugin-promise": "^4.0.1",
    "firebase-functions-test": "^0.1.6"
  "private": true

Now head back to your intents, go to Default Welcome Intent, and scroll down to Fulfillment. Make sure “Enable webhook call for this intent” is checked for any intents you wish to fulfill with JavaScript. Hit Save.


Setting Up AoG

We are getting close to the finish line now. Head over to the Integrations tab, and click Integration Settings in the Google Assistant option at the top. This will open a modal, so let’s click Test, which will get your Dialogflow integrated with Google and open up a test window on Actions on Google.

In the test window, we can click Talk to my test app (we will change this in a second), and voilà, we have the message from our JavaScript showing on a Google Assistant test.

We can change the name of the assistant in the Develop tab, up at the top.

So What’s Happening In The Fulfillment JS?

First off, we are using two npm packages, actions-on-google which provides all the fulfillment that both AoG and Dialogflow need, and secondly firebase-functions, which you guessed it, contains helpers for firebase.

We then create ‘app’, an object that contains all of our intents.

Each intent that is created is passed ‘conv’, which is the conversation object Actions On Google sends. We can use the content of conv to detect information about previous interactions with the user (such as their ID and information about their session with us).

We return ‘conv.ask’, which contains our return message to the user, ready for them to respond with another intent. We could use ‘conv.close’ instead if we wanted to end the conversation there.

Finally, we wrap everything up in a firebase HTTPS function, that deals with the server-side request-response logic for us.
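To make that flow concrete, here is a hypothetical, dependency-free sketch of the dispatch pattern the actions-on-google library implements for us: the app keeps a map of intent names to handler functions, and an incoming request is routed to the handler Dialogflow matched. Note that `createApp`, `handleRequest`, and the request shape here are illustrative only, not the library's real API.

```javascript
// Hypothetical sketch only: not the real actions-on-google API.
// `app` keeps a map of intent names to handler functions; an incoming
// request is routed to the handler whose name Dialogflow matched.
function createApp() {
  const handlers = {};
  return {
    intent(name, handler) {
      handlers[name] = handler;
    },
    // Stand-in for the HTTPS request/response plumbing firebase-functions provides
    handleRequest(request) {
      const conv = {
        ask: (text) => ({ expectUserResponse: true, text }),
        close: (text) => ({ expectUserResponse: false, text })
      };
      return handlers[request.intent](conv);
    }
  };
}

const app = createApp();
app.intent("Default Welcome Intent", (conv) =>
  conv.ask("Welcome to a firebase fulfillment")
);
```

Calling `app.handleRequest({ intent: "Default Welcome Intent" })` returns the ask payload; the real library performs the same lookup, but builds the full webhook JSON and handles the HTTP layer for you.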

Again, if we look at the response that is generated:


{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Welcome to a firebase fulfillment"
            }
          }
        ]
      }
    }
  }
}

We can see that conv.ask has had its text injected into the textToSpeech area. If we had chosen conv.close the expectUserResponse would be set to false and the conversation would close after the message had been delivered.
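As a rough illustration (this is not the library's source), the difference between ask and close can be modeled as a single flag on the payload above; `buildPayload` is a hypothetical helper:

```javascript
// Hypothetical helper mirroring the webhook JSON shown above.
// `close: true` models conv.close, which flips expectUserResponse to false.
function buildPayload(text, { close = false } = {}) {
  return {
    payload: {
      google: {
        expectUserResponse: !close,
        richResponse: {
          items: [{ simpleResponse: { textToSpeech: text } }]
        }
      }
    }
  };
}
```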

Third-Party Voice Builders

Much like the app industry, as voice gains traction, third-party tools have started popping up in an attempt to alleviate the load on developers, allowing them to build once, deploy twice.

Jovo and Voiceflow are currently the two most popular, especially since PullString’s acquisition by Apple. Each platform offers a different level of abstraction, so it really just depends on how simplified you’d like your interface to be.

Extending Your Skill

Now that you have gotten your head around building a basic ‘Hello World’ skill, there are bells and whistles aplenty that can be added to your skill. These are the cherry on top of the cake of Voice Assistants and will give your users a lot of extra value, leading to repeat custom, and potential commercial opportunity.


SSML

SSML stands for Speech Synthesis Markup Language and operates with a similar syntax to HTML, the key difference being that you are building up a spoken response, not content on a webpage.

‘SSML’ as a term is a little misleading: it can do so much more than speech synthesis! You can have voices running in parallel, and you can include ambient noise, speechcons (worth a listen in their own right; think emojis for famous phrases), and music.

When Should I Use SSML?

SSML is great; it makes for a much more engaging experience for the user, but what it also does is reduce the flexibility of the audio output. I recommend using it for the more static areas of speech. You can use variables in it for names and so on, but unless you intend on building an SSML generator, most SSML is going to be pretty static.

Start with simple speech in your skill, and once it is complete, enhance areas which are more static with SSML, but get your core right before moving on to the bells and whistles. That being said, a recent report says 71% of users prefer a human (real) voice over a synthesized one, so if you have the facility to do so, go out and do it!
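For example, a mostly-static greeting could be wrapped in SSML from your fulfillment code. The `<speak>` root and `<break>` tag are standard SSML; the helper function and the phrasing below are purely illustrative:

```javascript
// Illustrative: produce an SSML string for a mostly-static response,
// with a variable slotted in for the user's name.
function welcomeSSML(name) {
  return [
    "<speak>",
    `Welcome back, ${name}.`,
    '<break time="500ms"/>',
    "What would you like to do today?",
    "</speak>"
  ].join("");
}
```

The resulting string would be passed to `conv.ask` in place of plain text.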


In-Skill Purchases

In-skill purchases (or ISP) are similar to the concept of in-app purchases. Skills tend to be free, but some allow for the purchase of ‘premium’ content/subscriptions within the app. These can enhance the experience for a user, unlock new levels in games, or allow access to paywalled content.


Multimodal Responses

Multimodal responses cover so much more than voice; this is where voice assistants can really shine with complementary visuals on devices that support them. The definition of multimodal experiences is much broader and essentially means multiple inputs (keyboard, mouse, touchscreen, voice, and so on).

Multimodal skills are intended to complement the core voice experience, providing extra complementary information to boost the UX. When building a multimodal experience, remember that voice is the primary carrier of information. Many devices don’t have a screen, so your skill still needs to work without one; make sure to test with multiple device types, either for real or in the simulator.



Multilingual

Multilingual skills are skills that work in multiple languages and open up your skills to multiple markets.

The complexity of making your skill multilingual is down to how dynamic your responses are. Skills with relatively static responses, e.g. returning the same phrase every time, or only using a small bucket of phrases, are much easier to make multilingual than sprawling dynamic skills.

The trick with multilingual is to have a trustworthy translation partner, whether that is an agency or a translator on Fiverr. You need to be able to trust the translations provided, especially if you don’t understand the language being translated into. Google Translate will not cut the mustard here!


Conclusion

If there was ever a time to get into the voice industry, it would be now. The field is in both its prime and its infancy, and the big nine are plowing billions into growing it and bringing voice assistants into everybody’s homes and daily routines.

Choosing which platform to use can be tricky, but based on what you intend to build, the platform to use should shine through or, failing that, utilize a third-party tool to hedge your bets and build on multiple platforms, especially if your skill is less complicated with fewer moving parts.

I, for one, am excited about the future of voice as it becomes ubiquitous; screen reliance will reduce and customers will be able to interact naturally with their assistant. But first, it’s up to us to build the skills that people will want from their assistant.



If we are going to style a checkbox for the future, it needs to be not only aesthetically pleasing but navigable with keyboard for screen readers. This is almost 2020, yo, let’s style like it is.

There are many cool demos out there of fancy checkboxes. These all include some version of hiding the OG checkbox and styling the label and/or pseudo elements. However, many of these cool checkboxes do not take accessibility into account. 

There is a marvelous man who created the https://100dayscss.com challenge and inspired me with day No. 66. I not only wanted to teach how this could be done, but how it could be done accessibly.

gif of https://100dayscss.com no. 66 challenge, a custom checkbox

Building the Custom Checkbox

gif of final animation

Disclaimer: There are many ways to accomplish this effect; I will be walking through just one, the way I thought up! Feel free to play around and come up with a custom way yourself. After all, that is the spirit of the 100 Day CSS Challenge!


Click to see the code in the steps below.

First, start the Codepen, and then proceed with the following:

  1. Build out the elements
    1. Hiding the checkbox
    2. Build checkbox with label
    3. Animate the checkbox on check/uncheck
  2. Build/position expression lines with spans.
  3. Animate the expression lines
  4. Test accessibility with ChromeVox
  5. Make things even fancier

Step 1: The Checkbox & Label

$cb: cubic-bezier(0.17, 0.67, 0.83, 0.67);
$checkbox-size: 1.5em;

input[type="checkbox"] {
  @extend .center;
  opacity: 0;
  pointer-events: none;

  & + label {
    @extend .center;
    width: $checkbox-size;
    height: $checkbox-size;
    border-radius: 50%;
    border: solid 2px white;
    cursor: pointer;
    box-shadow: 1px 1px 1px 1px rgba(3, 79, 32, 0.5);
    transition: box-shadow 0.4s $cb, background 0.4s $cb;
  }

  &:checked + label {
    background-color: purple;
    box-shadow: 1px 1px 1px 1px rgba(3, 79, 32, 0.5), 0 0 0 5px white inset;
  }
}

Here we are selecting and styling our checkbox and label for the normal and :checked states. We want the OG checkbox to disappear. We could use something like display: none; for this, however, that isn’t helpful for screen readers. So another way of accomplishing this is to give the checkbox opacity: 0; and pointer-events: none;.

Usually, this alternative disappearing treatment would be accompanied by visibility: hidden;. However, setting the visibility to hidden here actually disables toggling of the label with the keyboard (which for accessibility is no bueno). So we are going to leave the visibility line off.

We then are styling the label to look like this fancy version of a fill-in-bubble or checkbox:

screenshot of unchecked checkbox

Its selected state, on &:checked + label, looks like this:

screenshot of checked checkbox

After this, we move onto the next step: Build/position expression lines with spans.

Step 2: Expression Lines

Now we are going to add 12 spans with the class of expression-lines. We can (and do) accomplish this with four spans and pseudo elements in the final Codepen, so head there if you’d like to jump straight into the deep end. For now we are starting off simple!

Here we have a few variables ($distance, $height, $offset) that we will use to create/position the expression lines, as well as animate them in the next step.

$distance: 1.125em; // 18px
$height: 1.25em; // 20px
$offset: 1.25em; // 20px

.expression-lines {
  height: $height;
  width: 1px;
  background-color: white;

  bottom: $distance;
  position: absolute;

  transform-origin: center $height + $distance;

  $lines: 12;
  @for $i from 1 through $lines {
    &:nth-of-type(#{$i}) {
      transform: rotate(360deg / $lines * $i);
    }
  }
}

We are giving each expression line a height of 20px in ems for scalability; so go ahead, give the body a 200% font-size and watch this puppy scale! They are also being given a width of 1px, a white color and some positioning with bottom, position, and transform-origin.

The wonderful loop you see is simply spreading them out evenly around the checkbox. So we get this effect:

screenshot of checked checkbox with 12 expression lines arrayed around it
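In plain JavaScript terms (purely for illustration), the Sass loop's output is just twelve evenly spaced rotation angles:

```javascript
// The Sass @for loop above evaluates to one rotation per line:
// 360deg / 12 * i, for i from 1 through 12.
const lines = 12;
const angles = [];
for (let i = 1; i <= lines; i++) {
  angles.push((360 / lines) * i);
}
// angles now holds 30, 60, 90, ... 360
```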

Onward to the next step: Animate the expression lines.

Step 3: Animating the Expression Lines

Now to animate the expression lines on checkbox:checked! We need the expression lines to start off invisible, so we are going to add opacity: 0; to the styles above. We shall then animate not only the opacity, but also the position of the expression lines to get the spring effect.

$offset: 1.25em; // 20px

input[type="checkbox"]:checked ~ .expression-lines {
  animation: spring 0.6s ease forwards;
}

@keyframes spring {
  0% {
    opacity: 1;
    height: 0px;
    transform-origin: center $distance;
  }
  100% {
    height: $height;
    bottom: $distance + $offset;
    transform-origin: center $height + $distance + $offset;
  }
}

The spring keyframe is used for our expression lines, to grow them and move them at the same time. The transform-origin is very important; we need it to change as we animate the expression lines away from the checkbox, so that they evenly “grow” out.

gif of animating checkbox

See the Pen
Creating a checkbox like it’s 2020
by Alyssa Nicoll (@alyssamichelle)
on CodePen.

Step 4: Testing Accessibility with ChromeVox

As I mentioned when we were building out our checkbox, display: none; is known to be pretty unfriendly to screen readers, as is visibility: hidden;. So let’s check how our checkbox is performing without using either of those two methods.

In this video I’m using ChromeVox as my screen reader. When I press tab, the checkbox is automatically selected. I can then toggle between checked and unchecked using the space bar!

To show you what happens (prepare for the jank) when we use one of the above lines of code, I added display: none; to the checkbox and tried again!

Now, when I try tabbing over to the checkbox, the checkbox label that we have styled disappears completely. I continue by selecting the invisible checkbox with my mouse (not very accessible on so many counts) and then try toggling between checked and unchecked using the space bar. As you can see, nothing happens. So, simply put, TLDR: don’t use display: none; or visibility: hidden; when trying to animate/create fancy lovely-ness with your CSS.

Step 5: Getting Even Fancier with our CSS

Originally when I did this challenge, I used pseudo elements and only four divs to create all 12 expression lines:

Using Pseudo Elements for Label and Expression Lines (Less Markup)

This is just the way my brain first thought of doing this challenge. It is less markup but more complicated CSS, so I decided to simplify for teaching in this post. However, I want to show it off here, in case you are a CSS super nerd and interested in pulling off such a feat.

No Inner Ring of Death

Check it out here. This example not only includes the use of pseudo element fun, but it also removes the small inner ring of residue that was caused by me animating the inner box shadow to make the circle grow.


very faint inner ring on our checkbox circle


no inner ring on our checkbox circle

This example accomplishes this by using a pseudo element instead, building out a circle and growing/shrinking it on check.

See the Pen
Custom Accessible Checkbox — NO INNER RING OF DEATH
by Alyssa Nicoll (@alyssamichelle)
on CodePen.

I hope you’ve learned something from this checkbox fun! Remember to always keep accessibility in the back of your mind while creating. We can always do better. 🙂

Happy Coding!


Great content marketing not only drives new visitors to your site, but also entices previous visitors to come back. A content strategy is key, so take advantage of these tools that can help.

1. Identify areas with high potential

Before you sit down to create any new website content, understand the interests and content consumption behavior of the users you’re aiming to reach. Find areas that offer the most potential. For instance, niche content ideas—those that have large amounts of interest or queries but few matching sites—are often worth tapping into because they offer a way to make your content stand out. 

Google Trends shows what the world is searching for. It analyzes the popularity of top search queries in Google Search across regions and languages, and presents graphs to compare a query’s search volume over time. Keep up with trending searches, or enter any search term to discover related topics and queries. These techniques can give you a feel for what people are interested in so you can tailor your content to suit.

Google Question Hub collects unanswered questions directly from users to identify content gaps online. Use these insights to create better content for your audience.

Market Finder uncovers new opportunities to take your business global. It analyzes monthly searches, popular market trends, disposable income per capita and number of internet users by country to help you prioritize where your products, services and content will appeal most. Pinpoint a shortlist of promising new target markets for your brand and extend your site’s content to international users. 

News Consumer Insights provides analysis about which traffic sources drive the most qualified users and how to better engage your visitors. For instance, when NCI shows that the number of visits per user is high but pages per visit is low, it’s a signal that your internal recirculation could be better. Put links to your best performing content at the bottom of articles and invite readers to click.  

Don’t forget: All content must adhere to Webmaster Quality Guidelines.


Learn the basics of GLSL while creating a distorted mask effect on images using Babylon.js.



Nowadays, it’s really hard to navigate the web and not run into some wonderful website that has some stunning effects that seem like black magic.

Well, many times that “black magic” is in fact WebGL, sometimes mixed with a bit of GLSL. You can find some really nice examples in this Awwwards roundup, but there are many more out there.

Recently, I stumbled upon the Waka Waka website, one of the latest works of Ben Mingo and Aristide Benoist, and the first thing I noticed was the hover effect on the images.

It was obvious that it’s WebGL, but my question was: “How did Aristide do that?”

Since I love to deconstruct WebGL stuff, I tried to replicate it, and in the end I pulled it off.

In this tutorial I’ll explain how to create an effect really similar to the one in the Waka Waka website using Microsoft’s BabylonJS library and some GLSL.

This is what we’ll do.

The setup

The first thing we have to do is create our scene; it will be very basic and will contain only a plane to which we’ll apply a custom ShaderMaterial.

I won’t cover how to setup a scene in BabylonJS, for that you can check its comprehensive documentation.

Here’s the code that you can copy and paste:

import { Engine } from "@babylonjs/core/Engines/engine";
import { Scene } from "@babylonjs/core/scene";
import { Vector3 } from "@babylonjs/core/Maths/math";
import { ArcRotateCamera } from "@babylonjs/core/Cameras/arcRotateCamera";
import { ShaderMaterial } from "@babylonjs/core/Materials/shaderMaterial";
import { Effect } from "@babylonjs/core/Materials/effect";
import { PlaneBuilder } from "@babylonjs/core/Meshes/Builders/planeBuilder";

class App {
  constructor() {
    this.canvas = null;
    this.engine = null;
    this.scene = null;
  }

  init() {
    this.setup();
    this.addListeners();
  }

  setup() {
    this.canvas = document.querySelector("#app");
    this.engine = new Engine(this.canvas, true, null, true);
    this.scene = new Scene(this.engine);

    // Adding the vertex and fragment shaders to Babylon's ShaderStore
    Effect.ShadersStore["customVertexShader"] = require("./shader/vertex.glsl");
    Effect.ShadersStore["customFragmentShader"] = require("./shader/fragment.glsl");

    // Creating the shader material using the `custom` shaders we added to the ShaderStore
    const planeMaterial = new ShaderMaterial("PlaneMaterial", this.scene, {
      vertex: "custom",
      fragment: "custom"
    }, {
      attributes: ["position", "normal", "uv"],
      uniforms: ["worldViewProjection"]
    });
    planeMaterial.backFaceCulling = false;

    // Creating a basic plane and adding the shader material to it
    const plane = PlaneBuilder.CreatePlane(
      "Plane",
      { width: 1, height: 9 / 16 },
      this.scene
    );
    plane.scaling = new Vector3(7, 7, 1);
    plane.material = planeMaterial;

    // Camera (name, alpha, beta, radius, target, scene)
    const camera = new ArcRotateCamera(
      "Camera",
      -Math.PI / 2,
      Math.PI / 2,
      10,
      Vector3.Zero(),
      this.scene
    );

    this.engine.runRenderLoop(() => this.scene.render());
  }

  addListeners() {
    window.addEventListener("resize", () => this.engine.resize());
  }
}

const app = new App();
app.init();

As you can see, it’s not that different from other WebGL libraries like Three.js: it sets up a scene, a camera, and it starts the render loop (otherwise you wouldn’t see anything).

The material of the plane is a ShaderMaterial for which we’ll have to create its respective shader files.

// /src/shader/vertex.glsl

precision highp float;

// Attributes
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;

// Uniforms
uniform mat4 worldViewProjection;

// Varyings
varying vec2 vUV;

void main(void) {
    gl_Position = worldViewProjection * vec4(position, 1.0);
    vUV = uv;
}

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec3 color = vec3(vUV.x, vUV.y, 0.0);
  gl_FragColor = vec4(color, 1.0);
}

You can forget about the vertex shader since for the purpose of this tutorial we’ll work only on the fragment shader.

Here you can see it live:

Good, we’ve already written 80% of the JavaScript code we need for the purpose of this tutorial.

The logic

GLSL is cool, it allows you to create stunning effects that would be impossible to do with HTML, CSS and JS alone. It’s a completely different world, and if you’ve always done “web” stuff you’ll get confused at the beginning, because when working with GLSL you have to think in a completely different way to achieve any effect.

The logic behind the effect we want to achieve is pretty simple: we have two overlapping images, and the image that overlaps the other one has a mask applied to it.

Simple, but it doesn’t work like SVG masks for instance.

Adjusting the fragment shader

Before going any further we need to tweak the fragment shader a little bit.

As for now, it looks like this:

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec3 color = vec3(vUV.x, vUV.y, 0.0);
  gl_FragColor = vec4(color, 1.0);
}

Here, we’re telling the shader to assign each pixel a color whose channels are determined by the value of the x coordinate for the Red channel and the y coordinate for the Green channel.

But we need to have the origin at the center of the plane, not the bottom-left corner. In order to do so we have to refactor the declaration of uv this way:

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec2 uv = vUV - 0.5;
  vec3 color = vec3(uv.x, uv.y, 0.0);
  gl_FragColor = vec4(color, 1.0);
}

This simple change will result into the following:

This is because we moved the origin from the bottom-left corner to the center of the plane, so uv‘s values now go from -0.5 to 0.5. Since you cannot assign negative values to RGB channels, the Red and Green channels fall back to 0.0 across the whole bottom-left area.
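Sketched in JavaScript (purely to illustrate the math; the real work happens per fragment on the GPU), the shader now does this:

```javascript
// Shift the origin to the center, then clamp negatives to zero the way
// RGB output does: any uv component below 0 contributes no color.
function fragmentColor(vUVx, vUVy) {
  const x = vUVx - 0.5;
  const y = vUVy - 0.5;
  return [Math.max(0, x), Math.max(0, y), 0];
}
```

A fragment in the bottom-left quadrant (vUV below 0.5 on both axes) produces pure black, which is exactly the dark corner visible in the demo.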

Creating the mask

First, let’s change the color of the plane to complete black:

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec2 uv = vUV - 0.5;
  vec3 color = vec3(0.0);
  gl_FragColor = vec4(color, 1.0);
}

Now let’s add a rectangle that we will use as the mask for the foreground image.

Add this code outside the main() function:

vec3 Rectangle(in vec2 size, in vec2 st, in vec2 p, in vec3 c) {
  float top = step(1. - (p.y + size.y), 1. - st.y);
  float right = step(1. - (p.x + size.x), 1. - st.x);
  float bottom = step(p.y, st.y);
  float left = step(p.x, st.x);
  return top * right * bottom * left * c;
}

(How to create shapes is beyond the scope of this tutorial. For that, I suggest you read this chapter of “The Book of Shaders”.)

The Rectangle() function does exactly what its name says: it creates a rectangle based on the parameters we pass to it.
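A line-for-line JavaScript port (for illustration only) makes the logic easier to poke at: each step() edge test returns 0 or 1, and multiplying the four results means a point is lit only when it passes all four edge tests at once.

```javascript
// GLSL's step(edge, x): 0.0 when x < edge, otherwise 1.0.
const step = (edge, x) => (x < edge ? 0 : 1);

// Port of the Rectangle() mask above, returning just the scalar mask value.
// size, st (the current point) and p (bottom-left corner) are [x, y] pairs.
function rectangleMask(size, st, p) {
  const top = step(1 - (p[1] + size[1]), 1 - st[1]);
  const right = step(1 - (p[0] + size[0]), 1 - st[0]);
  const bottom = step(p[1], st[1]);
  const left = step(p[0], st[0]);
  return top * right * bottom * left;
}
```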

Then, we redeclare the color using that Rectangle() function:

vec2 maskSize = vec2(0.3, 0.3);

// Note that we're subtracting HALF of the width and height to position the rectangle at the center of the scene
vec2 maskPosition = vec2(-0.15, -0.15);
vec3 maskColor =  vec3(1.0);

color = Rectangle(maskSize, uv, maskPosition, maskColor);

Awesome! We now have our black plane with a beautiful white rectangle at the center.

But, wait! That’s not supposed to be a rectangle; we set its size to be 0.3 on both the width and the height!

That’s because of the ratio of our plane, but it can be easily fixed in two simple steps.

First, add this snippet to the JS file:

this.scene.registerBeforeRender(() => {
  plane.material.setFloat("uPlaneRatio", plane.scaling.x / plane.scaling.y);
});

And then, edit the shader by adding this line at the top of the file:

uniform float uPlaneRatio;

…and this line too, right below the line that sets the uv variable

uv.x *= uPlaneRatio;

Short explanation

In the JS file, we’re sending the fragment shader a uPlaneRatio uniform (a float, one of the GLSL data types), whose value is the ratio between the plane’s width and height.

We made the fragment shader wait for that uniform by declaring it at the top of the file, then the shader uses it to adjust the uv.x value.

Here you can see the final result: a black plane with a white square at the center; nothing too fancy (yet), but it works!

Adding the foreground image

Displaying an image in GLSL is pretty simple. First, edit the JS code and add the following lines:

// Import the `Texture` module from BabylonJS at the top of the file
import { Texture } from '@babylonjs/core/Materials/Textures/texture'
// Add this After initializing both the plane mesh and its material
const frontTexture = new Texture('src/images/lantern.jpg')
plane.material.setTexture("u_frontTexture", frontTexture)

This way, we’re passing the foreground image to the fragment shader as a Texture element.

Now, add the following lines to the fragment shader:

// Put this at the beginning of the file, outside of the `main()` function
uniform sampler2D u_frontTexture;
// Put this at the bottom of the `main()` function, right above `gl_FragColor = ...`
vec3 frontImage = texture2D(u_frontTexture, uv * 0.5 + 0.5).rgb;

A bit of explaining:

We told BabylonJS to pass the texture to the shader as a sampler2D with the setTexture() method, and then, we made the shader know that we will pass that sampler2D whose name is u_frontTexture.

Finally, we created a new variable of type vec3 named frontImage that contains the RGB values of our texture.

By default, a texture2D is a vec4 variable (it contains the r, g, b and a values), but we don’t need the alpha channel so we declare frontImage as a vec3 variable and explicitly get only the .rgb channels.

Please also note that we’ve modified the UVs of the texture by first multiplying it by 0.5 and then adding 0.5 to it. This is because at the beginning of the main() function I’ve remapped the coordinate system to -0.5 -> 0.5, and also because of the fact that we had to adjust the value of uv.x.
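The two remaps, sketched in JavaScript (illustration only): the fragment shader centers vUV, and the texture lookup squeezes the centered value back toward sampling range.

```javascript
// vUV arrives in 0..1; the shader centers it to -0.5..0.5.
const center = (v) => v - 0.5;

// The texture lookup maps a centered value back toward 0..1.
const toTextureUV = (v) => v * 0.5 + 0.5;
```

Note that the plane's center (vUV = 0.5) samples the middle of the texture, and on the x axis the earlier `uv.x *= uPlaneRatio;` multiplication widens the centered range again before sampling.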

If you now add this to the GLSL code…

color = frontImage;

…you will see our image, rendered by a GLSL shader:


Always keep in mind that, for shaders, everything is a number (yes, even images), and that 0.0 means completely hidden while 1.0 stands for fully visible.

We can now use the mask we’ve just created to hide the parts of our image where the value of the mask equals 0.0.

With that in mind, it’s pretty easy to apply our mask. The only thing we have to do is multiply the color variable by the value of the mask:

// The mask should be a separate variable, not set as the `color` value
vec3 mask = Rectangle(maskSize, uv, maskPosition, maskColor);

// Some super magic trick
color = frontImage * mask;

Et voilà, we now have a fully functioning mask effect:

Let’s enhance it a bit by making the mask follow a circular path.

In order to do that we must go back to our JS file and add a couple of lines of code.

// Add this to the class constructor
this.time = 0
// This goes inside the `registerBeforeRender` callback
this.time++;
plane.material.setFloat("u_time", this.time);

In the fragment shader, first declare the new uniform at the top of the file:

uniform float u_time;

Then, edit the declaration of maskPosition like this:

vec2 maskPosition = vec2(
  cos(u_time * 0.05) * 0.2 - 0.15,
  sin(u_time * 0.05) * 0.2 - 0.15
);

u_time is simply one of the uniforms that we pass to our shader from the WebGL program.

The only difference from the u_frontTexture uniform is that we increase its value on each render loop and pass the new value to the shader, so that it updates the mask’s position.
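The same math in JavaScript (for illustration): the mask center is a point on a circle of radius 0.2 around (-0.15, -0.15), and u_time drives the angle.

```javascript
// Parametric circle: as t grows each frame, the point orbits the center.
function maskPosition(t) {
  return [
    Math.cos(t * 0.05) * 0.2 - 0.15,
    Math.sin(t * 0.05) * 0.2 - 0.15
  ];
}
```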

Here’s a live preview of the mask going in a circle:

Adding the background image

In order to add the background image we’ll do the exact opposite of what we did for the foreground image.

Let’s go one step at a time.

First, in the JS class, pass the shader the background image in the same way we did for the foreground image:

const backTexture = new Texture("src/images/lantern-bw.jpg");
plane.material.setTexture("u_backTexture", backTexture);

Then, tell the fragment shader that we’re passing it that u_backTexture and initialize another vec3 variable:

// This goes at the top of the file
uniform sampler2D u_backTexture;

// Add this after `vec3 frontImage = ...`
vec3 backImage = texture2D(u_backTexture, uv * 0.5 + 0.5).rgb;

When you do a quick test by replacing

color = frontImage * mask;

with

color = backImage * mask;

you’ll see the background image.

But for this one, we have to invert the mask to make it behave the opposite way.

Inverting a number is really easy, the formula is:

invertedNumber = 1 - number

So, let’s apply the inverted mask to the background image:

backImage *= (1.0 - mask);

Here, we’re applying the same mask we added to the foreground image, but since we inverted it, the effect is the opposite.
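Reduced to a single channel in JavaScript (illustration only), the two masked images form a plain crossfade, which is the same thing GLSL's built-in mix(back, front, mask) computes:

```javascript
// Where mask is 1 the front pixel wins; where it is 0 the back pixel wins.
function composite(front, back, mask) {
  return front * mask + back * (1 - mask);
}
```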

Putting it all together

At this point, we can refactor the declaration of the two images by directly applying their masks.

vec3 frontImage = texture2D(u_frontTexture, uv * 0.5 + 0.5).rgb * mask;
vec3 backImage = texture2D(u_backTexture, uv * 0.5 + 0.5).rgb * (1.0 - mask);

We can now display both images by adding backImage to frontImage:

color = backImage + frontImage;

That’s it, here’s a live example of the desired effect:

Distorting the mask

Cool, huh? But it’s not over yet! Let’s tweak it a bit by distorting the mask.

To do so, we first have to create a new vec2 variable:

vec2 maskUV = vec2(
  uv.x + sin(u_time * 0.03) * sin(uv.y * 5.0) * 0.15,
  uv.y + cos(u_time * 0.03) * cos(uv.x * 10.0) * 0.15
);

Then, replace uv with maskUV in the mask declaration

vec3 mask = Rectangle(maskSize, maskUV, maskPosition, maskColor);

In maskUV, we’re using some math to offset the uv values based on the u_time uniform and the current uv.

Try tweaking those values by yourself to see different effects.

Distorting the foreground image

Let’s now distort the foreground image the same way we did for the mask, but with slightly different values.

Create a new vec2 variable to store the foreground image uvs:

vec2 frontImageUV = vec2(
  (uv.x + sin(u_time * 0.04) * sin(uv.y * 10.) * 0.03),
  (uv.y + sin(u_time * 0.03) * cos(uv.x * 15.) * 0.05)
);

Then, use that frontImageUV instead of the default uv when declaring frontImage:

vec3 frontImage = texture2D(u_frontTexture, frontImageUV * 0.5 + 0.5).rgb * mask;

Voilà! Now both the mask and the image have a distortion effect applied.

Again, try tweaking those numbers to see how the effect changes.

Adding mouse control

What we’ve made so far is really cool, but we could make it even cooler by adding some mouse control like making it fade in/out when the mouse hovers/leaves the plane and making the mask follow the cursor.

Adding fade effects

In order to detect the mouseover/mouseleave events on a mesh and execute some code when those events occur we have to use BabylonJS’s actions.

Let’s start by importing some new modules:

import { ActionManager } from "@babylonjs/core/Actions/actionManager";
import { ExecuteCodeAction } from "@babylonjs/core/Actions/directActions";
import "@babylonjs/core/Culling/ray";

Then add this code after the creation of the plane:

this.plane.actionManager = new ActionManager(this.scene);

this.plane.actionManager.registerAction(
  new ExecuteCodeAction(ActionManager.OnPointerOverTrigger, () =>
    this.onPlaneHover()
  )
);

this.plane.actionManager.registerAction(
  new ExecuteCodeAction(ActionManager.OnPointerOutTrigger, () =>
    this.onPlaneLeave()
  )
);

Here we’re telling the plane’s ActionManager to listen for the PointerOver and PointerOut events and execute the onPlaneHover() and onPlaneLeave() methods, which we’ll add right now:

onPlaneHover() {
  console.log("hover");
}

onPlaneLeave() {
  console.log("leave");
}

Some notes about the code above

Please note that I’ve used this.plane instead of just plane; that’s because we’ll have to access it from within the mousemove event’s callback later, so I’ve refactored the code a bit.

ActionManager allows us to listen to certain events on a target, in this case the plane.

ExecuteCodeAction is a BabylonJS action that we’ll use to execute some arbitrary code.

ActionManager.OnPointerOverTrigger and ActionManager.OnPointerOutTrigger are the two events that we’re listening to on the plane. They behave exactly like the mouseenter and mouseleave events for DOM elements.

To detect hover events in WebGL, we need to “cast a ray” from the position of the mouse to the mesh we’re checking; if that ray, at some point, intersects with the mesh, it means that the mouse is hovering it. This is why we’re importing the @babylonjs/core/Culling/ray module; BabylonJS will take care of the rest.

Now, if you test it by hovering and leaving the mesh, you’ll see that it logs hover and leave.

Now, let’s add the fade effect. For this, I’ll use the GSAP library, the de facto standard for complex, high-performance animations.

First, install it:

yarn add gsap

Then, import it in our class

import gsap from "gsap";

and add this line to the constructor

this.maskVisibility = { value: 0 };

Finally, add this line to the registerBeforeRender()‘s callback function

this.plane.material.setFloat("u_maskVisibility", this.maskVisibility.value);

This way, we’re sending the shader the current value property of this.maskVisibility as a new uniform called u_maskVisibility.

Refactor the fragment shader this way:

// Add this at the top of the file, like any other uniforms
uniform float u_maskVisibility;

// When declaring `maskColor`, replace `1.0` with the `u_maskVisibility` uniform
vec3 maskColor = vec3(u_maskVisibility);

If you now check the result, you’ll see that the foreground image is not visible anymore; what happened?

Do you remember when I wrote that “for shaders, everything is a number”? That’s the reason! The u_maskVisibility uniform equals 0.0, which means that the mask is invisible.
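To see why a value of 0.0 hides the mask, here is the same math in plain JavaScript. This is a hypothetical sketch that assumes the shader multiplies the foreground color by maskColor channel by channel; maskColor and applyMask are illustrative names, not part of the actual shader.

```javascript
// Illustrative sketch of per-channel mask math (all values in [0, 1]).
const maskColor = v => [v, v, v];                        // like vec3(u_maskVisibility)
const applyMask = (rgb, mask) => rgb.map((c, i) => c * mask[i]);

applyMask([0.8, 0.6, 0.4], maskColor(0)); // every channel becomes 0: invisible
applyMask([0.8, 0.6, 0.4], maskColor(1)); // channels unchanged: fully visible
```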

We can fix it in a few lines of code. Open the JS code and refactor the onPlaneHover() and onPlaneLeave() methods this way:

onPlaneHover() {
  gsap.to(this.maskVisibility, {
    duration: 0.5,
    value: 1
  });
}

onPlaneLeave() {
  gsap.to(this.maskVisibility, {
    duration: 0.5,
    value: 0
  });
}

Now, when you hover or leave the plane, you’ll see that the mask fades in and out!

(And yes, BabylonJS has its own animation engine, but I’m way more comfortable with GSAP, which is why I opted for it.)

Make the mask follow the mouse cursor

First, add this line to the constructor

this.maskPosition = { x: 0, y: 0 };

and this to the addListeners() method:

window.addEventListener("mousemove", () => {
  const pickResult = this.scene.pick(this.scene.pointerX, this.scene.pointerY);

  if (pickResult.hit) {
    const x = pickResult.pickedPoint.x / this.plane.scaling.x;
    const y = pickResult.pickedPoint.y / this.plane.scaling.y;

    this.maskPosition = { x, y };
  }
});

What the code above does is pretty simple: on every mousemove event it casts a ray with this.scene.pick() and updates the values of this.maskPosition if the ray is intersecting something.

(Since we have only a single mesh we can avoid checking what mesh is being hit by the ray.)

Again, on every render loop, we send the mask position to the shader, but this time as a vec2. First, import the Vector2 module together with Vector3

import { Vector2, Vector3 } from "@babylonjs/core/Maths/math";

Add this in the runRenderLoop callback function

this.plane.material.setVector2(
  "u_maskPosition",
  new Vector2(this.maskPosition.x, this.maskPosition.y)
);

Add the u_maskPosition uniform at the top of the fragment shader

uniform vec2 u_maskPosition;

Finally, refactor the maskPosition variable this way

vec2 maskPosition = vec2(
  u_maskPosition.x * uPlaneRatio - 0.15,
  u_maskPosition.y - 0.15
);
Side note: I’ve adjusted the x value using uPlaneRatio because, at the beginning of the main() function, I did the same with the shader’s UVs.

And here you can see the result of your hard work:


As you can see, doing this kind of thing doesn’t involve much code (~150 lines of JavaScript and ~50 lines of GLSL, including comments and empty lines). The hard part of WebGL is that it’s complex by nature and a very vast subject; so vast that many times I don’t even know what to search for on Google when I get stuck.

Also, you have to study a lot, way more than with “standard” website development. But in the end, it’s really fun to work with.

In this tutorial, I tried to explain the whole process (and the reasoning behind everything) step by step, just the way I would want someone to explain it to me; if you’ve reached this point of the tutorial, it means I’ve reached my goal.

In any case, thanks!


The lantern image is by Vladimir Fetodov


While there are plenty of fantastic color palette generators available on the web, in this post, we thought we’d share our favorite color tools specific to UI design.

It’s important to keep in mind that choosing colors for user interfaces calls for a different set of requirements than for example, a graphic design project. Not only does UI design require a comprehensive set of colors with a range of variations and shades — but designers also need to think about how color will add to the user experience of a digital product. This means carefully considering color semantics and ensuring designs are accessible, all while remaining on-brand.

It’s no surprise that color is one of the most essential foundations for a digital product’s design language, so it’s crucial that you choose your color palette with intention. Below are a handful of UI color picking tools we recommend that will help ensure the effectiveness of your designs and of course, keep them looking nice and polished!

1. Accessible Color Matrix

When it comes to product design, we should all be keeping accessibility in mind. Ensuring your UI’s color contrasts are in line with the Web Content Accessibility Guidelines (WCAG) is one of the ways you can do this. Accessible Color Matrix makes it super easy to test out potential color schemes for your user interfaces. What makes the tool especially unique is the ability to test a range of colors rather than just two at a time. Check it out:
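Under the hood, checkers like this implement the WCAG 2.x contrast-ratio formula. Here’s a minimal JavaScript sketch of that computation (the function names are mine, not part of any of the tools mentioned):

```javascript
// Relative luminance per WCAG 2.x, from an [r, g, b] color in 0-255.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map(c => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio between two colors; WCAG AA asks for at least 4.5:1
// for normal text. Black on white yields the maximum, 21:1.
function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```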


2. Eva Colors

This handy AI tool generates a semantic color palette based on your brand’s primary color — each color is assigned to a purpose you should apply it to: success, info, warning, and danger. On top of that, Eva Colors produces the various shades for each color generated and has a really easy export feature. You can even toggle to view the colors in both light and dark mode! Simple, effective, and intuitive.


3. Palx

Palx is an automatic UI color palette generator. Input your base color, and the system instantly provides you with a full-spectrum color palette based on the primary hex code entered. The colors generated work harmoniously together and you can also easily export all of them by scrolling to the bottom of the page.


4. Copy Palette

Created by Dribbbler Dimitris Raptis, Copy Palette enables you to create consistent monochromatic color combinations and export them directly into your favorite design software. According to Dimitris, the idea came to him after repeatedly struggling to generate the consistent monochromatic color palettes he envisioned for his interface designs. We love that Copy Palette also lets you adjust parameters like the contrast ratio of the shades and the number of color variations.


5. Cloud Flare

Cloud Flare is a custom tool that not only helps you build out color sets, but also lets you preview palettes against a plethora of UI elements like headers, icons, buttons, and much more. The best part is that you can check each palette’s accessibility contrast scores and edit the colors as needed. It’s an insanely helpful two-in-one color palette and visualization tool that helps you work more seamlessly with color. Check out the extensive instructions so you can make use of all of its awesome features!


6. Palettte

Use Palettte to create and sample color schemes in which colors seamlessly blend into each other. You have full editing capabilities in terms of fine-tuning hue and saturation, and adding more color swatches as needed. Simply click on the plus icon at the top left corner of the tool to get started and when you’ve got a palette finalized, hit the export button at the top right! If you already have a color palette on hand, you can easily import and edit it further to get your desired values.


7. Open Color

If you prefer to simply pull a pre-made UI color palette that’s guaranteed to work well, check out Open Color. The website essentially provides an open-source color scheme that is optimized for user interfaces. If you don’t have a brand color set in stone, this is a sure-fire way to ensure your UI color palette is both effective and attractive. And, if you’re new to using color in UI design, check out their Instruction tab which includes a helpful manual specifying the intended use of each color!


For more web design resources, check out our roundup of accessibility tools to evaluate your design’s contrast ratio, 7 best illustration resources for web design projects, and learn how to avoid the top 5 mistakes new web designers make when starting out.



Joey DiGangi

Animation provides a great medium through which an entrepreneur can easily showcase key concepts about his or her business. One of the most fascinating things, in my opinion, about animation is how the barriers to entry are slowly eroding. Think back to elementary and middle school art classes, where your teacher tried to explain the finer points by making it look and sound easy through examples, only to discover that it’s significantly harder to do when you actually put pencil to paper.

I’m starting to learn, however, that there’s more than one way to create animated content to support your startup — and some approaches are more accessible than you’d think!

Rather than getting into which tools and methods work best, it makes more sense to start by looking at the building blocks needed to create an animation in the first place.

As a novice, I’m not the best person to break down the mechanics of what constitutes good animation. Fortunately, I have been staying in touch with Amanda Kopacz, a former classmate of mine from Juniata College, and current student at the Rochester Institute of Technology, where she studies Film and Animation with a concentration in 3D Animation.

She was happy to offer advice to someone that’s just getting started, talk about some of the more technical aspects of creating animations, and explain how beginners can learn the basics.

Some of Amanda’s work created on Animation Desk!

As a student pursuing a terminal degree in Fine Arts, Amanda’s learned to use a myriad of Hollywood-grade tools and functions to complete her work.

“I’ll use programs like Photoshop, After Effects, and Autodesk Maya to create my animations,” she told me. “These tools have so many different functions that I feel like I’m always learning. It takes me anywhere from a few weeks to a few months to start to get the hang of them.”

But Amanda got her start by sticking to the fundamentals of animation and working her way up to the more-advanced tools she’s using in her classwork today.

“I started out using basic techniques, like the classic pencil and paper, or creating stop motion animations with a cell phone camera,” she explained.

Stop motion is the term for animation created by taking a series of pictures of a physical object as it makes tiny, incremental movements to form an action sequence. Think about any famous claymation video — that’s stop motion.

Image Taken from Claymation Film, “Nightmare Before Halloween”

When using pencil and paper, one of the challenges of creating each frame is remembering where the previous one left off. For this, Amanda suggests that beginners use a technique called onion skin, where you lay new paper over completed drawings to use as a reference point.

“The trick requires you to put the paper down and have the light shine through so you can see your work clearly.”

Advances in technology and software are not only creating more possibilities for professional animators, but they’re also making it easier for beginners to use advanced techniques early on.

“I worked for a STEM camp this summer called MY TECH LEARNING,” she said. “I taught animation, and used FlipaClip to help the students learn.”

FlipaClip is a basic animation tool available on iOS and Android that makes it easy to start animating as a student, thanks to its support for multiple-layer drawings and storyboarding and its straightforward interface.

“I also have enjoyed using Animation Desk,” she mentioned. “It is super-helpful for someone starting to learn the basics of animation. There are different brushes to play around with and it’s easy to duplicate frames. It even supports onion skin, which allows you to see the frames in front of and behind the one you’re working on.”

Promoting Your Startup

My personal interest in animation came when I wanted to explore its usefulness at helping me convey some of the core functions of my startup’s app.

We released the AssureTech Mobile App just over a year ago to help people with severe food allergies — like myself — travel safely in foreign countries. The app offers a number of different translation and emergency tools that can sometimes be difficult to explain in a succinct way.

One colleague suggested that I look at something called whiteboard animation. Even if you don’t know the term, you’re probably familiar with the finished product: a video with terms and workflows drawn out in a logical procession. They’re all over social media and are one of the easiest ways to bring complex or abstract ideas “down to earth.”

“The artist uses a whiteboard to do an animation,” said Amanda. “So you can start with an image, take a picture, and then add, subtract, or change it in some way; take another picture; and continue to do that to illustrate a point. Once you have all the pictures, you put them together to create the animation.”

I’m still getting the hang of this, but I think it’s a good — and frankly, quite fun — way to design marketing material for my startup!

This was my first-ever attempt at whiteboard animation. I used Animation Desk to complete the work.

For Aspiring Artists

Aspiring animators and creative entrepreneurs like Amanda — even those just starting out — can use their skill sets in a number of different ways to build a professional portfolio and create opportunities to earn additional income.

“I’ve enjoyed working as a freelance artist,” explains Amanda. When she’s not teaching animation or attending class, she’s creating opportunities for herself by offering her photography, video — and, of course — animation skills to businesses looking for support.

“I started working as a freelance animator for Kdan Mobile this past spring,” she said. “That’s when I was first introduced to Animation Desk.”

Amanda was hired to create promotional animations for the company’s 8th Annual iAniMagic Contest. The Contest is held for animators — from first-time learners to professionals — and offers cash prizes, professional recognition, and more to artists with the top submissions.

Whether you’ve got a startup that could benefit from quick-and-easy content, or you’re a creative looking to add to your portfolio, I encourage you to give animation a try!


9th Oct 2019

When you create a custom focus style, you want to think about four things:

  1. Adding an outline
  2. Creating animations that contain movement
  3. Changing the background color
  4. Changing the text color

I wrote more about this in my article on designing focus. During my research, I found three kinds of focus style I liked.

  1. The one on Smashing Magazine
  2. The one on WTF Forms
  3. The one on Slack
Focus styles on Smashing Mag, WTF Forms and Slack

Today, I want to show you how to create these focus styles and use them effortlessly across your website.

Creating the focus for Smashing Magazine

Smashing Magazine uses a large dotted outline for focus. To create this focus style, you set the outline property to 3px dotted.

Focus styles on Smashing Magazine.
*:focus {
  outline: 3px dotted #761b15;
}

See the Pen
Focus style Smashing Mag (default)
by Zell Liew (@zellwk)
on CodePen.

If you want to change the color of the outline, you can change the outline-color property.

.red-background *:focus {
  outline-color: white;
}

See the Pen
Focus style Smashing Mag (changing outline colors)
by Zell Liew (@zellwk)
on CodePen.

Alternatively, you can use CSS Variables.

:root {
  --outline-color: #761b15;
}

*:focus {
  outline: 3px dotted var(--outline-color);
}

.red-background {
  --outline-color: white;
}

See the Pen
Focus style Smashing Mag (with CSS Variables)
by Zell Liew (@zellwk)
on CodePen.

Creating focus styles for WTF Forms

The focus style for WTF forms contains two parts:

  1. A white border
  2. A blue border
Focus styles for WTF Forms.

This style can be created with box-shadow. The idea is you create two shadows:

  1. The first shadow with the background’s color
  2. The second shadow with the focus’s color
*:focus {
  outline: none;
  box-shadow: 0 0 0 .075rem #fff,
              0 0 0 .2rem #0069d4;
}

If you want to change the focus color, you need to rewrite the entire box-shadow.

.blue-background *:focus {
  outline: none;
  box-shadow: 0 0 0 .075rem #0069d4,
              0 0 0 .2rem #fff;
}

Note: WTF Forms only has styles for form elements, not for links and buttons. It doesn’t have styles for a darker background either. I created this demo according to what I thought looks okay.

See the Pen
WTF Forms focus style
by Zell Liew (@zellwk)
on CodePen.

There’s an easier way: if you use CSS Variables, you only need to switch the colors.

:root {
  --focus-inner-color: #fff;
  --focus-outer-color: #0069d4;
}

*:focus {
  outline: none;
  box-shadow: 0 0 0 .075rem var(--focus-inner-color),
              0 0 0 .2rem var(--focus-outer-color);
}

.blue-background {
  --focus-inner-color: #0069d4;
  --focus-outer-color: #fff;
}

See the Pen
WTF Forms focus style (with CSS Variables)
by Zell Liew (@zellwk)
on CodePen.

Creating focus styles for Slack

The focus style on Slack contains two parts:

  1. A dark blue outline
  2. A light-blue border
Focus styles on Slack.

This focus style can be created with the same technique as WTF Forms.

*:focus {
  outline: none;
  box-shadow: 0 0 0 2px hsla(210, 52%, 42%, 1.00),
              0 0 0 .6rem hsla(200, 72%, 83%, 0.75);
}

The CSS Variables trick works wonders if you need to change colors.

:root {
  --focus-inner-color: hsla(210, 52%, 42%, 1.00);
  --focus-outer-color: hsla(200, 72%, 83%, 0.75);
}

*:focus {
  outline: none;
  box-shadow: 0 0 0 2px var(--focus-inner-color),
              0 0 0 .6rem var(--focus-outer-color);
}

.dark {
  --focus-inner-color: hsla(0, 0%, 100%, 0.75);
  --focus-outer-color: hsla(0, 0%, 100%, 0.25);
}

See the Pen
Slack Forms focus style (with CSS Variables)
by Zell Liew (@zellwk)
on CodePen.

If you use this technique on elements with borders, you might want to remove the borders; two stacked borders aren’t pretty.

button:focus {
  border-color: transparent;
}

See the Pen
Slack Forms focus style (improved border)
by Zell Liew (@zellwk)
on CodePen.

Combined demo

I combined the different methods onto one demo for you to play with. Here it is:

See the Pen
Focus style
by Zell Liew (@zellwk)
on CodePen.

Thanks for reading. Did this article help you out? If it did, I hope you consider sharing it. You might help someone else out. Thanks so much!


A configurator for creating unique fullscreen image animations with WebGL distortion effects powered by Three.js.



In one of our previous tutorials we showed you how to create thumbnail to fullscreen WebGL distortion animations. Today we would like to invite you to build your own personalized effects by using the configurator we’ve created.

We’ll briefly go over some main concepts so you can make full use of the configurator. If you’d like to understand the main idea behind the work, and why the animations behave the way they do in more depth, we highly recommend you to read the main tutorial Creating Grid-to-Fullscreen Animations with Three.js.

Basics of the configurator

The configurator allows you to modify all the details of the effect, making it possible to create unique animations. Even though you don’t have to be a programmer to create your own effect, understanding the options available will give you more insight into what you can achieve with it.

To see your personalized effect in action, either click on the image or drag the Progress bar. The Duration option sets the time of the whole animation.

Under Easings you can control the “rate of change” of your animation. For example:

  • Power1.easeOut: Start really fast but end slowly
  • Power1.easeInOut: Start and end slowly, but go really fast in the middle of the animation
  • Bounce: Bounce around like a basketball

The simplest easings to play around with are Power0-4 with ease-out. If you would like to know the difference between each easing, check out this ease visualizer.
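Under the hood, these easings are just functions that map linear progress t in [0, 1] to eased progress. As a rough sketch, here are my own implementations of the quadratic “Power1” curves (not GSAP’s actual code):

```javascript
// Power1.easeOut: fast start, slow end (quadratic).
function power1EaseOut(t) {
  return 1 - (1 - t) * (1 - t);
}

// Power1.easeInOut: slow start and end, fastest in the middle.
function power1EaseInOut(t) {
  return t < 0.5
    ? 2 * t * t
    : 1 - 2 * (1 - t) * (1 - t);
}
```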

Note that the configurator automatically saves your progress for later use. Feel free to close the page and come back to it later.

Timing, Activation and Transformation

Timing, Activation, and Transformation are concepts that come from our previous tutorial. Each one of them has its own list of types, and each type has its own set of options for you to explore.

You can explore them by changing the types, and expanding the respective options tab. When you swap one type for another, your previous set of options is saved in case you want to go back to it.



The timing function maps the activation into actual progress for each vertex. Without timing, the activation doesn’t get applied and all the vertices move at the same rate. Set timing type to none to see it in action.

  • SameEnd: The vertices have different start times, but they all end at the same time. Or vice versa.
  • sections: Move by sections, wait for the previous section to finish before starting.
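The “SameEnd” idea can be sketched in a few lines. This is a hypothetical illustration of the concept, not the effect’s actual implementation: each vertex’s activation value delays its start, but every vertex reaches full progress at the same time.

```javascript
// activation in [0, 1] delays the start; all vertices end at progress = 1.
function sameEndProgress(activation, progress) {
  const start = activation;
  const duration = 1 - start;
  if (duration === 0) return progress >= 1 ? 1 : 0;
  // Clamp so a vertex waits before its start time and stops at full progress.
  return Math.min(Math.max((progress - start) / duration, 0), 1);
}
```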

The same activation with a different timing will result in a very different result.


The activation determines how the plane is going to move to full screen:

  • side: From left to right.
  • corners: From top-left to bottom-right
  • radial: From the position of the mouse
  • And others.

For a visual representation of the current activation, toggle debug activation and start the animation to see it in action.


Transform the plane into a different shape or position over the course of the animation:

  • Flip: Flip the plane on the X axis
  • simplex: Move the vertices with noise while transitioning
  • wavy: Make the plane wavy while transitioning
  • And more

Some effects use a seed for their inner workings. You can set the initial seed and determine when it gets randomized.

Note that although these three concepts allow for a large amount of possible effects, some options won’t work quite well together.

Sharing your effect

To share the effect you can simply copy and share the URL.

We would love to see what you come up with. Please share your effect in the comments or tag us on Twitter using @anemolito and @codrops.

Adding your effect to your site

Now that you made your custom effect, it is time to add it to your site. Let’s see how to do that, step by step.

First, download the code and copy some of the required files over:

  • Three.js: js/three.min.js
  • TweenLite: js/TweenLite.min.js
  • ImagesLoaded: js/imagesloaded.pkgd.min.js (for preloading the images)
  • The effect’s code: js/GridToFullscreenEffect.js
  • TweenLite’s CSSPlugin: js/CSSPlugin.min.js (optional)
  • TweenLite’s EasePack: js/EasePack.min.js (optional; if you use the extra easings)

Include these in your HTML file and make sure to add js/GridToFullscreenEffect.js last.

Now let’s add the HTML structure for the effect to work. We need two elements:

  • div#App: Where our canvas is going to be
  • div#itemsWrapper: Where our HTML images are going to be
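In plain HTML, that structure can be as simple as the sketch below (the IDs match the ones used in this article; you can pick your own):

```html
<div id="App">
  <!-- the effect appends its canvas here -->
</div>
<div id="itemsWrapper">
  <!-- the HTML items (images plus captions) go here -->
</div>
```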


Note: You can use any IDs or classes you want as long as you use them when instantiating the effect.

Inside #itemsWrapper we are going to have the HTML items for our effect.

Our HTML items inside #itemsWrapper can have almost any structure. The only requirement is that it has two image elements as the first two children of the item.

The first element is for the small-scale image and the second element is the large-scale image.

Aside from that, you can have any caption or description you may want to add at the bottom. Take a look at how we did ours in our previous post:

<div class="item">
  <!-- Illustrative markup: the class name and image paths are placeholders -->
  <img src="img/1-small.jpg" alt="An image" />
  <img src="img/1-large.jpg" alt="An image" />
  <h2>Our Item Title</h2>
  <p>Our Item Description</p>
</div>


You may add as many items as you want. If you add enough items to make your container scrollable, make sure to pass your container in the options so the effect can account for its scroll.

With our HTML items in place, let’s get the effect up and running.

We’ll instantiate GridToFullscreenEffect, add our custom options, and initialize it.
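A minimal instantiation might look like the sketch below. The option names shown are illustrative placeholders; the configurator generates the exact options object for your effect, so paste that in instead.

```js
// Sketch only: replace the options with the ones exported by the configurator.
const effect = new GridToFullscreenEffect(
  document.getElementById("App"),          // container for the canvas
  document.getElementById("itemsWrapper"), // container of the HTML items
  {
    duration: 1.2,                         // assumed option names
    timing: { type: "sameEnd" },
    activation: { type: "side" },
    transformation: { type: "flip" }
  }
);
effect.init();
```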

Our effect is now mounted and working. But clicking on an item makes the image disappear and we end up with a black square.

The effect doesn’t take care of loading the images. Instead, it requires you to give them to the effect whenever they load. This might seem a bit inconvenient, but it allows you to load your images the way it’s most suitable for your application.

You could preload all the images upfront, or you could only load the images that are on screen, and load the other ones when needed. It’s up to how you want to do that.

We decided to preload all the images using imagesLoaded like this:

imagesLoaded(document.querySelectorAll("img"), instance => {
  // Make Images sets for creating the textures.
  // (Completed sketch: items alternate small/large images, as in the demo code.)
  let images = [];
  for (var i = 0, imageSet = {}; i < instance.images.length; i++) {
    const image = { element: instance.elements[i], image: instance.images[i].img };
    if (i % 2 === 0) imageSet = { small: image };
    else { imageSet.large = image; images.push(imageSet); }
  }
  effect.createTextures(images);
});

With that last piece of code, our effect is running and it shows the correct images. If you are having troubles with adding it to your site, let us know!

Our Creations

While working on this configurator, we managed to create some interesting results of our own. Here are three examples; you can append the parameters to the URL or use the settings:

Preset 1

See Here



Preset 2

See Here



Preset 3

See Here



Check out all the demos to explore more presets!

We hope you enjoy the configurator and find it useful for creating some unique animations!