Location data provider Factual is launching a new product called Data Enrichment, which supplements first-party data with additional audience insights built on mobile-location and real-world behaviors. The company says this allows companies to gain a deeper understanding of their own customers than first-party data would enable by itself.

CPG brands, automotive, media publishers. I spoke about the new offering with Scott Townsend, Factual’s Head of Data Enrichment, and with Factual CMO Brian Czarny. Townsend outlined several concrete use cases for the product, for CPG brands, automotive marketers and media publishers, among others.

Townsend explained that CPG brands often struggle to get good first-party data from the retailers that sell their products. Factual’s product can show these brands where their customers are shopping. “This helps with personalized messaging,” said Townsend. “The campaign can say, ‘the product is on sale at Costco,’ where the consumer is a frequent shopper.”  

Automotive brands can better understand the other car makers they’re up against and engage in-market shoppers visiting competitor lots. And media publishers, according to Townsend, “often struggle to get enough data to make their inventory competitive with the walled gardens. Enrichment provides additional insights for advertisers about publisher audiences,” so they can charge more for their inventory.

Growing appetite for location data among brands. Factual’s Czarny observed, “Brands want to have the complete picture of their customers,” and added that the company is seeing “incredible appetite for location data to build behavioral segments.”

Indeed, location data has been described as a “cookie for the real world.” That idea becomes more significant, and not just a marketing concept, as browsers shut down third-party cookies and privacy regulations make many types of data less available.

I asked Czarny and Townsend whether, as more brands seek out location data for insights and segmentation, CCPA or mobile OS location alerts will affect the availability or accuracy of that data. Czarny predicted that iOS 13 would likely have a bigger impact than CCPA in that regard.

Indeed, the Wall Street Journal previously reported, “Since iOS 13 was released in September, tens of millions of people have moved to block apps’ ability to track their locations when not in use.”

But Czarny said that Factual’s access to data had not been significantly impacted. “Near term, iOS 13’s impact hasn’t been great for us because we responsibly source the data.” He predicted that wouldn’t be the case for publishers and mobile apps that ask for location without delivering a clear benefit to the user.

Why we care. As privacy forces marketers and brands to develop or better utilize their first-party data, data enhancement offerings like Factual’s will become more important. The audience insights gained from location data are an obvious and powerful data-enhancement tool. It remains to be seen, however, how deeply the location intelligence industry is affected by CCPA, OS location alerts and increasing consumer privacy concerns surrounding location.

About The Author

Greg Sterling is a Contributing Editor to Search Engine Land, a member of the programming team for SMX events and the VP, Market Insights at Uberall.


All the signs are that 2020 is going to be a turbulent, challenging year for marketers — digital marketers in particular. That’s the cold reality. But like a New Year’s Day polar bear plunge, a dose of cold reality can be quite bracing and help provide sharpness and clarity. In that spirit, here are three predictions to help you clear your head for the new year.

1. The Cold Reality: Digital will become the new “too big to fail”

I’ve been warning for years that digital would trigger unwanted government oversight if it didn’t get its act together on privacy. Still, I don’t think the government will allow digital to collapse entirely, for the same reason that the government kept the financial sector afloat after the 2008 meltdown. There are simply too many jobs at stake.

Almost all of the revenue at Google and Facebook comes from digital ad dollars. Those companies, along with the remainder of the top five in digital ad revenue — Amazon, Microsoft and Verizon — all wield enormous clout. They will continue to not just survive, but also thrive.

The Hot Take: Most consumers have reasonable expectations

The truth is that most consumers expect a certain amount of their personal information to be shared, and most don’t mind — when it’s done within reason. So I’ll leave you with a dash of optimism. Although the process will be painful, I think eventually regulators will strike a balance between providing reasonable consumer protection and maintaining the status quo in digital.

2. The Cold Reality: TV will never regain its dominance

Another institution once considered “too big to fail” might not be. Just look what’s happening in television. As digital ad spend continued to soar, TV ad spend declined by 2.2% in 2019. And while eMarketer notes that “The presidential election next year will propel TV ad spending back into positive growth, before falling again in the following years,” I’d say even that “positive growth” should come with an asterisk (see my next prediction).

The Hot Take: Even in its diminished state, TV can remain a major player

As TV holds steady in 2020 (and probably 2021), it should gird for a pivotal moment in 2022. That’s when NFL broadcast rights come up for renewal.

To date, the one area where TV has been able to hold its own against digital is in live programming, sports in particular. But if one of the major streaming services makes a concerted bid to carry NFL games — look out. 

I have no doubt that TV execs are well aware of the stakes. With two years to prepare, expect them to hold onto those NFL rights at all costs.

3. The cold reality: Digital ads will provide the biggest bang for the political buck

For another measure of TV’s declining influence, let’s take a quick backward glance. In its postmortem on the 2016 election, Fortune reported that Hillary Clinton “placed a far greater emphasis than Donald Trump on television advertising, a more traditional way of reaching swaths of voters. She spent $72 million on TV ads and about $16 million on internet ads in the final weeks.”

Back in April of 2016, I wrote in this space that “candidates have discovered the quickest way to make news is to put out a statement or comment in a social media post.” I noted that Trump, in particular, had mastered the art of using social media as a bully pulpit to generate millions of dollars’ worth of media coverage — for free.

Since then, much has been made of the role that Facebook ads played in the outcome of the 2016 election. Well, Facebook could conceivably play an even bigger role in 2020 — depending on how long it continues to resist efforts to fact-check political ads.

It’s all about precision targeting — and digital continues to rule on that front. Recently the Trump reelection campaign launched 338 new Facebook ads in one day, most aimed at people 56 and older. Look for those microtargeted Facebook ads in swing states to potentially play a huge role in the 2020 election.

The Hot Take: The volatility surrounding online political ads could be TV’s salvation

No, television can’t touch digital’s precision targeting. The problem for digital is that all that precision targeting, without much accountability, has made digital itself a potential target — of lawmakers spooked by that unchecked influence. We could still see blowback in the form of legislation before Election Day.

In the meantime, it’s not as if TV will lose its influence entirely. When you recall the 2016 election, most of the pivotal moments occurred on TV during debates and other live events. As in sports, live political coverage will remain TV’s trump card. (Sorry — I had to say it.)

No news is good news

I know I’ve painted a pretty gloomy picture here. But that’s the nature of news — you hear a lot more about things that go wrong than the things that go right.

Here’s the flipside: You didn’t hear much about the vast majority of brands and their marketing partners in 2019 because they continue to work quietly to ensure profitability while practicing corporate responsibility and respect for the consumer. And the good news is, I expect that trend to quietly continue throughout 2020.

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

Lewis Gersh is founder and CEO of PebblePost, guiding corporate strategy and company vision with over 20 years of board and executive management experience.

Prior to PebblePost, Lewis founded Metamorphic Ventures, one of the first seed-stage funds, and built one of the largest portfolios of companies specializing in data-driven marketing and payments/transaction processing. Portfolio companies include leading innovators such as FetchBack, Chango, Tapad, Sailthru, Movable Ink, Mass Relevance, iSocket, Nearbuy Systems, Thinknear, IndustryBrains, Madison Logic, Bombora, Tranvia, Transactis and more.

Lewis received a B.A. from San Diego State University and a J.D. and Masters in Intellectual Property from UNH School of Law. Lewis is an accomplished endurance athlete having competed in many Ironman triathlons, ultra-marathons and parenting.


The emergence of AI-powered martech has given marketers a lot to grapple with over the last couple of years. Some have voiced fears of being replaced by automation. Others are excited by the potential time and cost savings.

But with all the hype around new technology, we may have overlooked the most exciting part of the AI revolution.

As traditional marketing shifted into digital marketing over the last decade, we’ve drifted further and further away from the human-to-human interactions that defined customer experience in prior decades.

We’ve turned people into prospects. Views into impressions. We’ve commoditized customers by automating our funnels and flywheels. By scaling our digital activities, we’ve lost a bit of the human touch.

It takes a ton of time and lots of tools to manage a digital marketing campaign. We have to leverage automation to drive results. But the machinery we build can feel cold and impersonal and end up damaging the customer experience.

AI promises to streamline our efforts, automating the repetitive tasks we grind through today, performing massively complex calculations so we don’t have to.

By replacing the technical grind of digital marketing, AI will free marketers to focus on the part of the job that requires the most humanity – engaging and serving customers.

Conversational marketing, data-driven personalization, AI and other trends predict a future where marketers have the time, and the mandate, to focus on real human interactions again. What a time to be a marketer!

Soapbox is a special feature for marketers in our community to share their observations and opinions about our industry. You can submit your own here.

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

Nathan Binford is the vice president of marketing at MarketChorus, producers of AI-powered solutions for content marketers and publishers. Find more from Nathan on the MarketChorus blog and his personal blog, Inbound Marketing Best Practices.


Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.

In this tutorial, we’ll learn what testing is, the different types of testing, and then we’ll use Puppeteer to perform end-to-end testing on our application. By the end of this tutorial, you should be able to end-to-end test your apps easily with Puppeteer.


For this tutorial, you need a basic knowledge of JavaScript, ES6 and Node.js.

You must also have installed the latest version of Node.js.

We’ll be using yarn throughout this tutorial. If you don’t have yarn already installed, install it from here.

You should also know the basics of Puppeteer. To understand the basics of Puppeteer, check out this simple tutorial.

To make sure we’re on the same page, these are the versions used in this tutorial:

  • Node 13.3.0
  • npm 6.13.2
  • yarn 1.21.1
  • puppeteer 2.0.0
  • create-react-app 3.3.0

Introduction to Testing

In simple terms, testing is the process of evaluating whether an application works as expected. It helps in catching bugs before your application gets deployed.

There are four different types of testing:

  1. Static Testing: uses a static type system like TypeScript, ReasonML, Flow or a linter like ESLint. This helps in catching basic errors like typos and syntax mistakes.
  2. Unit Testing: the smallest part of an application, also known as a unit, is tested.
  3. Integration Testing: multiple related units are tested together to see if the application works perfectly in combination.
  4. End-to-end Testing: the entire application is tested from start to finish, just like a regular user would, to see if it behaves as expected.

The testing trophy by Kent C Dodds is a great visualization of the different types of testing:

Testing Trophy - Kent C Dodds

The testing trophy should be read bottom-to-top. If you perform these four levels of testing, you can be confident enough with the code you ship.

Now let’s perform end-to-end testing with Puppeteer.

End-to-end Testing with Puppeteer

Let’s bootstrap a new React project with create-react-app, also known as CRA. Go ahead and type the following in the terminal:

$ npx create-react-app e2e-puppeteer

This will bootstrap a new React project in an e2e-puppeteer folder. Thanks to the latest create-react-app version, this also installs testing-library by default so we can test our applications easily.

Go inside the e2e-puppeteer directory and start the server by typing the following in the terminal:

$ cd e2e-puppeteer
$ yarn start

It should look like this:

React Init

Our App.js looks like this:

import React from 'react';
import logo from './logo.svg';
import './App.css';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>Edit <code>src/App.js</code> and save to reload.</p>
        <a className="App-link" href="https://reactjs.org" target="_blank" rel="noopener noreferrer">Learn React</a>
      </header>
    </div>
  );
}

export default App;

We’ll be testing the App.js function and the code will be written in App.test.js. So go ahead and open up App.test.js. It should have the following content:

import React from 'react';
import { render } from '@testing-library/react'; // 1
import App from './App';

test('renders learn react link', () => { // 2
  const { getByText } = render(<App />); // 3
  const linkElement = getByText(/learn react/i); // 4
  expect(linkElement).toBeInTheDocument(); // 5
});

Here’s what’s happening in the code above:

  1. We import the render function from the @testing-library/react package.
  2. We then use the global test function from Jest, which is our test runner installed by default through CRA. The first parameter is a string which describes our test, and the second parameter is a function where we write the code we want to test.
  3. Next up, we render the App component and destructure a method called getByText, which searches for all elements that have a text node with textContent.
  4. Then, we call the getByText function with the text we want to check. In this case, we check for learn react with the case insensitive flag.
  5. Finally, we make the assertion with the expect function to check if the text exists in the DOM.

This comes by default when we bootstrap with CRA. Go ahead and open up another terminal and type the following:

$ yarn test

When it shows a prompt, type a to run all the tests. You should now see this:

React Init Test

Now let’s test this application with end-to-end testing.

Testing the Boilerplate with Puppeteer

Go ahead and install puppeteer as a dev dependency by typing the following in the terminal:

$ yarn add -D puppeteer

Now open up App.test.js and paste the following:

import puppeteer from "puppeteer"; // 1

let browser;
let page;

// 2
beforeAll(async () => {
  browser = await puppeteer.launch({
    headless: false
  });
  page = await browser.newPage();
  await page.goto("http://localhost:3000/");
});

// 3
test("renders learn react link", async () => {
  await page.waitForSelector(".App");

  const header = await page.$eval(".App-header>p", e => e.innerHTML);
  expect(header).toBe(`Edit <code>src/App.js</code> and save to reload.`);

  const link = await page.$eval(".App-header>a", e => {
    return {
      innerHTML: e.innerHTML,
      href: e.href
    };
  });
  expect(link.innerHTML).toBe(`Learn React`);
  expect(link.href).toBe(`https://reactjs.org/`);
});

// 4
afterAll(() => {
  browser.close();
});

This is what we’re doing in the code above:

  1. Firstly, we import the puppeteer package and declare some global variables, browser and page.
  2. Then we have the beforeAll function provided by Jest. This runs before all tests are run. Here, we launch a new Chromium browser by calling puppeteer.launch(), while setting headless mode to false so we see what’s happening. Then, we create a new page by calling browser.newPage() and then go to our React application’s URL http://localhost:3000/ by calling the page.goto() function.
  3. Next up, we wait for the .App selector to load. When it loads, we get the innerHTML of .App-header>p selector by using the page.$eval() method and compare it with Edit src/App.js and save to reload.. We do the same thing with the .App-header>a selector. We get back innerHTML and href and then we compare them with Learn React and https://reactjs.org/ respectively to test our assertion with Jest’s expect() function.
  4. Finally, we call the afterAll function provided by Jest. This runs after all tests are run. Here, we close the browser.

This test should automatically run and give you the following result:

E2E Test Puppeteer Basic

Let’s go ahead and make a counter app.

Converting the Boilerplate to a Counter App

Firstly, edit some CSS by changing App.css to the following:

.header {
  font-size: 56px;
  text-align: center;
}

.counter-app {
  display: flex;
  justify-content: space-around;
}

button {
  background-color: navajowhite;
  font-size: 32px;
}

.count {
  font-size: 48px;
}

Now change App.js to the following:

import React, { useState } from "react";
import "./App.css";

function App() {
  const [count, setCount] = useState(0);
  return (
    <div className="App">
      <h1 className="header">Counter</h1>
      <div className="counter-app">
        <button className="increment" onClick={() => setCount(count + 1)}>Increment</button>
        <div className="count">{count}</div>
        <button className="decrement" onClick={() => setCount(count - 1)}>Decrement</button>
      </div>
    </div>
  );
}

export default App;

Here, we’re making a simple counter application with two buttons, Increment and Decrement. Pressing the Increment button increases the counter by 1, and pressing the Decrement button decreases it by 1. It looks like this:

React Counter

Testing the Counter App with Puppeteer

Now change the App.test.js to the following:

import puppeteer from "puppeteer";

let browser;
let page;

beforeAll(async () => {
  browser = await puppeteer.launch({
    headless: false
  });
  page = await browser.newPage();
  await page.goto("http://localhost:3000/");
});

// 1
test("renders counter", async () => {
  await page.waitForSelector(".header");

  const header = await page.$eval(".header", e => e.innerHTML);
  expect(header).toBe("Counter");
});

// 2
test("sets initial state to zero", async () => {
  await page.waitForSelector(".counter-app");

  const count = await page.$eval(".count", e => e.innerHTML);
  expect(count).toBe("0");
});

// 3
test("increments counter by 1", async () => {
  await page.waitForSelector(".counter-app");

  await page.click(".increment");
  const count = await page.$eval(".count", e => e.innerHTML);
  expect(count).toBe("1");
});

// 4
test("decrements counter by 1", async () => {
  await page.waitForSelector(".counter-app");

  await page.click(".decrement");
  const count = await page.$eval(".count", e => e.innerHTML);
  expect(count).toBe("0");
});

afterAll(() => {
  browser.close();
});

Here, we keep the beforeAll and afterAll functions the same as before: we initialize a browser and go to http://localhost:3000/ in beforeAll, and we close the browser in afterAll. Then, we do the following:

  1. We check if the text Counter is rendered. For that, we wait for the .header selector to load. Then we use page.$eval() to get the innerHTML of .header selector. And then we finally make the assertion to check if Counter is rendered.
  2. Next, we check if the initial state is zero. We wait for the .counter-app selector to load. Then we get the innerHTML from the .count selector. We finally compare if the count is 0. Notice that we’re using a string while our state is a number. This is because innerHTML always returns a string.
  3. Here, we check if clicking the button increments the state by 1. First, we wait for the .counter-app selector to load. We then click on the .increment button. This should increase the state from 0 to 1. We then get the innerHTML from the .count selector. Then we compare it to 1, as our increment function should always increase state by 1.
  4. The decrement button should decrease the state by 1. It works the same way as the increment button. First, we wait for the .counter-app selector to load. We then click on the .decrement button. This should decrease the state from 1 to 0. Notice that the state was 1 after we clicked the increment button. We then get the innerHTML from the .count selector. Then we compare it to 0, as our decrement function should always decrease state by 1.

The result should now look like this:

E2E Test Puppeteer Counter


In this tutorial, we learned about different types of testing — static testing, unit testing, integration testing and end-to-end testing. We then performed end-to-end testing on our boilerplate, bootstrapped with the help of create-react-app.

Later, we converted the app to a counter application. And finally we performed end-to-end testing on the counter application.

The Puppeteer library is useful not only for performing end-to-end testing but also for many other kinds of browser automation. Puppeteer is backed by Google and actively maintained, so be sure to check its docs to understand the wide-ranging use cases it offers.

You can find the code for this tutorial on GitHub.

Akshay is a creator, computer artist and micropreneur from Mumbai.


Today, Google shared an updated timeline for when Chrome apps will stop working on all platforms. They’ll be gone for good by June 2022, but exactly when they stop working depends on which platform you’re on (via 9to5Google). Previously, we knew that Chrome apps would someday stop working on Windows, macOS, and Linux, but today, Google revealed that Chrome apps will eventually stop working on Chrome OS, too.

A Chrome app is a web-based app that you can install in Chrome that looks and functions kind of like an app you’d launch from your desktop. Take this one for the read-it-later app Pocket, for example — when you install it, it opens in a separate window that makes it seem as if Pocket is functioning as its own app.

You probably don’t need to worry about the death of Chrome apps messing up your browsing experience too much. At this point, most apps on the web are just regular web apps, which is why you’ll be able to keep using Pocket without issue in much the same way by navigating to https://getpocket.com/. In rarer cases, you might also be using Progressive Web Apps, which are basically websites that are cached to your device so they can have some offline functionality and be launched like an app. Some Chrome apps you have installed may already redirect to websites, like many of Google’s apps. And Chrome extensions are also different from Chrome apps, and those will keep working just fine.

There’s a pretty decent chance you’re not using any real Chrome apps at all, even if you use web apps all the time. When Google first announced all the way back in 2016 that it would end support for Chrome apps on Windows, macOS, and Linux, it said approximately one percent of users on those platforms were actively using packaged Chrome apps. That was nearly four years ago, and web developers have moved on.

If you do use Chrome apps, they will stop working much sooner on Windows, macOS, or Linux than they will on Chrome OS. Here’s Google’s timeline:

March 2020: Chrome Web Store will stop accepting new Chrome Apps. Developers will be able to update existing Chrome Apps through June 2022.

June 2020: End support for Chrome Apps on Windows, Mac, and Linux. Customers who have Chrome Enterprise and Chrome Education Upgrade will have access to a policy to extend support through December 2020.

December 2020: End support for Chrome Apps on Windows, Mac, and Linux.

June 2021: End support for NaCl, PNaCl, and PPAPI APIs.

June 2021: End support for Chrome Apps on Chrome OS. Customers who have Chrome Enterprise and Chrome Education Upgrade will have access to a policy to extend support through June 2022.

June 2022: End support for Chrome Apps on Chrome OS for all customers.

To break that down a bit:

  • At some point in June 2020, Chrome apps will stop working on Windows, macOS, and Linux, unless you have Chrome Enterprise or Chrome Education Upgrade, which lets you use Chrome apps for six more months.
  • If you’re on Chrome OS, Chrome apps will work until June 2021. Again, if you have Chrome Enterprise or Chrome Education Upgrade, Google says you can use Chrome apps for an additional year.

Originally, Chrome apps were supposed to stop working on Windows, macOS, and Linux in early 2018, but in December 2017, when Google removed the Chrome apps section from the Chrome Web Store, it pushed that early 2018 deadline to an unspecified date in the future. Now, more than three years later, we finally know when Chrome apps won’t work on those platforms — and when they won’t work on any platform at all.


With Polypane, we want to give you better insights into your site and make the entire developer/designer workflow faster. With Polypane 2.1, we’ve made some huge improvements for both of those goals.

What’s new?

Quick list of the major new features:

  • Live CSS Edit all panes at the same time
  • Social media previews See what your page looks like when shared on Facebook, Slack, Twitter and LinkedIn.
  • Meta info Get a full overview of all your meta tags
  • Handoff / browse Use Avocode, Zeplin and more directly in Polypane
  • Workspaces UI Quickly switch between your favorite pane sets

Beyond that, we also added network throttling, new and improved overlays, better indicators, ways to detect when your site is shown in Polypane, speed improvements, and many more smaller features.

Live CSS Panel

Write CSS and SCSS that is applied to all panes at the same time. With the Live CSS panel, quickly trying out new styling for your site at multiple sizes is super easy and very satisfying.

The CSS editor is fully featured, knows all CSS declarations and will suggest the appropriate ones as you type, so writing CSS is super fast.

Live CSS comes with an element selector. It lets you click on any element in any pane; Polypane then goes through the CSS, finds the actual selectors for that element and offers them as suggestions in the editor. Selecting elements to write CSS for is super quick and doesn’t require you to open devtools to find them.

Lastly, all of the Google fonts are available when you write Live CSS, so trying out new fonts is as easy as saying font-family: Aladin. Polypane will automatically load in the fonts for you.

More on the Live CSS panel

Meta information panel

The meta information panel shows you all the information that’s defined in your <head>: your title, description and favicon, but also all your meta tags, viewport declaration, language and other information. This makes it super easy to spot missing info or typos.

Meta info for Spiritapp.io

The meta information panel also gives you previews of the social cards of Twitter, Facebook, Slack and Linkedin, as well as the Google search result. For Twitter and Slack, we also support their dark mode.

In developing this feature, we found out that none of the official social card previews of Twitter, Facebook and Linkedin accurately showed what your card was going to look like. They’re all out of date!

Additionally, despite their documentation claiming otherwise, all sites use whatever meta information is available. So we painstakingly reverse engineered the way the social cards were actually rendered and we replicate that with pixel-perfect accuracy.

Our generated social previews: Twitter, Facebook, Linkedin & Slack (clockwise.)

Social previews for Doka.js. Clockwise: Twitter, Facebook, Linkedin, Slack
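For reference, these social cards are driven by Open Graph and Twitter Card meta tags in the page’s head; a typical sketch (the URLs and values here are illustrative, not from Polypane):

```html
<!-- Illustrative example of the tags social cards are built from -->
<meta property="og:title" content="Polypane 2.1" />
<meta property="og:description" content="Live CSS, social previews and more" />
<meta property="og:image" content="https://example.com/card.png" />
<meta name="twitter:card" content="summary_large_image" />
```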

More on the Meta information panel

Handoff/browse panel

An important aspect of modern development is handoff: tools that take a design and then display the CSS and dimensions of elements and let you export images, so you don’t have to click around in a design tool to figure all that stuff out.

Avocode, Figma, Invision, Marvel, Zeplin, Adobe XD

With the handoff panel, you can use all these handoff tools directly inside of Polypane. Your design spec and the site are always side-by-side.

We support a number of handoff tools natively, like Avocode, Invision and Zeplin, but you can also fill in a custom URL.

A custom URL you say? Does that mean you put a browser in a browser?

Well… Yeah!

You can also use the Custom URL option to keep open any reference that you’re working with, like MDN, CanIUse, the React documentation or any API documentation.

Custom URL showing React documentation

More on the Handoff/browse panel

Workspaces panel

Workspaces were introduced in Polypane 1.1 as a way to save sets of panes using shortcuts or the window menu. Now they have a visual interface.

The workspaces panel contains a visual overview of all 9 workspaces (with a preview) and lets you easily save and restore them.

New in the workspaces panel is that you can name your workspaces, so you no longer have to remember whether, for example, your Android devices were in workspace 1 or 2.

Where you can find all these new features: the side panel

With the side panel, Polypane gains a new place to add functionality and show site information that is not so easily surfaced in other browsers, but still super important.

You can dock the side panel either on the right or on the bottom so you can make it fit your favorite screen configuration.

Side panel

New and updated overlays

In Polypane 2 we introduced overlays: simulators and debuggers you could overlay on a pane. These simulators let you quickly check accessibility issues and simulate things like color blindness, or viewing your site in bright sunlight. In Polypane 2.1, we’ve added more overlays and improved existing ones.

New and improved overlays

There is a new Z-index overlay (top left) that highlights which elements have a defined z-index. It’s based on the plugin that Addy Osmani wrote for Visbug, with UI improvements that we’re contributing back to that project as well. We’ve improved readability and also show the z-index stack for each element that has one.

We have two new visual impairment simulators: Glaucoma (top center) and Cataracts (top right). Both of these eye conditions blur and dull your vision.

The color contrast checker (bottom left) now works better when backgrounds are defined for ancestors, and no longer needs a reload.

Bright sunlight (bottom center) now has less of a blowout and an additional glare to simulate the reflections that the glass on a device creates.

We have a new Night mode (bottom right) that simulates the way your page looks when Night mode is active, where screens dial down the blue tones and brightness.

Network throttling

Throttling selector

In Isolate panes mode you can now throttle your network connection alongside emulating devices, to test how a page behaves in more realistic settings. We currently have four settings: Online, Mid-tier mobile, Low-end mobile and Offline.

More on Network throttling

Detecting Polypane

If you’re developing your site, you might want to show additional debug information or test different variants of your page in different panes. Starting in Polypane 2.1 we offer two ways to detect your site running in Polypane, through our User agent and through a __polypane property on the window.

Read about how to detect Polypane
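
As a rough sketch (not Polypane’s official snippet), detection could look like this in page code. The “Polypane” user-agent token is an assumption here, so verify it against your own navigator.userAgent when running in Polypane:

```javascript
// Sketch of Polypane detection using the two mechanisms described above.
// The "Polypane" user-agent substring is an assumption; the
// window.__polypane property is the more direct signal.
function isPolypane(userAgent, win) {
  // Prefer the window property when it is present.
  if (win && win.__polypane) return true;
  // Fall back to sniffing the user-agent string.
  return /polypane/i.test(userAgent || "");
}

// Typical usage in a page:
// if (isPolypane(navigator.userAgent, window)) { /* show debug info */ }
```

Passing the user agent and window in as arguments (rather than reading the globals directly) keeps the check easy to test outside a browser.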

We switched out the native tooltips for all buttons for our custom ones, so they show up quicker and look better, making it easier to get started with Polypane.


All indicators active

Reference image, Overlays, Emulation and Devtools all have a blue dot when they’re active.

For Emulation, we show a yellow dot when the network is throttled, and if you have an error in your console, the Devtools icon will have a red dot. This will tell you at a glance if there is an issue you need to look at, without needing to open the devtools. Emulation and Devtools are available in Isolate Pane mode.

Screenshotting improvements

Each release we further tweak and improve the screenshots we create to get a better result. Polypane is already much better than Chrome, Firefox and nearly all online tools (check out the Screenshot comparison page to find out how) and we’re continuing to make the screenshots better for dynamic websites. Scripts now load better and animations are handled more consistently.

Performance improvements

We’ve rewritten parts of Polypane to make them faster and more performant. The synchronised scrolling is now an order of magnitude faster, and most of the messaging and handling of content is done asynchronously, making the UI more responsive.

Full changelog:

There are more new features, improvements and fixes in this release, so read through the full changelog below. All new features are fully documented in our docs too.

  • New Side panel
  • New Site meta information panel
  • New Live CSS panel
  • New Handoff/browse panel
  • New Workspaces panel
  • New New overlays: Z-index, Glaucoma, Cataracts and Night mode
  • New Network throttling
  • New Live reload delay option
  • New Detect Polypane from your site
  • Improvement Use custom tooltips for all buttons
  • Improvement Faster and more performant scroll syncing
  • Improvement Color contrast overlay has improved calculations
  • Improvement Color contrast overlay no longer needs a reload
  • Improvement Bright sunlight overlay now simulates glare
  • Improvement Updated emulated user agents
  • Improvement Better full page screenshotting support
  • Improvement Icons in the pane header now show activity dots
  • Improvement Pane devtools show red dot when there are errors
  • Improvement Clicking pane devtools icon will refocus it if already open
  • Improvement View mode and Filter buttons simplified
  • Improvement All popovers will stay inside window
  • Improvement Close all panes option added to the menu
  • Improvement Warning when opening more than 15 panes
  • Improvement Added the f6 shortcut to focus address bar
  • Improvement Tweak UI icons
  • Improvement Lower bound of generated breakpoints is now 320px
  • Fix Full mode no longer overflows screen
  • Fix Speed and responsiveness improvements
  • Fix Zoom-to-fit for panes works again
  • Fix Support multiple levels of imports for breakpoint detection
  • Fix No longer blocking scripts when making screenshots
  • Fix 404 page in full mode no longer overlays icons
  • Fix Prevent syncing of file inputs

Getting Polypane 2.1

Polypane will automatically update on Mac and Windows. Linux users need to download the new version from the download page, and if you’re on Mac or Windows but don’t want to wait for the update popup, you can find your download there as well.

If you don’t have Polypane yet, there is a free 14-day trial available. Get it here.


Although the SaaS market is growing exponentially, there is one big problem every SaaS business faces: customer retention rate. It is a metric that shows whether your marketing and customer care efforts are wasting your time and money or boosting your business. This is where product adoption steps in to offer an effective way to improve your business’s retention.

Let’s start with a short definition of product adoption, then continue with some exclusive clues that will help you increase it.

Sounds good? Let’s go!

What is product adoption?

Product adoption, by definition, is a process by which customers hear about a new product or a service and become recurring users of it. It is a crucial aspect of customer health and plays a primary role in customer success.

Increasing SaaS product adoption encourages your customers to explore new items and elements, and to discover new features of an existing product. It also enables them to become long-term users. For the most successful companies, higher adoption is indispensable for higher revenue.

SaaS start-ups in particular are highly familiar with the term product adoption, because they continually struggle with low retention rates and users who don’t come back after signing up, and they are always looking for a way to keep users around longer and increase lifetime value.

It’s fundamental to the SaaS business model: you have to sell your SaaS product to your customers every month. The product adoption process improves customer success by increasing average lifetime value and the trial-to-subscriber and free-to-paying conversion rates.

Amazing, like a Swiss Army knife for product teams, isn’t it?

Let’s dive into how to measure product adoption first, and then to how to increase it.

How to measure product adoption?

saas adoption rates

The obvious fact is that most software people do not actually have an adequate understanding of adoption.

Apart from the classic visitor-to-user and user-to-customer rates, measuring new feature adoption is a whole different area.

Let’s think about a scenario you are probably familiar with:

You worked for weeks on a cool new feature and finally launched it to all of your users. How many of them actually found it? Did they really start to use it? How actively did they use it? Was working on that new feature, instead of something else, actually a good call? How do you measure the success of this new feature, or of a product in general?

You need to carefully identify the points where users drop off and exit. If drop-off and exit rates are high, it is an obvious indicator that something is going wrong and an urgent call to fix it.
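
As a minimal sketch of that kind of funnel analysis (the step names and data shape here are hypothetical), you can compute the drop-off rate between each pair of consecutive onboarding steps from the number of users who reached each step:

```javascript
// Given ordered onboarding steps and the number of users who reached each
// one, compute the percentage of users lost between consecutive steps.
function dropOffRates(stepCounts) {
  const rates = [];
  for (let i = 1; i < stepCounts.length; i++) {
    const prev = stepCounts[i - 1];
    const curr = stepCounts[i];
    rates.push({
      from: prev.step,
      to: curr.step,
      // Users lost between the two steps, relative to the earlier step.
      dropOffPercent:
        prev.users === 0 ? 0 : ((prev.users - curr.users) / prev.users) * 100,
    });
  }
  return rates;
}

// Example: if 200 users sign up, 100 finish setup and 80 invite a teammate,
// the biggest leak is the drop between sign-up and setup.
```

The transition with the highest drop-off percentage is the step to investigate first.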

Don’t know what feature adoption is? Check out our article What is Feature Adoption and How to Increase It.

3 Product Adoption Metrics

There are three metrics that calculate product adoption rates and therefore help you measure the success of a new product.

Adoption Rate:

It is the ratio of new users to total users, expressed as a percentage. The equation for calculating the adoption rate is simple: Number of New Users / Total Number of Users × 100. For example, if you have 22 new users and 200 total users, your adoption rate is 22 / 200 × 100 = 11%. It can be calculated on a daily, weekly, monthly or yearly basis.

Time-to-first key action:

The average time it takes a new customer to use an existing feature, or an existing customer to use a new feature, for the first time.

Percentage of users who performed the core action for the first time:

The name of this metric clearly reveals its definition: it is the percentage of customers who performed a core action for the first time in a given period of time.
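
The three metrics above can be sketched in code. This is an illustrative sketch, not any particular analytics tool’s API; the event shape ({ userId, action, timestamp }) and the signup map are assumptions:

```javascript
// Adoption rate: new users as a percentage of total users.
function adoptionRate(newUsers, totalUsers) {
  return (newUsers / totalUsers) * 100;
}

// Helper: timestamp of a user's first occurrence of an action, or null.
function firstActionTime(events, userId, action) {
  const times = events
    .filter((e) => e.userId === userId && e.action === action)
    .map((e) => e.timestamp);
  return times.length ? Math.min(...times) : null;
}

// Time-to-first key action: average time from signup to first use,
// over the users who performed the action at all.
function timeToFirstKeyAction(signups, events, action) {
  const deltas = [];
  for (const [userId, signedUpAt] of Object.entries(signups)) {
    const first = firstActionTime(events, userId, action);
    if (first !== null) deltas.push(first - signedUpAt);
  }
  return deltas.reduce((a, b) => a + b, 0) / deltas.length;
}

// Percentage of users whose *first* core action fell inside the period.
function firstTimeCoreActionRate(userIds, events, action, start, end) {
  const count = userIds.filter((id) => {
    const first = firstActionTime(events, id, action);
    return first !== null && first >= start && first <= end;
  }).length;
  return (count / userIds.length) * 100;
}
```

In practice an analytics tool computes these for you, but the definitions above are what its dashboards boil down to.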

“Using a tool as a backup can be helpful.”

To monitor and measure SaaS adoption rates, you can employ tools that review the new-user onboarding funnel and analyze the steps users are having trouble with.

Analytics tools such as Mixpanel, Amplitude and Woopra are great for measuring product adoption, with customizable funnels and lots of helpful resources.

How to increase product adoption?

We need to attract customers who tend to actively and consistently use our products or services. No matter what business model we have, we can only achieve success when we give users unprecedented moments that make them say, “aha, this is what I’m looking for.” That suddenly turns our product or service into a core ingredient of customers’ work.

To clarify it, I want to define the product adoption process in 5 stages.

5 stages of the New Product Adoption Process:

Product adoption process

Every user goes through these stages in order, no matter what kind of product it is.

To increase your SaaS product adoption;

  • Follow the stages in the new product adoption process,
  • Detect insufficient points in each step carefully,
  • Enhance them immediately.

1 – Awareness (Introduction Stage): 

In the first stage of the new product adoption process, potential customers come to your website to learn about a product, but they don’t have sufficient knowledge about it yet.

Teaching customers can be helpful: prospects may not be aware of the existence or importance of a certain problem. Alternatively, customers may recognize the problem but not know the solution. Educating customers about either the problem or the solution can help build strong awareness.

An important step is making the product more recognizable so that customers become aware of it. Bringing new and differentiated features, low prices, sales and quality to the forefront, together with a smooth onboarding process, can be very helpful at this stage.

2 – Interest (Information-gathering Stage):

This is the stage where customers are attracted to the product and try to get more information about it.

Follow your customers’ steps closely and make sure you have strong customer support. Sending segmented emails will also increase product adoption at this stage.

3 – Evaluation (Consideration Stage):

At this stage, customers determine whether a product is worth trying or not.

Help your prospects evaluate your product objectively. Make them see the aspects that differentiate it from the alternatives.

4 – Trial (Sampling Stage):

Users try your product to see how well it meets their needs. This can be either a first purchase or a free trial period.

Offer free trials and a money-back guarantee to show that your product is worth adopting.

5 – Adoption / Rejection (Buy or Not Buy Stage):

Prospects determine whether your SaaS product has value and decide whether or not to adopt it. In the last stage of the new product adoption process, customers proceed from a cognitive state (being aware and informed) to an emotional state (liking and preferring) and finally to a behavioral, or conative, state (deciding and purchasing).

An Example Case of the New Product Adoption Process from Real Life:

Let’s assume you are walking down a street near your home:

  1. You see a billboard announcing that a new pizza restaurant, Alican Pizza, has opened near your home. (Awareness)
  2. When you get home, you look for information on the internet to learn more about Alican Pizza’s menu and prices. (Interest)
  3. You consider whether or not you want to try it. (Evaluation)
  4. You decide to try the pizza on a small scale – one slice or a small size – to estimate its value. (Trial)
  5. You conclude that it is delicious and you want to become a long-term customer of Alican Pizza. (Adoption)

Diffusion of Innovations Theory: The Product Adoption Curve

Have you ever noticed that some people adopt new products or behaviors sooner than others? In 1962, Everett Rogers, a professor of rural sociology, developed a theory called the diffusion of innovations to explain the product adoption curve.

Rogers found that individuals within any society fall into one of five different adopter groups based on how early or how late they adopt an innovation. While explaining the product adoption curve, Rogers’ theory tells us that if you want to promote the widespread adoption of a new product, you need to market each adopter group differently using distinct communication channels and messages.

The Innovators (2.5%)

Innovators are a small but very important group because they are always the first to learn about and adopt an innovation.

The Early Adopters (13.5%)

The early adopters are also a small forward-thinking group and are often highly respected as opinion leaders.

The Early Majority (34%) 

The early majority takes time to make decisions. They will observe others’ experiences and will only adopt a product once they are convinced it has real benefits and that it is the new status quo.

The Late Majority (34%)

The late majority is more resistant to change but they are very responsive to peer pressure. They want innovations to be very well tested.

Laggards (16%)

Laggards are highly unwilling to change, and they can also be hard to reach with marketing campaigns because they often have very minimal exposure to media.

2 Ways to Improve the Product Adoption Process

1 – Make Your Support More Supportive

Customers often have trouble figuring out how exactly your product works. They have limited time, and there are a lot of alternatives on the market, so they don’t want to spend much time understanding your product. This is a huge obstacle to retention.

Customer support has the power to move customers to the next step with your product. Offering in-app live chat, embedding videos and creating interactive guides are great ways to improve SaaS adoption rates.

2 – Improve Your Onboarding

Effective onboarding teaches customers how to use the product successfully without external help. Through successful onboarding you can show the value of your product, and it also helps users find their “Aha!” moments more easily.

A “Product Adoption Software” can be a very effective solution

You don’t want your developers to spend hours building product adoption guides. It takes too much time and considerable effort. Save your development team’s time and don’t waste your budget.

Product adoption software helps your users get to grips with websites and web apps through interactive guides that you create for them. It is the easiest and most cost-effective way: you don’t need a big team or a big budget to make guides; even your interns can do it in a couple of minutes.

Moreover, product adoption software lets you follow the stages of the product adoption process and provides analytics that allow you to make objective evaluations easily.

For a longer answer to the question “Why shouldn’t I build onboarding walkthroughs in-house?”, check out our article: Onboarding Walkthroughs Are Hard.

Frequently Asked Questions

What are the five groups of the new product adoption curve?

The Innovators, The Early Adopters, The Early Majority, The Late Majority and the Laggards. All five groups are explained in our article.

What is the most efficient way to increase product adoption?

Product adoption software saves your developers’ time and your budget, and lets your team follow the stages of the product adoption process.

What metrics should I follow to measure the success of product adoption?

The adoption rate of the product, time-to-first key action, and the percentage of users who have reached the “aha!” moment.


It wasn’t that long ago that Instagram was flooded with saturated filters and low-resolution photos. But then the gaudy, maximalist look of the 2000s faded out of style and was replaced with an interest in clean lines and mature color palettes. Seemingly overnight, the platform became an ode to minimalism—filled with interior design and lifestyle posts from influencers anchored by organic, nautilus-shaped forms and eggshell-colored walls. Everything on the grid was carefully curated to be monochromatic, uncluttered, and uniform.

[Cover Image: Tree Abraham/courtesy Bloomsbury]

Minimalism has been eagerly adopted as an aesthetic by Instagram users and pretty much everyone else not on the social media application, too. Marie Kondo teaches us that minimalism is getting rid of anything that does not spark joy. Other influencers (and brands) suggest that it’s having a hyper-curated closet of a few basics, or a simple skincare routine featuring only three all-natural products. Minimalism has become a visual manifestation of “wellness”—a lifestyle trend rooted in conspicuous consumption.

But this loose misinterpretation belies its roots as a decades-old architecture and design philosophy. In his new book, The Longing for Less: Living with Minimalism, out from Bloomsbury January 21, culture critic Kyle Chayka investigates how we’ve veered away from minimalism’s true origins, and converted it into—what can be reduced to—a “look.” Here, Chayka helps dispel the four biggest myths of minimalism.

Minimalism has deeper roots than you think

Minimalism’s recurrence as an idea, in both society and art, reveals the philosophy’s central paradox: It is a quiet celebration of space, but bold in the way its simplicity overwhelms. “In the time right after World War II, minimalism was a popular aesthetic because it’s a perfect, utopian style that everyone can access,” Chayka says in a phone interview. Soon after, in the 1970s, the idea of “simple living” began to take hold, which is the last time eco-conscious consumer practices (less consumption, more self-reliance) were as in vogue as they are today. “I think the internet and social media and the financial crisis is what really caused the super popularity of minimalism this time around,” Chayka says.

Minimalism is not just a trendy style

It’s not difficult to imagine why we, as a society, long for less. Our lives are dominated by dizzying screens, which have forced us to prioritize images over the humanness of real life. “So much of our visual experience is on the internet now. That’s the container of our experience,” Chayka says. “And so it makes sense that the spaces we occupy would be very simple because we spend so much time on our phones.”

In an attempt to counteract the harm technology has done to our ability to focus, rest, and enjoy experiences, people have adopted minimalism as a visual aesthetic. It’s blank, inoffensive, natural. It’s even been marketed as a form of self-help.

Donald Judd, 15 Untitled Works in Concrete, 1980-1984. [Photo: Flickr user Nan Palmero]

But according to Chayka, “minimalism is about experiencing the world directly and engaging with your surroundings.” Consider Agnes Martin’s austere canvases or Donald Judd’s spacious constructions in Marfa, Texas. In architecture, minimalism has roots in Japan, where “there’s a real interest in very refined textures and creating experiences with light and shadow—an architecture of ephemerality that modernism doesn’t really have,” Chayka says. In short, there was once a spirituality to minimalism that has been lost in its current expression. “The style now seems more like numbing yourself and creating a protective environment,” Chayka says.

Minimalism is not morally superior

Minimalism these days has an aura of moral superiority. “Minimalism has always been associated with moral purity or a sense of existing outside of society, whether that’s during the midcentury modern movement or the Voluntary Simplicity Movement of the ’70s,” Chayka says. “The problem with luxury minimalism today is that the style is associated with moral purity and outsiderness but it’s being adopted by the most insider people possible—wealthy women and tech billionaires. The style of minimalism [we see today] is a reality that’s not very minimal at all.” Clearing out one’s home for the sake of more space is not radical if there’s a financial safety net in place to buy it all back again, if one should so choose. (Steve Jobs’s uniform of black turtlenecks and jeans was not minimalist as much as it was a decision to not be burdened with variety.) So the suggestion that someone owning fewer objects is healthier and more put-together overlooks the fact that participating in the trend is less about the inward journey than it is about appearances. Nothing morally superior about that.

Eames House interior, 1952. [Photo: © Eames Office LLC/courtesy Bloomsbury]

Minimalism is not a commodity

Today’s Instagram-ready minimalism couldn’t have been born anywhere other than in the United States. “I think the commodification of minimalism has been very American,” Chayka says. “The idea of an entirely minimalist lifestyle is deeply American . . . we consume everything to excess, even minimalism.” Home organization entrepreneur Marie Kondo seems to have tapped into this American Achilles’ heel; her pivot to selling home goods reflects a genius awareness that consumers are eager to buy objects that represent an ideology, even though they are a shallow appropriation of it. This makes minimalism’s success on Instagram plain, too; it is now an element deeply embedded into a platform that has become synonymous with a certain brand of conspicuous consumption.

Inside a room at Yumiya Komachi in Kyoto. [Photo: Kyle Chayka/courtesy Bloomsbury]

What’s next

Sometime soon, the minimalism trend will likely slip out of the mainstream consciousness again, just as it has in the past. “I think we’ve hit peak minimalism and [are now moving] past it . . . minimalism is a trend and a style and it comes and goes in waves. We start obsessing over it and then find out that it doesn’t solve our problems,” Chayka says. For most people, minimalism is simply not a realistic lifestyle, because the very structure of our capitalist society relies on constant consumption and an attitude of overindulgence. To put it simply, minimalism—as it exists in the culture today—is a privilege. “It’s the difference between an Apple Store and a Zen temple,” Chayka says. “The Apple Store never changes—there’s perfectly clean glass and steel and empty space. But if you think of the rock garden in the Zen temple, it’s always changing and moving with time . . . it’s more interesting and sustainable than creating something that never changes.”

Buy The Longing for Less: Living with Minimalism, by Kyle Chayka, designed by Tree Abraham, Elizabeth Van Itallie, Mia Kwon, and Patti Ratchford for Bloomsbury on Amazon.


New Logo and Identity for GoDaddy done In-house



Established in 1997, GoDaddy is a domain registrar and web hosting company. A staggering 78 million domain names have been registered through them and they host over 19 million users. With 14 offices across the world, GoDaddy has evolved from that-place-with-the-weird-name-where-you-can-buy-domains to a well-rounded web service that will help users make something of those domains, providing templates to build websites and marketing tools to promote them. By now, GoDaddy would probably love it if the press didn’t mention their old ads as the company is trying very hard to completely shed the sex-sells approach but, no, we do not forget. This week, GoDaddy introduced a new logo and identity designed in-house with strategy by Lippincott, logo design by Koto, and leadership direction and implementation in-house.

The GO is a clear statement of advocacy for entrepreneurs everywhere — a symbol of empowerment that encourages them to stand on their own two feet.

The GO was created as a visual representation of the space where three ideas meet:

Entrepreneurial spirit

We created the GO’s swooping arcs to represent the indomitable spirit of everyday entrepreneurs. And the word “go” itself is our rallying cry for folks to take the first or next step in their entrepreneurial journey.


Joy

Joy is a corollary to the love that fuels entrepreneurs to make their own way. The GO’s heart shape is a nod to this feeling, while its bold lines radiate the same joy that entrepreneurs everywhere experience.


Humanity

Entrepreneurship should be accessible to everyone, which is why we bring humanity into our digital tools for the benefit of all. The GO’s continuous, overlapping stroke symbolizes the connection all entrepreneurs share, and its generous interior space has room for folks of every stripe.

GoDaddy Design microsite

Logo introduction.
New Logo and Identity for GoDaddy done In-house
New Logo and Identity for GoDaddy done In-house
Icon detail.

Logo animation.

Logo trait animations: Entrepreneurial Spirit, Joy, and Humanity.

In 2018, GoDaddy began the process of sweeping its old “guy” icon under the rug by dropping it from the logo — visually, it was the last connecting thread to the GoDaddy brand of old, the one that made it even harder to forget what their brand stood for. After an interim period of no “guy”, the company has introduced a new icon, the “GO”, and Imma say “STOP”. There are a number of things that are not necessarily wrong but just very awkward. Based on the animation, the logo is meant to be a heart, but in its static form it looks like two ovals stacked oddly one on top of the other, with the not-so-hidden “G” in there adding a whole lot of confusion. It doesn’t read like a heart at all because there is no depth to the loops, and the one line that could make that connection is abruptly ended by trying to make the “G” — that hard-angled line pointing down and left kills this logo. Comparisons to Airbnb’s “Bélo” are fair, not just for the similar approach in design but for the douchiness of giving it a pretentious name. (As much as I liked and championed the Airbnb logo, I always disliked that they named it so annoyingly.)

Conceptually and/or contextually there is something very off about the icon too: it looks completely out of place next to the word “GoDaddy” — it’s really impossible to not think “Who’s your daddy?” (and all that that entails) — and, while I get that they are trying to appeal to the entrepreneurial spirit, let’s not kid ourselves, they sell domain names and templates, not dreams. I’m all for companies establishing an emotional connection with their audience, and that’s in part what branding is for, but this feels very forced and inauthentic. But let’s continue with the design aspects… the wordmark has been slimmed down, which makes for slightly better readability, but the squared-off counters have been lost, which is a detail I really liked in the old logo. To the credit of the icon design, the weird angled line is the same thickness and rounded-corner-ness as the “G” in the wordmark. The new blue color is a little annoying in its vibrancy and I’m surprised they moved away so drastically from their green color, which I thought was fairly recognizable. So, to summarize my logo feelings: not a fan.

Always bright and dynamic, our brand colors speak to the creativity of our customers. Our wide palette connects with people across the globe and promotes inclusivity for all cultures. We use color to bring joy to our brand.

GoDaddy Design microsite

Color palette.

Our bold, serif headline font is elegant and expressive projecting a fresh, modern voice. It presents a hint of flair for professionalism, giving the brand a distinguished feel. We use it to establish strong moments of brand for customers.

GoDaddy Design microsite

Headline typography.

Another big shift in the identity is the use of the bold, pointy, serif trend that has been widely adopted by editorial brands — Medium, The Guardian, BuzzFeed News, and others — and it also feels so out of place for GoDaddy, like a kid putting on their parents’ clothes. Visually, it’s far too unrelated to the icon or the wordmark and while it works graphically as a headline font — because, well, that’s what it is — it just feels like a gratuitous choice to add a quick dose of maturity.

Our photography lets people see themselves in our brand. Whether it’s capturing entrepreneurs in the moment or presenting them as heroes, we want their personality, independence and energy to shine through. When showcasing our products or anything else, our approach to photography is simple — keep it bright, bold and inspiring.

GoDaddy Design microsite

New Logo and Identity for GoDaddy done In-house

Photos are fine. Not exactly in a cohesive style or art direction but they have the right content.

Our hand-drawn illustrations add a touch of humanity to our brand. Some concepts are easier to convey through thoughtful illustrations than through image or word. We apply a light-hearted, editorial approach that intentionally complements narratives across our experiences.

GoDaddy Design microsite

New Logo and Identity for GoDaddy done In-house
Hand-drawn illustration style.

These are pretty cool — a nice step up from the typical mono-thickness-line trend.

With thoughtful concepts and bold use of color, we use a bit of personality to embody the story of intangible products and complex ideas. We want to create an inspiring world that sparks the possibilities our customers can create.

GoDaddy Design microsite

New Logo and Identity for GoDaddy done In-house
3D illustration style.

3D illustration style, animated.

These are also very cool and fun, especially the animated ones. Unrelated to the other illustrations and photos but cool, sure. I could accept the rationalization that both illustration styles feature hands but, like the choice of bold pointy serif, both illustration styles seemed like they were picked to fill a quota of Things That Brands Do Today.

New Logo and Identity for GoDaddy done In-house
Business cards.
New Logo and Identity for GoDaddy done In-house

New Logo and Identity for GoDaddy done In-house

New Logo and Identity for GoDaddy done In-house

New Logo and Identity for GoDaddy done In-house
New Logo and Identity for GoDaddy done In-house
New Logo and Identity for GoDaddy done In-house
New Logo and Identity for GoDaddy done In-house
T-shirt and tote.

The applications are all fine and good in terms of execution. There is clearly a lot of care being put into the implementation and into building a visual language that can flex in different ways and styles. I don’t think it’s very cohesive or entirely convincing but it does get its messaging across vibrantly.

Brand video.

Overall, for me, there simply is too big of a disconnect between what the company offers and the overly emotional and philosophical positioning behind the new identity — it’s great that GoDaddy is convinced by it and trying to create this atmosphere but I’m not buying it. Maybe I’m alone in this and maybe I have some weird prejudice about GoDaddy not because I’m offended by their old ads — heck, they were fun at the time — but because their old brand, from the name to the logo to the website, had always been so extremely amateur that I can’t suddenly see them in this new heightened light. Nonetheless, I guess what matters in the end is that for any new customer going to buy a domain name at GoDaddy for the first time, it will all look like a respectable place to do so, which hasn’t always been the case.

See what else happened on Brand New each year since publication began in 2006



Microsoft Edge logo on a body of water

A little over a year ago, we announced our intention to rebuild Microsoft Edge on the Chromium open source project with the goals of delivering better compatibility for everyone, less fragmentation for web developers, and a partnership with the Chromium community to improve the Chromium engine itself. At Ignite, we unveiled our new vision for the web and search, our colorful new icon, and how Microsoft Edge and Bing are the browser and search engine for business — and we are thrilled by the growing excitement we’ve heard from all of you who’ve tried it out and sent feedback!

Building on this incredible momentum, today I’m pleased to announce that the new Microsoft Edge is now available to download on all supported versions of Windows and macOS in more than 90 languages. Microsoft Edge is also available on iOS and Android, providing a true cross-platform experience. The new Microsoft Edge provides world class performance with more privacy, more productivity and more value while you browse. Our new browser also comes with our Privacy Promise and we can’t wait for you to try new features like tracking prevention, which is on by default, and provides three levels of control while you browse.

Another innovative new feature in Microsoft Edge allows you to customize your online experience. Choose a new tab page layout or design, and select the types of news you want.

Microsoft Edge user interface

The last several months have been nothing short of inspiring for all of us working to deliver great new capabilities for Microsoft Edge including AAD support, Internet Explorer mode, 4K streaming, Dolby audio, inking in PDF, Microsoft Search in Bing integration, support for Chrome-based extensions, and more.

If you’re a business or education IT administrator looking to deploy widely in your organization or school, we have you covered as well – you can download offline packages and policies and learn more on the new commercial site.

Internet Explorer legacy mode animation

People have downloaded the preview channels of the new Microsoft Edge millions of times to their devices, and we’ve seen many organizations begin to pilot these channels for their users. Enterprises and schools who have mission critical legacy applications and websites – but also want modern web and security – have turned to our new Internet Explorer mode as a “best of both worlds” solution. And for Microsoft 365 customers, using Microsoft Search to find files, people, office floor plans and more on your organization’s intranet is as easy as typing in the Microsoft Edge address bar. Our early customers are calling it “a win.”

Moving to the new Microsoft Edge – what to expect

Now that we’ve reached this milestone, you might be wondering what to expect on your PC. To get the new Microsoft Edge you have two choices: you can either manually download it today, or if you are a general consumer user, you can wait for it to be automatically released to your device via Windows Update. When you do make the switch, your favorites, passwords, form fill information and basic settings will carry over to the new Microsoft Edge without you having to do anything. You can read more about our rollout plans here.

If you’re an IT administrator, you will need to download an offline deployment package to pilot within your corporate environment—the new Microsoft Edge will not automatically deploy for commercial customers. Additionally, none of the Microsoft Edge preview channels will update to the new Microsoft Edge, as they can be used side-by-side for testing and validation.

We also know that deploying a new browser isn’t just “flipping a switch,” so we want to make the process as easy as possible. In addition to simplifying deployment with tools like Intune and Configuration Manager, we are committed to helping your organization transition to the new Microsoft Edge. At Ignite we announced FastTrack and App Assure support for Microsoft Edge. FastTrack will help you deploy Microsoft Edge to your organization at no extra charge if you are a customer with an eligible subscription to Microsoft 365, Azure, or Dynamics 365. And if your sites are compatible with Internet Explorer 8 and above, Google Chrome, or legacy Microsoft Edge, then they’ll work on the new Microsoft Edge. If not, contact App Assure and we’ll help you fix it.

What’s next

Of course, the innovation, testing, and new features don’t stop coming today, and this initial release is only just the beginning. If you want a sneak peek of what’s coming, we encourage you to keep using our preview channels – Beta, Dev and Canary – which will remain available for download on the Microsoft Edge Insider site. Not only will you get an insider’s look at our features pipeline for Microsoft Edge, but you’ll continue to have the opportunity to help improve Microsoft Edge with your valuable feedback. Your input helps make both the new Microsoft Edge, and the web, better for everyone.

Thank you!

A huge thank you to our community of Microsoft Edge Insiders as well as the engineers within the Chromium community who have worked with us to develop the new Microsoft Edge. We remain committed to actively participating in and contributing to the Chromium open source project. To date we’ve made more than 1900 contributions across areas like accessibility, modern input including touch, speech, digital inking, and many more.

Keep telling us what’s working well, what needs to change and what you’d like to see in the new Microsoft Edge.

Our heartfelt thanks – we couldn’t have made it here without you!



Dedicated to the unsung studio designers, copywriters, producers, ADs, CDs, and everyone else who creates wonderful things.

Dedicated to those who stayed up late and got up early to get on the family iMac to recreate event slides in Keynote.

Thank you.


Nikhil Kirve

It was a small moment, and a late one, when I realised… Why do designers approach design in such a way? Why not like this? And so on… Having been a designer for over six years, I’ve made these mistakes, and you’ve probably made some too.

At times, designers just want to “please” the stakeholders and get their work done for the day. On many occasions we invest a lot of time in searching for or designing a solution when what we really need is to question the requirement in the first place — and this leads us astray without us realising our errors.

Hence, below are a few observations about common design flaws I stumbled across, the ones that made me rethink my understanding, along with the practices that helped me accelerate my UI/UX design process. I hope this helps in your everyday design.

Creativity is not easy. Coming up with a design in a short time can be difficult, and so we look for references and copy them outright. As a designer you need to stop plagiarising and instead shape a reference work into your own design.

I agree — designers are not born with an innate ability to create gorgeous interfaces, or gifted with some special color psychology. Designers also need to work hard on their craft, experiment and learn like everyone else does. That said, as a newbie designer it is OK to copy and hone your skills from the pros. It’s the right way to learn when you do not have an industry perspective yet.

That said, once you have some experience as a designer you will agree on one thing: we all take UI reference material from Dribbble, Behance, Awwwards, etc. In my opinion, our references should remain inspiration. It is one thing to get inspired, but it’s another thing to copy-paste someone else’s design entirely. It’s always good to get ‘inspired’ (motivated) by others, like your favorite designer — you will always fall in love with their work, their style of design, color palette, interactions and more… but consider this: their design language and style might not necessarily be appropriate for the product you are building.

Do not copy! Understand if it will work for your product or not.

Copy-pasting UIs will save you time, and a non-designer would never notice. But seriously: why? After years as a designer, if plagiarising is still your way then, my friend, it’s time for a change. The creative process is an exercise, one you need to give time to and train your mind on. Once you overcome your creative block, you will come up with something different even for the simplest of icons.

It’s important to save time on illustrations, icons or any other graphic element in the UI while making early designs. Having an idea of what the illustration will look like and then selecting an appropriate placeholder will accelerate your work.

Of course, this doesn’t mean that you never have to make one. But is it actually necessary to design visual elements before you finalize the layout? No.

Let’s take an example: you are designing an ‘Invite friend’ page for which you want to add an illustration, so you design the layout of the page accordingly. Now, rather than jumping into making that illustration or artwork, take it slow. In your mind you know what kind of illustration you want — a guy holding a phone with a friend next to him. OK, now go ahead and search for a similar illustration to use as a ‘placeholder’ on Dribbble, and drop it into your design.

Pick the reference image from Dribbble.

Get the design approved by the stakeholders first. You will have plenty of time to work on the illustration once the design is in development. This shows your ability to move the project ahead quickly without making your own creative polish the priority.

By doing so –

  • You present your page structure a lot faster
  • You save time and rework on graphical elements
  • The project timeline is not compromised by delays in design

Final work on illustration before release!

If you are asked to provide a design, or even a popup, by your PM… just wait a sec. Do not pour their words straight into a design just for the sake of delivering it. Listen to the requirement and get every necessary piece of information, as this will help you strategize your design process and save time for both of you.

Ask questions:

  • What is the objective?
  • What are we expecting the user to understand?
  • Is this information necessary to the user?
  • Is this structure coherent for the user?
  • What will happen if there is no data to show?
  • And so on…

You are not bound to do everything as the PM says.

As a designer you need to do your own research and exploration so you can share insights about the best experience a user could possibly have. Do not rush into design blindly; otherwise you are not thinking deeply about the product and its use, period. If you do not agree with a certain thing as a user experience designer, take a minute… let your brain process it and you will find a different, better way around it.

(Look at the requirement from your perspective), by Agatha Yu

Sometimes, you won’t even require a design and a problem could be solved using system native components. Think before opening ‘Sketch’ — Stop doing donkey work!

As UX designers we are often inclined towards the craft and expression of an idea, whereas PMs help translate user problems into tasks and are focused on the execution of the product. In the end, though, both designers and PMs set the vision and bring value to the product. So give your own thought to what you are about to design in order to save yourself iterations.

While designing, we start thinking of elements the user will need or interact with — which ends up crowding our artboard with headings, subheadings, graphics, bullet points, videos, tickers, fab icons and what not… and all this happens when we don’t take the time to sit and wireframe. In such cases, putting every piece of information we think the user needs into our design ends up confusing us even more, and all we end up doing is moving the pieces up and down to make the layout look good. Don’t do this.

“A Designer will arrange details on the page, but a Good Designer will eliminate all the unnecessary details.”

It is crucial for us as designers to understand, on a psychological level, why our users do what they do and what motivates them to use our products or services. This awareness allows us to create an impactful and well-defined structure for the product.

Treat your design as a story — one with a start, a middle and an end. Every small feature you deliver is a story in itself, and each page you design weaves that story together. Do not overwhelm users by tossing everything in their face and letting them figure out the story by themselves. Instead, walk them through it. The design should intrigue them enough that they are compelled to stick around till the end.

Adding everything into design -vs- what’s actually needed

Be empathetic rather than merely instructed. There is a certain depth to understanding a page when it comes to the real user. Once you have their point of view, it becomes easy to eliminate the unnecessary and keep only what’s actually required on the page.

…Less is required, more is unnecessary!

Consistency is the key principle of design! Going all artistic will leave you with an inconsistent interface that will not only confuse the user but also make the final product hard to decipher.

Let’s say one day you’ve been asked to redesign a ‘Profile page’, so you go and skim through all the profile page designs on Dribbble. You like a design that is appealing, with all the colored icons and gradients, and you use it as a reference to make your own profile page. Kudos! You’ve completed your task! Now, the next day, you are asked to design a ‘Detail page’ inside the ‘Profile page’, which will have numerous text fields, actions and content… So you repeat the process — go on Dribbble, see similar samples and design your page. Why?…

First of all, all design decisions should come from understanding the user. And secondly, consistency in the design pattern should be reflected in the product.

Trying to make designs beautiful as per Dribbble will not benefit the user experience of your product.

At the very least, having a consistent visual language will help the user execute a task without relearning the UI every time they switch context. By doing so, you are also setting a voice and tone for the product.

Save components that can be reused — Styleguide!

Visual consistency must be taken care of. Similar elements that are perceived the same way make up visual consistency: font sizes, spacing, button styles, colors, even the line width of icons should be consistent across the product. And so we create a library/styleguide.

… Keep it simple, reuse components!

If you are one of those designers who think, “content writing is not my job… I will write ‘lorem ipsum’, move on, and later incorporate the content I receive from the PM or content team” — stop practicing this. Even if you feel your language is verbose, try to write your own copy.

Well, content strategy and UX writing fall under the larger UX umbrella. A few of the industry’s best (Airbnb, Slack, Dropbox, Patreon, Squarespace) follow content strategy as a design practice. Likewise, even if you are not a professional writer, being aware of what content adds to the core product experience, and how it goes well beyond metrics and ROI, is important.

“Writing content in your own words will help you decide how a design should work and look, rather than trying to fit the content into a design.” As a designer, you have a fair understanding of the user journey and what the user expects to read or see on a particular screen; writing copy will help you keep the content flow consistent, and it provides context for UX writers to refine and create a unified voice for the product in ways a designer may not.

Content lives in design and design communicates via content.

By adding lorem ipsum in your design you are dressing your king before knowing his size.

It’s a good practice to write content on your own during the design process, instead of using placeholder text. It will truly differentiate your design style from others.

Read about — Content First Approach in Design

I know how exciting it is to show off your design skills. As designers we want the best-looking UI and interactions for whatever we are making — that’s the dream of every designer :) You might prefer to sit in isolation until you are finished with your crazy designs and only then give them away. But creating beautiful pixels without knowing their technical feasibility is just a waste of time.

Ah, the beautiful relationship between designers and developers! We have heard so many times about the cold war brewing between these two worlds-apart parties. Yet we are equally involved in, and responsible for, shipping the product. The whole process of creating a product or introducing a new feature always starts with keeping the user in focus, right? And no matter how fancy a design we make, if we are not aligned with our engineering partners it won’t do any good for the user and certainly won’t help the business.

Love between designers and developers

Understand the effort that goes into building your design solution. Make sure the developers understand the reason behind designing it in a certain way. Provide a realistic example, and have an alternative ready in case the design is truly infeasible.

Communicating early and establishing a clear shared understanding between you and the developer will surely save a number of redesigns, delays and will also cost less to your organization.

I have worked from home after office hours for almost six years, and I am not sure if I am the right person to advise on this, but here are my two cents. I am not going to lie and say that this hasn’t helped me accelerate in my work or career. But raising my head from the screen has made me realise how much I’ve missed all these years.

So, you are passionate about design. You like the complexity of a problem and are willing to put all your creativity, energy and your mind to solve it. You don’t see day or night, you are just so much in love with your work that you see nothing except that — ah, I know the feeling.

Most of my time is spent in front of the laptop, WORKING. One thing we all know is that the process of learning never ends. We have plenty of hours in a day to work and be productive, and this is something I am still working on; I still sometimes take my laptop and work late at night in my cave. Point being, we can all choose whether to work plenty, or to work little and smart.

This I can advise you —

“Stop doing quantity work, start doing quality work”

Don’t be satisfied with your designs; you will always do better the next day. It’s just a matter of getting started. Set a goal that will drive you to at least make the attempt… keep working towards it — lose sleep, create, innovate, work from home (doesn’t matter :), make amazing designs.

You’re doing great!


As a digital marketing professional, you understand that email marketing is only one part of a larger puzzle. For your email marketing efforts to pay off, your email subscribers need to be directed somewhere, so that certain actions can be taken.

That’s where your website’s landing pages come into play. Read on to discover the importance of landing pages, as well as how they work alongside email marketing to net you the desired results.

Guide to landing pages: what purpose do these pages serve?

A landing page is a specific web page on your website that your subscribers are directed to via various sales/marketing tactics. This can be through an email CTA or even a social media post. A landing page is different from a typical webpage because it serves a particular purpose.

For example, many of our emails and blog CTAs take leads to our request for a live demo landing page.

Example of a Campaign Monitor landing page

Source: Campaign Monitor

This page serves a single purpose: requesting a live demo of Campaign Monitor and the services available to marketing professionals. Those interested simply fill out the form and then click the “submit” CTA to get started.

So, while landing pages have a focused directive, they serve a critical role in your overall marketing strategy: to convert website visitors into new leads. If implemented correctly, a well-designed landing page is almost guaranteed to get you the conversions you’re looking for.

Your guide to different types of landing pages

Marketers understand that each offer or promotion requires its own landing page to get the attention it deserves. In fact, studies have shown that companies that increase their number of landing pages from 10 to 15 see an average increase in leads of 55%.

However, many individuals don’t understand that several different types of landing pages can and should be utilized, depending on the type of campaign being run. This has led to 48% of landing pages containing multiple offers, which can drastically decrease the overall conversion rate, by as much as 266%.

That’s why it’s crucial to have the right landing page for each of your campaigns. Not every landing page will be a product detail page, and research shows that other landing pages typically perform better than a typical product detail page.

Product detail pages vs. all other landing pages

Source: Marketing Charts

It’s essential to consider adding a variety of different landing pages to your digital marketing strategy, and we’ve provided some information on the most popular landing pages used by marketing teams today.

Lead capture page

A lead capture page is a landing page designed to encourage website viewers to leave their personal information in exchange for a good or service. Typically, marketers begin by sending an email to new subscribers that outlines various perks of their subscription. From there, users are encouraged to click on a CTA that brings them to a landing page where they’ll fill out a form to gain access to something.

The MarketingProfs team does a good job of this. Their welcome email currently includes a link to an “exclusive look” at Nancy Harhut’s MarketingProfs B2B Forum presentation. If you click on that lead capture CTA in the email, you’re taken to the first landing page, which delivers the promised material. From there, you’re encouraged to sign up for the 2020 forum and are then asked for more information on landing page 2.

Email marketing and landing page examples

Source: Gmail/MarketingProfs Landing Page 1/MarketingProfs Landing Page 2

Sales page

Sales pages, while some of the most relevant landing pages in your digital marketing arsenal, are the ones that are the most commonly misused.

Some of the most effective sales landing pages are longer in nature and can generate up to 220% more leads than landing pages with above-the-fold CTAs. However, what works for some may not work for all, so you should always A/B test your landing pages before making them live.

In this example, the sales page is broken up into different sections, providing viewers with options to review before making their final decision.

Example of a sales landing page

Source: Living Language via Instapage

Click-through page

Click-through landing pages are great when you’re working with a new prospect and want to warm them up to an offer. Remember the example above by MarketingProfs? That’s an excellent example of a click-through landing page because it moves the prospect from the welcome email to the initial landing page, and then to an exclusive offer landing page for the 2020 Forum.

Another great way to incorporate a click-through landing page is by using free trial offers or with a “get a quote” CTA. This encourages your consumers to click through and gives you some information to move forward with the process of learning more or getting access to the free trial.

Click-through landing page example

Source: Nationwide

Splash page

Splash pages are typically used to inform your visitor of something prior to giving them access to another landing page or blog post. A splash page doesn’t usually ask your visitors for any information and acts more like a welcome page of sorts. Other types of splash pages can include short, quick forms that let you gather vital user data.

Example of a Splash landing page

Source: Forbes via Instapage

Squeeze page

Squeeze pages are designed to capture a prospect’s email address to grow a brand’s email list. These pages often pop up while you’re scrolling through a website or article, and they often ask you to sign up for the brand’s newsletter to stay in the loop without having to search for the brand later.

For example, GQ includes a squeeze page on its homepage. It appears as the visitor scrolls through the homepage material and encourages them to sign up to stay on top of GQ’s trending stories.

Example of a squeeze landing page.

Source: GQ

Other examples of squeeze pages are those that pop up after you’ve visited a website a certain number of times and require you to sign up before you can view any more content.

Example of a gated squeeze page that requires a subscription to view more content

Source: The Business Times

Guide to landing pages: design best practices

Just like any other marketing material, knowing design best practices for landing pages is an absolute must. There are many different design best practices out there; however, when it comes to landing pages, these are some of the most vital to keep in mind:

  • Put your audience first by designing with them in mind. That means designing for the skimmers, including images and videos to help break up large blocks of text, and making your CTAs easily identifiable and actionable.
  • Consider your own goals during the design phase. You can’t neglect your marketing goals, or else these landing pages won’t serve your brand in any way. What purpose does each page serve? What solutions will it help provide your audience members? What’s the best way to encourage action on each page?
  • Focus primarily on the benefit for your audience members. What pain points are you addressing? How’s this page/product/service going to make their lives easier/better? Don’t focus heavily on the specific features. Instead, outline how this is going to address the problem they’re seeking answers to.
  • Be as specific as you can, or else risk confusing your prospects. This is particularly important if you have multiple offers running at the same time. Remember, you want to have a landing page for each of your active campaigns. That way, there’s little chance of confusion for those clicking on links for a specific product, deal, or campaign.
  • Always run an A/B test before letting your page go live. What works for one campaign may not work for the next, so make sure you’re taking adequate time to test your landing pages for limited periods of time and track your results to see which one gets you the best results. Whichever variation wins is the one you should put up permanently.
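Deciding which A/B variation “wins” should rest on statistics, not eyeballing. A minimal sketch of a two-proportion z-test for comparing two landing page variants follows; the visitor and conversion counts are purely hypothetical, and real tests should also account for sample-size planning and test duration.

```python
# Two-proportion z-test sketch for an A/B landing page test.
# Counts below are hypothetical, for illustration only.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing conversion rates of A vs. B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, expressed with erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 120 conversions out of 2,400 visitors (5.0%)
# Variant B: 168 conversions out of 2,400 visitors (7.0%)
z, p = z_test(120, 2400, 168, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 here, so B's lift is significant
```

If p is below your chosen threshold (0.05 is common), the observed lift is unlikely to be noise and the winning variation can go live with more confidence.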

Landing pages and email marketing work together when done correctly.

While some may believe that landing pages are strictly related to your online presence and digital marketing strategy, remember that your marketing strategy is made up of multiple puzzle pieces. Once you’ve got your landing pages ready to go, you can start including them in your email marketing strategy.

For example, MacPaw does a wonderful job of creating a sales landing page that they incorporate into their holiday sales email campaign. Instead of laying out all the options for consumers, they include a 30% off CTA, and should the consumer be interested in the offer, they can click through to the sales landing page to see all the available offers.

Example of email marketing and landing pages working together

Source: Really Good Emails/MacPaw

Wrap up

Landing pages play a vital role in your digital marketing strategy, and it’s essential to understand that not every landing page is created equally. That’s why this guide to landing pages focused heavily on the varying types of landing pages that should be incorporated into your marketing strategy:

  • Squeeze pages
  • Sales pages
  • Lead capture pages
  • Splash pages
  • Click-through pages

Ready to see what Campaign Monitor can do for you? Then request your live demo today.


According to post-holiday sales data, U.S. overall retail sales were flat or up modestly; however, some traditional retailers saw sales declines to varying degrees. By comparison, online sales were up roughly 19% according to data from Mastercard, which said total holiday retail sales increased 3.4% compared with a year ago.

Macy’s, JCPenney, L Brands, Kohl’s and even Target reported disappointing results. Macy’s said it would close nearly 30 stores, while Pier 1 is closing almost half its stores. Total store closures last year exceeded 9,000 locations.

Mindset: Close stores, push e-commerce

Following lower-than-expected in-store sales, the natural impulse for some retailers is to close stores and push e-commerce. And that may make sense in the abstract and in some instances. However, too many “omnichannel” retailers don’t fully appreciate the symbiotic relationship between stores and online sales. This is reflected most significantly in the still-frequent bright-line separation (often with separate P&Ls) between e-comm and store teams.

Euclid Analytics CEO Brent Franson told me in 2017 that when stores disappear, it hurts online sales as well. (Euclid was acquired by WeWork in early 2019.) I haven’t been able to find further support for that statement, but it makes considerable sense.

In-store returns boost online sales

In an off-the-record conversation last November, a digital commerce executive at a major women’s apparel retailer told me that mentioning in-store returns in online ads boosted e-commerce sales. Why? Because people are more inclined to buy “sight unseen” if they’re confident they can return products in store. And according to data compiled by eMarketer, 75% of online shoppers prefer to return products in-store.

An interesting twist on that concept is the partnership between Kohl’s and Amazon. Kohl’s accepts returns of Amazon products in all its stores. Amazon benefits from more than 1,100 “return centers” (Kohl’s stores) across the U.S. For its part, Kohl’s has said that the policy brings younger buyers into the store, who often go on to buy something.

Local search drives e-commerce too

The in-store return of e-commerce products often results in additional sales: people tend to buy more in a store after returning an online product. And easy returns build goodwill and reinforce brand loyalty.

It also goes the other way. LocalSEOGuide’s Dan Leibson said in a phone interview that many of the firm’s big box retail customers see millions of dollars in online sales being driven by local search and Google My Business. In other words, someone looks for a particular branded store “near me” and then clicks through to the website, ultimately buying something online.

“This is a thing for all major retailers,” Leibson explained. “There’s an underreported amount of revenue coming this way.” In fact Leibson believes that major retailers, paradoxically, should be thinking about local search as a potentially significant driver of e-commerce and optimize their pages accordingly.

BOPIS and consumer agnosticism

This phenomenon also plays off the “omnichannel” paradigm. Consumers are relatively agnostic about where they buy something if they’re confident they can take the product back if it’s not right. This is also reflected in the growth of BOPIS (buy online, pick up in store).

As Deloitte has argued in its “2020 retail industry outlook” report, convenience is a major incentive or issue for consumers: “Whether in the store or online, consumers want a friction-free experience, from finding ideas and inspiration to making purchases, managing returns, and advocating for the brand.”

Convenience is king for consumers

Conversely, when a store fails to deliver on that convenience it can damage shopper loyalty and even the reputation of the brand. I had such an experience with Banana Republic over the holiday when I was not permitted to return a shirt purchased online (that didn’t fit) to one of their stores. I won’t be buying anything more from Banana Republic (online or off).

So, while closing underperforming stores makes economic sense for many distressed retailers, they should think carefully about which stores they close and not simply assume that customers will shift their buying behavior to the brand’s online shopping cart.

About The Author

Greg Sterling is a Contributing Editor to Search Engine Land, a member of the programming team for SMX events and the VP, Market Insights at Uberall.


LinkedIn on Wednesday rolled out three new Page tools designed to make it easier for businesses to engage with their customers, prospects and employees.

Invite to Follow. To help businesses grow their communities and reach new audiences, Page admins can now invite first-degree Profile connections to follow their LinkedIn Page. Page admins are limited to sending no more than 50 invites per day, and users have the option to opt out of receiving invites altogether.

Users can invite first-degree Profile connections to follow their LinkedIn Pages.

LinkedIn Live Streaming. LinkedIn Live, which originally rolled out in beta last February, is now broadly available on Pages. The feature enables businesses to engage in two-way conversations, on-screen and through comments, with real-time participation driven by broadcast notifications. Over the next few weeks, LinkedIn said it will roll out a “stream targeting” feature that works with third-party tools (including Restream, Wirecast and Socialive) to help brands reach audiences in different languages and locations around the world. LinkedIn has also introduced private testing capabilities for Live Stream, giving brands an opportunity to rehearse and test before going live.

LinkedIn Live streaming is now available for Pages.

Post as Page or Member. With the new updates, Page admins can choose whether to post as an individual or as their organization. Using a new toggle switch on the homepage, admins can share an update from their profile, their business Page, or directly from the platform’s homepage.

Page Admins can select whether to post from their personal profile or from their Page.

Why we care. These updates are designed to deliver more value from the community-driven features on the platform to help brands build stronger relationships with followers, customers and employees. The new features give Page admins more ways to streamline posting from specific pages, expand audience reach with the new ‘Invite’ tool, and create impactful connections in a live streaming environment.

About The Author

Taylor Peterson is Third Door Media’s Deputy Editor, managing industry-leading coverage that informs and inspires marketers. Based in New York, Taylor brings marketing expertise grounded in creative production and agency advertising for global brands. Taylor’s editorial focus blends digital marketing and creative strategy with topics like campaign management, emerging formats, and display advertising.


We’re conducting a survey to get more insight on the roles shaping our industry.

Is your job title fairly new? What martech tools do you use from day to day? Would you say you are more of an Operations Orchestrator, or an Analytics Architect? Or neither? These are the things we want to know about you.

Please take our 2020 MarTech Career Study to help us get a better understanding of the roles shaping today’s marketing technology organizations. Conducted in partnership with MarTech Conference Chair Scott Brinker, the survey will only take five minutes of your time and will give us all more insight into the work we do as an industry.

Once all the responses have been collected, we’ll be sure to share the results. It is an anonymous survey and includes 18 questions (plus three optional questions) covering topics like:

  • Which department are you part of?
  • What are your primary responsibilities?
  • What martech tools do you use most often?
  • When was the last time you were promoted?

Please be sure to forward this survey to your martech colleagues — the more responses we have, the more insights we can share.

About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


Struggling to see ROI from your martech? Eager to assemble a frictionless organization? Want to deliver user experiences that exceed customer expectations?

Attend The MarTech Conference April 15-17 in San Jose for creative, vendor-agnostic solutions to these and many more complex marketing challenges.

The agenda is live and ready for you to explore! Join a community of nearly 2,000 senior-level marketers for tactical, practical sessions and return to the office ready to…

√ Deliver measurable results with limited budget and resources

√ Get data quality, integration, and governance right

√ Successfully incorporate agile methods for increased efficiency

√ Infuse martech best practices into your organization

√ Create customer experiences that are personalized, relevant, and effective

√ Evaluate and invest in the technologies that deliver on all of these goals

You’ll soak up inspiration during exclusive keynotes and game-changing insights from experts, including:

  • In my opening keynote, Leading The Next Decade Of Martech, I will address practical and reasonable strategies for marketing operations and technology management while embracing the promise of exciting new innovations.
  • Digital analyst Brian Solis, Salesforce Principal of Marketing Insights Mathew Sweezey, and communications expert Nancy Duarte will empower you to leverage AI to improve customer experiences, motivate buyers in an age of infinite media, and use data to tell compelling, humanizing narratives.

See the agenda!

Ready to register? Choose the pass that’s right for your goals and budget and book now to enjoy up to $900 off on-site rates:

  • All Access: Complete access to all conference sessions, keynotes, networking events, exhibitors, sponsor presentations, amenities, and more.
  • All Access Workshop Combo (best value!): Dive deeper and learn more by adding a half-day, pre-conference workshop to your itinerary.
  • Expo: Searching for martech tools and services? Focused on growing your network? Pick up a FREE Expo pass to enjoy unlimited Expo Hall access, full-length Solution Track sessions, sponsor presentations in the Discover MarTech Theater, downloadable speaker presentations, refreshments, free WiFi, and more.
  • Team Rates: MarTech is a fabulous option for your 2020 company outing. Attend as a group for a unique team-building experience and lock in extra savings while you’re at it.

Don’t miss this opportunity to learn practical advice and insights from marketing leaders who have been where you are and are ready to share what it takes to succeed.

I hope to see you there 🙂

Psst… How many marketing technology tools does your organization use? Do you know how they’re organized? Are they organized? Discover the scope and structure of your martech stack and earn the recognition of your peers: Enter the 2020 Stackie Awards!

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

Scott Brinker is the conference chair of the MarTech® Conference, a vendor-agnostic marketing technology conference and trade show series produced by MarTech Today’s parent company, Third Door Media. The MarTech event grew out of Brinker’s blog, chiefmartec.com, which has chronicled the rise of marketing technology and its changing marketing strategy, management and culture since 2008. In addition to his work on MarTech, Scott serves as the VP platform ecosystem at HubSpot. Previously, he was the co-founder and CTO of ion interactive.


While search engine optimization remains one of the most important disciplines to master, pay-per-click (PPC) advertising is an equally essential skill.

No matter if a brand is looking to attract B2B or B2C prospects, PPC is one of the most effective means of achieving this goal. That said, there is a vast chasm that separates the tactics employed for optimizing each type of campaign.

Understanding these differences, as well as the necessary PPC audience targeting strategies, is what will enable sellers to reach the right consumers.

To delineate the differences between B2B and B2C advertising, we will explore the disparities, similarities and relevant tactics for using PPC ads to connect with buyers on both ends of the spectrum.

Targeting tactics

When initiating an advertising campaign, one of the primary considerations is how to reach the right consumers. After all, if a brand is selling homeowner’s insurance, targeting those in the 18-24 age bracket is likely to produce paltry results.

Take a look at the targeting categories in Google Ads:

For B2B advertisers, a prime tactic for ensuring that the right individuals are reached is to use social media ads that target by company position. Instead of targeting users by their age or interests as a B2C campaign might, a better route is to target users based on their job title or industry via LinkedIn or Facebook.

There is some overlap, however: features like Lookalike Audiences can help both B2B and B2C brands find new users who are potentially interested in what the company offers.

No matter if targeting the average consumer or business leaders, brands should create buyer personas to better understand who they are trying to reach.

Here is an example buyer persona from Buffer:

Consider the clock

Another key difference between B2B and B2C advertising is that B2C sellers are trying to generate purchases as quickly as possible, while B2B advertisers are trying to generate leads and keep their product in consideration throughout a prolonged purchase cycle.

To achieve this goal, brands must consider the timing of their ads.

In B2B advertising, businesses are trying to reach the key players within a company: those who make decisions or are closely connected to those with such power. This means that running ads within the nine-to-five timeframe is critical, as this is when these individuals are actively engaged and most likely to click through.

While B2C consumers can potentially be targeted around the clock, the same is not true for B2B prospects. Instead, ads intended to reach business prospects should only run during business hours, not only for the aforementioned reason but also because this will help to conserve the business’s PPC budget.

Given this framework, brands should employ ad scheduling and bid modifications to alter bids for certain days of the week (Monday through Friday) and times of the day. For example, if advertisers notice that they receive the highest amount of click-throughs on Tuesday mornings, it is wise to increase the cost-per-click during this window.
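The arithmetic behind a bid adjustment is simply a percentage multiplier on the base bid. A minimal sketch in Python, using made-up bid figures for illustration:

```python
def effective_bid(base_bid, adjustment_pct):
    """Apply a percentage bid adjustment to a base CPC bid."""
    return round(base_bid * (1 + adjustment_pct / 100), 2)

# Hypothetical: a $1.50 base bid with a +20% adjustment
# for the high-converting Tuesday-morning window
print(effective_bid(1.50, 20))   # 1.8

# A -25% adjustment for a low-performing window works the same way
print(effective_bid(2.00, -25))  # 1.5
```

The same multiplier logic applies whether the adjustment targets a day of the week, a time of day, or a device type.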

To do this in Google Ads, go to Ad schedule and click Bid adjustment for whichever time frame you want to increase or decrease:

While some sellers might feel equipped to manage such tasks, most will see more benefit from partnering with an e-commerce PPC management firm that can maximize potential impressions, clicks and conversions.

Messaging modifications

Much like targeting and timing, there are substantial differences in how advertisers will speak to B2B and B2C audiences.

The fact is that B2B buyers want to engage with brands that have evident expertise and knowledge of a given industry. This means that advertisers must showcase their acumen through relevant terminology, awareness of processes and similar traits that prospects will be interested in seeing.

For instance, if a CRM software provider is looking to reel in new users, but utilizes fluffy, emotionally-driven copy to do so, there is a significant chance that they will not engage the folks they are truly after. Instead, it is necessary to build confidence in potential users with more formal, fact-based messaging that has clear implications of how a product can improve business performance.

Take a look at how Intel communicates with its audience:

However, the exact inverse is true for B2C ads. When targeting average consumers, brands are wise to employ the most relatable voice possible by utilizing straightforward language that mirrors the audience. There is little to no place for jargon in B2C advertising.

Contrary to Intel, Gerber Childrenwear’s audience of mainly parents would appreciate copy like this:

Moreover, B2C ads should trigger emotions in consumers. Neil Patel speaks to this point, writing: “An analysis of 1,400 successful ad campaign case studies found that campaigns with purely emotional content performed about twice as well (31 percent vs. 16 percent) as those with only rational content.”

This is a crucial dichotomy to recognize when producing B2B and B2C ads.

Negative keyword distinctions

In addition to targeting audiences on the right characteristics, both B2C and B2B advertisers must understand what to exclude in order to reach the most relevant consumers.

The fact is that negative keywords are extremely helpful in weeding out irrelevant searches that eat up advertisers’ budgets. Naturally, the keywords and negative keywords that sellers employ are highly dependent on their specific industry and niche; however, there are some through lines that can be established for both B2B and B2C advertising efforts.

For instance, B2B brands offering a technological solution might want to exclude phrases that are commonly paired with the term “technology” such as:

  • Careers
  • Jobs
  • Hiring
  • Laws
  • Reviews
  • Free

Similarly, B2C retailers who sell new products can also immediately disqualify specific words and phrases that are not applicable to their efforts, such as:

  • Commercial
  • Bulk
  • Used

To do this in Google Ads, go to Keywords and click Negative keywords:

However, to get to the core of which terms a business should add to their negative keyword lists, it is best to consult Google’s search term report to uncover phrases that drive impressions and clicks but are wholly irrelevant or fail to convert.

Despite all the differences between B2B and B2C advertising methodologies, there are some commonalities that the two marketing efforts share.

Shared traits

While B2B and B2C ads can be quite different, there are some core components to each that remain the same.

For instance, no matter which type of audience is the target, it is necessary for advertisers to conduct in-depth keyword research to understand which terms and phrases will reach their customers.

Similarly, when advertising through Google, relevance is a significant component of campaign success. Therefore, utilizing compelling landing pages that closely match the ad’s offer is necessary for both B2B and B2C spaces. When there is congruence between an ad and its destination, campaigns will earn a higher quality score.

Moreover, given that consumers are prone to shopping cart abandonment and that B2B customers require a more extended courting period than other types of consumers, developing a retargeting strategy is also a fundamental aspect of campaign success shared across B2B and B2C efforts.

Bagsy decided to utilize Facebook for their retargeting efforts:

While there are plenty of differences between targeting everyday consumers and business prospects, when it comes right down to it, PPC best practices remain intact no matter who is being targeted.

No matter if ads are used in the B2B or B2C realm, it is vital for advertisers to understand the audiences to which they speak. This means that developing buyer personas and conducting market research are key elements for promoting the awareness needed to employ the right language, messaging, targeting tactics and other vital PPC campaign components.

Once this crucial piece of information has been procured, use the strategies outlined above to help your ad campaign reach and resonate with its respective buyers.

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

Ronald Dod is the chief marketing officer and co-founder of Visiture, an end-to-end e-commerce marketing agency focused on helping online merchants acquire more customers through the use of search engines, social media platforms, marketplaces and their online storefronts. His passion is helping leading brands use data to make more effective decisions in order to drive new traffic and conversions.


Third-party cookies have been living on borrowed time, given their increasing rejection by the major browsers. And today Google announced that support for third-party cookies in its Chrome browser would be phased out “within two years.”

The company seeks to replace them with a browser-based mechanism as part of its “Privacy Sandbox” initiative. The Privacy Sandbox was introduced last August, following an earlier announcement at Google I/O. The initiative is partly a response to increasing privacy pressure and partly a response to the rise of cookie blocking by other browsers.

Balancing personalization and privacy. Google’s stated objective is to create “a secure environment for personalization that also protects user privacy.” Google says this requires “new approaches to ensure that ads continue to be relevant for users, but user data shared with websites and advertisers would be minimized by anonymously aggregating user information, and keeping much more user information on-device only.”

The company argues that “large scale cookie blocking,” such as that practiced by Firefox and Safari, encourages tracking techniques like fingerprinting and undermines the publisher ecosystem by making ads less relevant, thereby reducing publisher revenues. The less precise the audience targeting, the lower the ad revenue.

Audience targeting strategies. The Privacy Sandbox system envisions targeting and conversion measurement happening within the browser environment through “privacy preserving APIs.” Google says that for ad targeting it’s “exploring how to deliver ads to large groups of similar people without letting individually identifying data ever leave [the] browser.” The company explains this is based on techniques and technologies such as Differential Privacy and Federated Learning. The latter would allow interest-based targeting at large-group scale to avoid revealing any individual’s information.

Conversion measurement. Here Google is more vague, saying, “Both Google and Apple have already published early-stage thinking to evaluate how one might address some of these use cases.” Reportedly, conversions would also be tracked inside Chrome and advertisers would be able to get conversion data through an API but without identifying any individual user.

Finally, Google said that starting in February, it’s going to treat cookies “that don’t include a SameSite label as first-party only, and require cookies labeled for third-party use to be accessed over HTTPS.” It’s also going to work to stop fingerprinting and other types of “covert tracking.”
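For developers wondering what a compliant cookie looks like: under the new default, a cookie without a SameSite attribute is treated as first-party only, while a cookie intended for cross-site use must be explicitly labeled SameSite=None and marked Secure so it is only sent over HTTPS. A minimal sketch using Python’s standard http.cookies module (the cookie name and value are hypothetical; the samesite attribute requires Python 3.8+):

```python
from http.cookies import SimpleCookie

# A cookie meant for third-party (cross-site) use must carry an
# explicit SameSite=None label and the Secure flag; otherwise Chrome
# will restrict it to first-party contexts.
cookie = SimpleCookie()
cookie["session"] = "abc123"             # hypothetical name/value
cookie["session"]["samesite"] = "None"
cookie["session"]["secure"] = True

# Emits a header value containing "SameSite=None" and "Secure"
print("Set-Cookie:", cookie["session"].OutputString())
```

Cookies that omit the label entirely will simply stop being sent in cross-site requests, which is the behavior change advertisers need to plan for.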

Why we care. Google’s move, together with those of Firefox and Safari, is a major change (and challenge) for the industry. Google says it’s trying to find “a middle way” that empowers users but enables the advertising ecosystem to function effectively, compared to what it considers the more blunt approach of Apple’s “Intelligent Tracking Prevention.”

Critics will accuse Google of trying to assert more control over digital advertising. However, for the approach to work, Google will need to build consensus among a broad community of publishers, advertisers, technology companies and even Apple and Mozilla. In principle, at least, it’s a thoughtful and reasonable approach that also plays to its strengths — a vast ecosystem coupled with powerful data collection and modeling capabilities — and will preserve its dominant position in the digital ad market.

About The Author

Greg Sterling is a Contributing Editor to Search Engine Land, a member of the programming team for SMX events and the VP, Market Insights at Uberall.

Live Webinar!

Email optimization and deliverability go hand in hand, and that’s why MarTech Today created the very first resource for marketing professionals that encompasses the elements of both. The Periodic Table of Email Optimization and Deliverability is a comprehensive resource designed to guide you through the different elements required to keep your emails out of the spam folder and in front of your subscribers.

Explore this new resource during this free webinar with its architect, Jennifer Cannon, Senior Editor at MarTech Today, and April Mullen, Director of Strategic Insights at SparkPost and co-founder of Women of Email. They’ll be taking a look at some of the emerging elements and trends that brands and email marketers need to embrace in 2020, including:

  • What you need to know about BIMI (Brand Indicators for Messaging Identification)
  • Artificial Intelligence vs Machine Learning for email marketers
  • How Voice Assistants will play into how email marketers develop emails this year
  • Compliance — what you need to know about GDPR, CCPA and maintaining a compliant data set
  • The impact of AMP for Email on brands’ email marketing efforts

Don’t miss it! Register today for Emerging Elements for Email Marketers in 2020 from the Periodic Table of Email Optimization and Deliverability.


About The Author

Marketing Land is a daily publication covering digital marketing industry news, trends, strategies and tactics for digital marketers. Special content features, site announcements and occasional sponsor messages are posted by Marketing Land.


Your chance to save up to $450 on a seat at Search Marketing Expo expires this Saturday night.

Don’t wait — register now and join us February 19-20 in San Jose for an all-new agenda featuring 85 sessions organized into three lanes with no limits: SEO, SEM/PPC, and digital commerce marketing.

Breaking news! Benjamin Spiegel, Chief Digital Officer of P&G Beauty at Procter & Gamble, will join Dana Tan, Sr. Manager of Global SEO at Under Armour, and Search Engine Land Editor-in-Chief Ginny Marvin for a fireside keynote chat about increasing digital commerce sales at the intersection of SEO, PPC, and social. See the agenda!

Make this the year you drive more awareness, traffic, conversions, and sales with actionable tactics and fresh insights from the industry’s top experts.

If you’re planning to attend SMX West, do yourself and your wallet a favor — book by this Saturday, January 18, and enjoy serious savings! Once these rates are gone, they’re gone.

Register now and I’ll see you in San Jose!

About The Author

Lauren Donovan has worked in online marketing since 2006, specializing in content generation, organic social media, community management, real-time journalism, and holistic social befriending. She currently serves as the Content Marketing Manager at Third Door Media, parent company to Search Engine Land, Marketing Land, MarTech Today, SMX, and The MarTech Conference.



Skeuomorphism / Neumorphism UI Trend

Skeuomorphism is a term most often used in graphical user interface design to describe interface objects that mimic their real-world counterparts in how they appear and/or how the user can interact with them. A well-known example is the recycle bin icon used for discarding files. Skeuomorphism makes interface objects familiar to users by using concepts they recognize.

Skeuomorph Mobile Banking by Alexander Plyuto
Skeuomorph Mobile Banking | Continuation BY ALEXANDER PLYUTO
Dashboard user interface by uixNinja
Skeuomorph investing app dashboard by Jordan Hughes
Timer Concept by Yash Bhagat
Skeuomorph Clock App by Jatin Lathiya
Skeuomorph Banking App by Mikołaj Gałęziowski
Skeuomorph File Manager App | Dark Mode by Imran Hossen 
Mobile Crypto Wallet Skeuomorph by theifox
Skeuomorph Fitness App | Apple Watch by Mais Tazagulyan
Skeuomorph styled Instagram UI concept by Pʌvʌn
Dashboard Cryptocurrency by Rudi Hartono
Skeuomorph Banking App Light Mode by Mikołaj Gałęziowski 
Smart Home App by Jawadur Rahman for Hyper Lab 
Skeuomorph Smart Home Application by Arun PP
Skeuomorph Refund Flight ⠿ tiket.com by Rian®
Skeuomorphic Music Player by Max ⚡️ Osichka
Neumorphic Bank Redesign in Dark and Light mode by HYPE4
Neumorphic Savings App Concept by Ali Kemal
Skeuomorph Food Delivery App by Virgil Caffier
Rose Gold Music Player by Ohad Peled
Skeuomorph White 3D Expo by Yarolav Hrynovets 
Untappd App Redesign by Lena Starodub
Skeuomorphic Dashboard by Olya Marchak for Cieden
Conceptual Dashboard Screen by Ohad Peled
Skeuomorphic App by Max Sirotyuk
Skeuomorph Dashboard by Darya Darya
Skeuomorph Music App $ by Martingarrix404040
Skeuomorph Remote Car Control App by Bogusław Podhalicz
Timer App by Max Sirotyuk
Design Experiment: Skeuomorph E-wallet App (Dark version) by Beatnik Team
Skeuomorph App User Interface. by Noman Ejaz
Skeuomorph Music App by Natalie Yakovleva
Skeuomorphism App by Arthur Mineev
Telco Skeumorph App (FREE KIT) by lorenzo perniciaro
Skeuomorph Button Interaction by Yash Bhagat
Skeuomorph by Oleg Volovnyk



A/B testing began with beer.

At the turn of the 20th century, William Sealy Gosset was exploring new ways of running experiments on his production line. Gosset was trying to improve the quality of Guinness’s signature stout but couldn’t afford to run large-scale experiments on all the variables in play. Fortunately, in addition to being an astute brewer, Gosset was a statistician; he had a hunch there was a way of studying his small snatches of data to uncover new insights.

Gosset took a year off from his work to study alongside another scientist, Karl Pearson. Together, they developed a new way of comparing small data sets to test hypotheses. They published their work in the leading statistics publication at the time, Biometrika. In “The Probable Error of a Mean,” the t-test, a cornerstone of modern statistical analysis, was born.

Gosset’s scientific approach was the foundation of a 38-year career with Guinness. He invented more ways of using statistics to make business decisions; founded the statistics department at Guinness; led brewing at Guinness’s newest plant in 1935; and finally, in 1937, became the head of all brewing at Guinness.

Since its early days as a tool of science (and beer), statistical decision-making has gone supernova. Today, it is used by every major tech company to make hundreds of thousands of decisions every year. Data-driven tests decide everything from the effectiveness of political ads to a link’s particular shade of blue. New methods like Fisher testing, multivariate testing, and multi-armed bandit testing are all descendants of Gosset’s early innovations. The most popular of these statistical tests is one of the oldest: A/B testing.

An A/B test is a measurement of what happens when you make a single, small, controlled change. In product design, this means changing something in the interface, like the color of a button or the placement of a headline. To run an A/B test, show an unchanged version of the interface (“A”) to a randomly-selected group of users. Show a changed version (“B”) to another randomly-selected group. Measure the difference in the behavior of the two groups using a t-test, and you can confidently predict how the changed version will perform when shown to all your users.
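The measurement step can be sketched in a few lines. Below is a minimal, self-contained Welch’s t statistic (a common two-sample t-test that does not assume equal variances) applied to invented conversion data; for large samples, a value beyond roughly ±1.96 indicates a significant difference at the 5% level:

```python
import math
import statistics

def welch_t(x, y):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    mean_x, mean_y = statistics.mean(x), statistics.mean(y)
    var_x, var_y = statistics.variance(x), statistics.variance(y)
    return (mean_y - mean_x) / math.sqrt(var_x / len(x) + var_y / len(y))

# Hypothetical data: 1 = visitor converted, 0 = did not convert
a = [1] * 12 + [0] * 88   # control "A": 12% conversion
b = [1] * 18 + [0] * 82   # variant "B": 18% conversion

t = welch_t(a, b)
print(round(t, 2))   # 1.19 -- suggestive, but not significant at 5%
```

Note the teaching point hiding in the made-up numbers: a six-point lift over 100 visitors per arm still is not statistically significant, which is why real A/B tests often need thousands of users per variation.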

A/B tests are easy to understand, which explains their popularity in modern software development. But their simplicity is deceptive. The fundamental ideas of A/B tests contain a paradox that calls their value into question.

Fredkin’s paradox

In The Society Of Mind, Marvin Minsky explored a phenomenon that I experience every day as a designer: people often prefer one thing over another, even when they can’t explain their preference.

We often speak of this with mixtures of defensiveness and pride.

“Art for Art’s sake.”

“I find it aesthetically pleasing.”

“I just like it.”

“There’s no accounting for it.”

Why do we take refuge in such vague, defiant declarations? “There’s no accounting for it” sounds like a guilty child who’s been told to keep accounts. And “I just like it” sounds like a person who is hiding reasons too unworthy to admit.

Minsky recognized that our capriciousness serves a few purposes: We tend to prefer familiar things over unfamiliar. We prefer consistency to avoid distraction. We prefer the convenience of order to the vulnerability of individualism. All of these explanations boil down to one observation, which Minsky attributes to Edward Fredkin:

Fredkin’s Paradox: The more equally attractive two alternatives seem, the harder it can be to choose between them—no matter that, to the same degree, the choice can only matter less.

Fredkin’s Paradox is about equally attractive options. Picking between a blue shirt and a black shirt is hard when they both look good on you. Choices can be hard when the options are extremely similar, too, as in Google’s infamous “50 shades of blue” experiment mentioned above. The paradox is that you spend the most time deliberating when your choice makes no difference.

Parkinson’s law of triviality

In 1955, C. Northcote Parkinson wrote a book called Parkinson’s Law. It’s a satire of government bureaucracy, written when the British Colonial Office was expanding even as the British Empire itself was shrinking. In one chapter, Parkinson describes a fictional 11-person meeting with two agenda items: the plans for a nuclear power plant, and the plans for an employee bike shed.

The power plant is estimated to cost $10,000,000. Due to the technical complexity involved, many experts have weighed in. Only two attendees have a full grasp of the budget’s accuracy. These two struggle to discuss the plans, since none of the other attendees can contribute. After a two-minute discussion, the budget is approved.

The group moves on to the bike shed, estimated to cost $2,350. Everyone in the meeting can understand how the bike shed is built. They debate the material the roof is made of — is aluminum too expensive? — and the need for a bike shed at all — what will the employees want next, a garage? — for forty-five minutes. This budget is also approved.

This allegory illustrates what’s called “Parkinson’s law of triviality”:

The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved.

We can generalize Parkinson’s law: The effort spent discussing a decision will be inversely proportional to the value of making that decision.

When faced with two similar alternatives, Fredkin’s paradox predicts you’ll have a hard time choosing. This is when A/B testing is most appealing: A/B tests settle debates with data instead of deliberation. But our generalization of Parkinson’s law of triviality says that this kind of A/B testing — testing as an alternative to difficult decisions — results in the least value.

Most of the time, A/B testing is worthless. The time spent designing, running, analyzing, and taking action on an A/B test will usually outweigh the value of picking the more desirable option.

Alternatives to A/B testing

Instead of A/B testing, I’ll offer two suggestions. Both are cheaper and more impactful.

Alternative 1: observe five users

Tom Landauer and Jakob Nielsen demonstrated in Why You Only Need to Test with 5 Users that insights about design happen logarithmically — that is, the first five users you study will reveal more than 75% of your usability issues. Doing a simple observation study with five users is an affordable way of understanding not just how to improve your design, but also why those improvements work. That knowledge will inform future decisions where a single A/B test can’t.
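The logarithmic claim comes from Nielsen and Landauer’s model, in which each additional test user uncovers a fixed proportion of the remaining usability problems (about 31% per user in their studies). A quick sketch of that formula:

```python
def share_of_problems_found(n_users, discovery_rate=0.31):
    """Nielsen/Landauer model: share of usability problems found by n users.

    discovery_rate is the average fraction of remaining problems a
    single user session reveals (about 0.31 in their studies).
    """
    return 1 - (1 - discovery_rate) ** n_users

# Diminishing returns: each additional user reveals less than the last
for n in (1, 3, 5, 15):
    print(n, round(share_of_problems_found(n), 2))
```

With the 0.31 rate, five users surface roughly 84% of the problems, and going from five to fifteen users buys comparatively little, which is the economic argument for small observation studies.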

Alternative 2: A → B testing

The cheapest way to test a small change is to simply make that change and see what happens. Think of it like a really efficient A/B test: instead of showing a small percentage of visitors the variation and waiting patiently for the results to be statistically significant, you’re showing 100% of visitors the variation and getting the results immediately.

A → B testing does not have the statistical rigor that A/B testing claims. But when the changes are small, they can be easily reversed or iterated on. A → B testing embraces the uncertainty of design, and opens the door to faster learning.

When A/B testing is the right tool for the job

A/B testing is worthless most of the time, but there are a few situations where it can be the right tool to use.

  1. When you only have one shot. Sales, event-based websites and apps, or debuts are not the time to iterate. If you’re working against the clock, an A/B test can allow you to confidently make real-time decisions and resolve usability problems.
  2. When there’s a lot of money on the line. If Amazon A → B tested the placement of their checkout button, they could lose millions of dollars in a single minute. High-value user behaviors have slim margins of error. They benefit from the risk mitigation that A/B testing provides.


If there’s a lot on the line in the form of tight timelines or lots of revenue at stake, A/B testing can be useful. When settling a debate over which color of button is better for your email newsletter, leave A/B testing on the shelf. Don’t get caught by the one-two punch of Fredkin’s paradox and Parkinson’s law of triviality — avoid these counterintuitive tendencies by diversifying your testing toolkit.


Pack up your things, Windows 7 users. It’s time to move on, as Microsoft today ends support of the venerable OS. So to the millions of people still running the OS: now’s really, truly the time to upgrade.

According to Microsoft’s end-of-support article for the OS, your Windows 7 computer “will still function but Microsoft will no longer provide … Technical support for any issues … Software updates … [or] Security updates or fixes.” In fact, it’s almost laughably passive-aggressive in its nudge to get users to buy a new PC that can run Windows 10:

While you could continue to use your PC running Windows 7, without continued software and security updates, it will be at greater risk for viruses and malware. Going forward, the best way for you to stay secure is on Windows 10. And the best way to experience Windows 10 is on a new PC.

It’s been over ten years since Microsoft released Windows 7, and I have fond memories of installing it for the first time as a teenage stripling and enjoying how much less finicky it was than Vista. And it seems I wasn’t the only one: data from Netmarketshare (via The Verge) shows the OS is still running on 26 percent of modern PCs.

They can’t exactly say they didn’t see this coming. Microsoft announced almost a year ago that today would be the final day of Windows 7 support. And it’s not a good idea to remain on an unsupported OS, if only because you won’t get any more security upgrades.

There is some hope for the many businesses that still run on Windows 7 PCs. Microsoft offers Extended Security Updates (ESU), for a price, which will support Windows 7 for the next three years, though these will reportedly get more and more expensive as time goes on. The ESUs won’t be available for individual users. So the message Microsoft is sending is basically: upgrade or get left behind.

If you’re one of the people finally, begrudgingly making the transition to Windows 10, Microsoft is also quick to add that it’s “the most secure Windows ever built,” and that it’s “the perfect operating system for personal and household use.”



What is this?

I am a big fan of react-spring, a spring-physics based animation library. It has quickly become my go-to library for UI animation in React projects. However, as a newbie to spring-based animations, I had a hard time visualizing what effect the settings would have.

So, I made this visualizer in order to help me determine the correct spring config for my animations. I hope it is of some help to someone else.

The settings.

Explanation of components of spring physics


Mass

Imagine a fixed spring with a bob on one end; the mass value is the mass of the bob. While not technically the same, in this instance you can think of it as the weight of the bob.

The higher the mass, the longer it takes for the animation to come to a rest.


Tension

Imagine a fixed spring with a bob on one end and a certain spring length when the spring is at rest. Now pull the bob downwards; the distance between the rest position of the spring and the end of the spring is the tension.

The higher the tension, the faster the animation will be.


Friction

Imagine a spring as before, but with a certain amount of friction in the air. The higher the friction, the more force the spring needs to pull the bob to its resting position.

The higher the friction, the slower the animation will be. If the friction is low enough, the bob will overshoot its resting position. This creates a bouncy effect.


Precision

If the bob bounces around its resting point, it will continue to do so for some time. The precision determines when to quit bouncing and stop the animation at the resting point.

The lower the precision, the more accurate the animation, but it might take longer to reach equilibrium.

Personally I haven’t found much use for this setting. Please, let me know if I’m missing something!


Velocity

Imagine pushing the bob upwards or downwards as you release it. The speed with which you push the bob is the velocity. Pushing downwards is a negative velocity; pushing upwards is a positive velocity.

Sometimes it’s nice to give a little negative velocity to create a sense of anticipation.

If a component is coming from off-screen, it can be pleasant to give it a positive velocity, so it looks like the component is coming from far away.


Clamp

Imagine putting a solid barrier just above the resting position, so the bob cannot overshoot it. There will be no bounce, and the animation comes to a stop immediately.

You might want to use this when animating opacity, since a bounce effect is probably not what you are looking for. The same goes for animating to a scale of 0: you usually don’t want to show a negative scale, which would flip the animated component.

Another use case is moving a component off-screen: clamp the spring so the animation doesn’t continue once the component is out of view.
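These settings all act on one underlying model: tension pulls the bob toward the target, friction opposes its velocity, and mass divides the resulting acceleration. A tiny simulation (a hypothetical helper, not react-spring’s actual source) makes the trade-offs concrete:

```javascript
// Minimal spring integrator mirroring the physics described above
// (illustrative only — not react-spring's internal code).
function simulateSpring({ mass = 1, tension = 170, friction = 26,
                          velocity = 0, clamp = false }) {
  const target = 1;   // resting position the bob animates toward
  let x = 0;          // current position
  let v = velocity;   // current velocity
  const dt = 1 / 120; // integration time step (seconds)
  let peak = 0;       // highest position reached, to measure overshoot

  for (let i = 0; i < 100000; i++) {
    const force = -tension * (x - target) - friction * v;
    v += (force / mass) * dt;
    x += v * dt;
    if (clamp && x >= target) { x = target; v = 0; } // barrier above the target
    peak = Math.max(peak, x);
    if (Math.abs(x - target) < 1e-3 && Math.abs(v) < 1e-3) break; // "precision"
  }
  return { settled: x, overshoot: Math.max(0, peak - target) };
}

simulateSpring({ friction: 5 }).overshoot;              // low friction: bouncy
simulateSpring({ friction: 5, clamp: true }).overshoot; // clamp: no bounce
```

Playing with the numbers reproduces the behaviors above: low friction overshoots and bounces, clamping kills the overshoot entirely, and heavier mass or higher friction slows the approach to rest.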


React-spring comes with a set of sweet presets. You can select these to preview them.

Color scheme from Happy Hues by Mackenzie Child.


Navigation in website design does not undergo many changes. It includes almost the same elements, always meets the audience from the outset and modestly takes its place in the top header. Since the popularization of the hamburger button, nothing earth-shattering has happened. However, this does not mean that its life is boring and uneventful.

As an integral element of the UI and a crucial part of good user experience, navigation never abstracts itself from design or distances from trending solutions. It always manages to blend in, but not get lost in the spectacle that awaits in the modern hero area.

Designers conduct various experiments to adapt it to new trends and come up with exciting solutions. One such attempt to create navigation that fits the mainstream has resulted in ultra-narrow, sticky vertical navbars.

We have already seen narrow sidebars. However, this time it differs a little bit.

The renewed version is more compact, minimal and elegant. It bets on popular features like vertical lettering. And most importantly, it sticks to the screen and follows the user through the website, complying with all the basic requirements of good navigation. Let’s consider some representative examples.
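In code, the pattern boils down to a fixed, narrow column with vertically written lettering. A minimal sketch (class names and sizes are illustrative, not taken from any of the sites below):

```css
/* Ultra-narrow vertical navbar that sticks to the screen */
.vertical-navbar {
  position: fixed;            /* follows the user through the website */
  top: 0;
  left: 0;
  height: 100vh;
  width: 64px;                /* ultra-narrow */
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: space-between;
}

/* The vertical lettering the trend relies on */
.vertical-navbar .site-name {
  writing-mode: vertical-rl;
}
```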

The Web Designer Toolbox

Unlimited Downloads: 1,000,000 Web Templates, Themes, Plugins, Design Assets, and much more!

Austin McKinney

Our first stop is the personal portfolio of Austin McKinney. Here, the vertical navbar includes only the essentials: a hamburger button, links to profiles in social media platforms, the name and occupation of the artist.

Thanks to its white, clean, and ultra-narrow design, it perfectly blends into an overall light environment with a powerful geometric vibe. It creates a necessary anchor for online visitors, providing them with helpful information.

Example from Austin McKinney

Editorial New

The team behind Editorial New shows us that the solution is quite universal and works regardless of coloring, stylistic options or interactive details. What’s more, it is an ideal partner for websites that position themselves as online magazines.

Here the navbar includes just one element – the hamburger button. It opens a slide-out menu with a table of contents. Since it sticks to the left side of the screen, wherever users are on the page, they always have quick access to links in order to jump from one section to another.

Note that, unlike the previous example, here the vertical navbar is part of the perimeter navigation that makes the design look complete.

Example from Editorial New

The Crowdfunding Formula

The vertical navbar of Crowdfunding Formula is located on the right side. Although we are not accustomed to seeing it there, nevertheless it works perfectly. What’s more, thanks to the contrasting orange color that marks the logotype, selected menu link, and search button, the sidebar naturally catches the eye.

Note that it does not have any borders: it just seamlessly flows into the design. However, thanks to the boxy vibe that runs throughout the page and a generous amount of whitespace, it gets its own place under the sun.

Example from The Crowdfunding Formula

NSDI

The team behind NSDI uses a vertical navbar for displaying not just the hamburger button, but also pagination for a full-screen slider. Another distinctive feature is that all elements move with the user. Here, you can see an ultra-narrow vertical navbar and corner navigation that co-work together in order to create a comfortable user experience.

Much like in the case of The Crowdfunding Formula, here navigation does not have a distinctive background. However, the team has used contrasting colors to naturally set the elements off from the composition.

Example from NSDI

Cervelo Cycles

Cervelo Cycles has a more-or-less traditional sidebar, though this fact does not stop it from looking trendy and refreshing. It covers all the menu links, logotype, search field and CTA. It is relatively wide and stands out from the content flow due to its solid black background.

Its key feature lies in a small transformation. The sidebar morphs into an ultra-narrow vertical line with everything hidden inside once the user starts to scroll down. The minimized version has only the hamburger button in the center. However, it is precisely what is needed for online visitors to avoid getting lost in the content-heavy environment.

Example from Cervelo Cycles

Villa Covri

The official website of Villa Covri has not one but two ultra-narrow vertical navbars. The first one includes only social media links displayed in vertical rhythm, whereas the second one comprises the button to the inner menu and nameplate. Along with the upper header, these two strengthen the subtle boxy vibe of the website.

Note that only the right panel sticks to the screen and moves along with users. It transforms into a horizontal line on tablets and cellphones to create consistency across various devices.

Example from Villa Covri

TedCo

There are two important takeaways from the official website of TedCo.

First, much like in the case of Editorial New, here the vertical navbar is part of the corner navigation. It looks neat, elegant and informative, finishing off the sophisticated design of the UI.

Second, the path of exploring the website is unconventional. Instead of moving down, users scroll to the right along the X-axis. Here the vertical navbar stays where it is, giving users the necessary focal points to feel comfortable.

Example from TedCo

Rogue Studio

The navbar in Rogue Studio is an excellent example of the solution where all the navigation links are exposed to the audience. On top of that, the creative team managed to gracefully include logotype and social media icons.

Although the component looks more cluttered in comparison to the previous ones, it has one significant advantage over them. It gives users what they need right here, right now without unnecessary moves.

Example from Rogue Studio


Tom Ray

Published: January 14th 2020

Updated: January 14th 2020

This is a quick start guide to learning BEM, the component-driven CSS methodology.

If you want to start practicing and applying BEM to your projects, this guide will help you get started.

Ready? Let’s dive in:

BEM Overview

BEM (Block-Element-Modifier) is a CSS naming convention developed by the team at Yandex to improve scalability and maintainability in web development.

Put simply, the idea of BEM is to “divide the user interface into independent blocks” by naming CSS classes in the following methodology:

/* Block component */
.card {}

/* Elements are dependent on their parent block */ 
.card__img {}

/* Modifiers are for incremental style changes */
.card--dark {} 
.card__img--large {}
  1. Block: an independent component that can be reused (e.g. with class name .nav)
  2. Element: a child within a block that cannot be used separately from that block (e.g. with class name .nav__item)
  3. Modifier: a variation in the style of either a block or element (e.g. with class name .nav--dark)

Let’s dive into some real CSS examples to get the hang of this.


Blocks

Blocks are reusable components, like buttons, cards or form fields.

When naming your blocks, focus on describing their purpose (i.e. what they are) rather than their state (i.e. what they look like).

For example, .btn or .nav follows the correct naming convention for a block.

.big or .bright-pink describe how a block looks, so they don’t scale well when you want to change the design later on.
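Side by side, that looks like:

```css
/* Good: block names describe purpose */
.btn {}
.nav {}

/* Bad: these describe appearance and won't survive a redesign */
.big {}
.bright-pink {}
```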







If you’re wondering how to place blocks within blocks (for example, a button inside a nav), here’s a short article to help you with that.


Elements

Inside blocks are where elements live. Elements are dependent on their parent block, and so cannot be used without them.

Elements also have a unique CSS class naming convention which works like this:

block__element.
For example, using the .card component, an element inside the card component (like an image) would have a class name like .card__img.

The element name is always appended to the block name, separated by a double underscore __.
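For example, here are two ways to style the card’s image (the width declaration is illustrative):

```css
/* First snippet: targeting the element through its parent */
.card img {
  width: 100%;
}
```

```css
/* Second snippet: targeting the element with a BEM class */
.card__img {
  width: 100%;
}
```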







It’s important to note that the second code snippet avoids using more than 1 selector to target the styles (e.g. like .card img {}).

It’s considered best practice to use a BEM element class and use that directly instead (like .card__img {}).

Following this approach reduces the chance of cascade issues down the line.


Modifiers

When you have varying styles in blocks (or elements), that’s where modifiers come in.

For example, your ‘card’ block might have a light and dark version. Or you might have primary and secondary buttons.

Modifiers have a unique CSS naming convention which works like this:

block--modifier or block__element--modifier.

That’s right: BEM modifiers can be applied to both blocks and elements.

Let’s dive into some bad and good practices:
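For instance, applying a modifier class on its own looks like this (the button markup is illustrative):

```html
<!-- Bad: modifier class used in isolation, without the base class -->
<button class="btn--secondary">Submit</button>
```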




It’s considered bad practice to use a modifier class in isolation (i.e. without the block or element class).

That’s because the modifier is meant to add incremental style changes to the block.

Therefore, whenever using a modifier, ensure it’s used with the base class:
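For example (markup is illustrative):

```html
<!-- Good: modifier applied alongside the base .btn class -->
<button class="btn btn--secondary">Submit</button>
```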




And that’s it!

Those are the fundamentals to get you off and running with BEM.

If you’re interested to learn more about the ‘why’ behind BEM, I recommend checking out this CSS Tricks article.

Like learning anything new, practicing is key. Give BEM a shot in your next project and see where it takes you!

Download The Free BEM Cheat Sheet

Want to start practicing BEM and looking for a no-nonsense, quick start action guide?

Download a free cheat sheet covering BEM basics so you can dive in and start practicing today.



Brightcove, a cloud solution for managing and monetizing video content, has released Brightcove Campaign, a video campaign app that lets marketers create, manage and analyze their video demand generation campaigns all in one tool. The new solution comes with analytics and benchmarks so that marketers can compare their campaigns to industry standards. It also provides optimization tips, click-to-publish capabilities across multiple channels and integrations with marketing automation platforms Eloqua, Marketo, HubSpot and Salesforce.

Why we care

With video playing a larger role in demand generation campaigns, the ability to create, analyze and optimize those efforts across channels from a single tool can help simplify marketers’ management efforts. The tool was developed with input and feedback from customers. “Throughout the development process phase of Brightcove Campaign, we worked with many demand generation marketers to ensure it fits seamlessly into their workflows,” said Brightcove CTO Charles Chu.

“The ability to tag my video assets and see how they stack up against similar videos in the industry takes the guesswork out of how my campaign is performing,” said Demandbase’s Senior Director of Digital Marketing Mimi Rosenheim, who was among the marketers that provided input during development.

More on the news

  • Brightcove Campaign also has a Google Chrome extension for viewing analytics, as well as customized thumbnail codes for email distribution.
  • The app can integrate with Google Analytics and Adobe Analytics.
  • Founded in 2004, Brightcove acquired Ooyala’s online video platform in February 2019 for $15 million — a move aimed at accelerating its video innovation efforts.

About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


Five years ago, marketing technologist roles were, arguably, in their infancy. At the risk of being nostalgic, it was a simpler time for our industry: the martech landscape was comprised of a mere 1,000 solutions and nobody had even heard of Cambridge Analytica. Adobe hadn’t bought Magento or Marketo and LinkedIn hadn’t yet joined the Microsoft family.

Fast forward to today, as we approach our seventh annual MarTech Conference, that martech landscape now has over 7,000 solutions, with new product launches and integrations happening daily. As the martech taxonomy continues to expand, so do the roles charged with implementing and managing our martech stacks. Marketing technologists are no longer considered “the outsider” among the broader marketing organization, but instead play a key role within the marketing organization. Martech is now marketing and marketing technologists are, simply, marketers.

“The field of marketing technologists has expanded enormously over the past five years,” said Scott Brinker, MarTech Conference chair and the voice behind ChiefMartec.com. He attributes the exponential growth to the massive adoption of martech that is happening across organizations of all sizes. “Gartner recently stated 26% of enterprise marketing budgets are being allocated toward martech,” said Brinker. “You need talented people to implement and harness all that technology, and there’s a lot of room for specialization.”

Specialization is a key factor when considering the marketing technologist role — the more specialized marketing technology solutions become, the more talent is needed to take full advantage of the available platforms and solutions. But when looking at the growing list of marketing technologist titles across the ever-widening martech landscape, it is crucial we understand as an industry which roles are the primary drivers of marketing technology and its place within the marketing organization. Of course, there are the leaders — the chief marketing technologists and other C-level executives driving the martech ship — but how have roles evolved since we first started separating marketing technology from the IT department?

Marketing technologist roles: v2.0

Five years ago, Brinker came up with a list of six primary marketing technologist roles. The roles, or “archetypes” as Brinker labeled them, were based on a survey he and former SapientNitro CTO Sheldon Monteiro conducted via Chiefmartec.com readers and attendees at the inaugural MarTech Conference in Boston. After a recent conversation with Marketing Land’s VP of Content, Henry Powderly, Brinker decided it was time to revisit the roles he had defined more than five years ago.

“Coming back to the concept of marketing technologist archetypes five years later, the split between ‘marketing focus’ and ‘technology focus’ didn’t resonate as much because technology has become much more deeply infused into the marketing organization overall,” said Brinker. “I decided to step back and try a fresh approach to identifying the dimensions on which different marketing technologist roles focus.”

In the latest version of Brinker’s marketing technologist archetypes, the list has been narrowed from six to four roles: Operations Orchestrator, Brand/Demand Builder, Analytics Architect and Marketing Maker.

Scott Brinker’s Marketing Technologist Archetypes

One of the things that stood out to Brinker when revisiting his original marketing technologist archetypes, was the lack of marketing operations roles.

“Marketing operations has grown tremendously as a discipline over the past five years as a real hotbed of marketing technologist talent,” said Brinker. The explosive growth happening within marketing operations can be attributed to the fact that, as a function, it is what “keeps the trains running” for marketing technology teams, according to Brinker.

The four primary marketing technologist roles

As the number of martech solutions continues to grow, martech roles have become more systematic in the ways they are connected. Brinker recalls that when he drafted the original archetypes, he felt somewhat lost trying to connect the roles. In his latest iteration, the four quadrants separating the martech roles remain independent of each other, but it is now clear how the roles are connected.

The Brand/Demand Builder is usually the marketer using martech to conduct their work, implementing different platforms to run and manage marketing campaigns. Brinker says the vast majority of marketing technologists fall into this category.

The Operations Orchestrator is responsible for implementing and managing martech systems. They are the “maestros” according to Brinker, the ones who support all the other martech roles and are often given a “marketing operations” or “CRM/MAP admin” title.

Brinker defines the Analytics Architect as “modellers” who focus on the structure and infrastructure of data collected by the marketing organization. Usually known within the team as the “marketing analyst,” “data scientist” or “data engineer,” the analytics architects are rarely found at smaller companies, and instead, are part of martech teams within larger enterprises with the resources to dive into the data.

The Marketing Maker, located in the bottom right quadrant, is the builder of custom apps and digital experiences. They have titles like “web developer” and “marketing engineer” and are usually part of the teams working with code. Although, with the latest crop of no-code and low-code martech solutions, Marketing Makers don’t necessarily have to be the expert coders they once were.

The martech leaders

What’s not listed within these four quadrants are the leaders who oversee the entire martech organization — the executives defining strategy and aligning marketing technology goals with the overall marketing and business objectives. Brinker has devised a fifth archetype — The Manager — to fit this role, an executive who essentially oversees the breadth of the marketing technology and operation teams.

Some businesses have added this leadership role to the C-suite, hiring chief marketing technology officers to work alongside their chief marketing officer. But lately, we’ve seen a trend with major brands dropping the CMO role for chief digital officers and chief customer officers — both of which often oversee the marketing technology function. (Sheldon Monteiro, who helped Brinker come up with the original marketing technologist roles in 2014, is now a chief product officer at Publicis Sapient.) Other organizations have opted to onboard vice presidents or directors of marketing technology and marketing operations who report to the CMO.

How the archetypes align with each other

Brinker believes everything in marketing should ultimately be centered around the customer. “That said, there is a lot of marketing technologist type work that serves internal stakeholders in the service of building a great customer-centric business. It’s the ‘back-stage’ workflows, processes, analytics, infrastructure, systems, etc. that enable customer-facing activities in marketing to be more successful,” said Brinker.

With this in mind, he arranged the four marketing technologist roles along an X and Y axis. The Y-axis, which moves from process orientation to technology orientation, separates processes like workflows and customer journeys from technology capabilities such as data engineering and coding. The X-axis stems from the question: Does the role primarily serve internal stakeholders or customers?

“There’s a ton of marketing technologist work that touches customers directly on the ‘front-stage’ of the business,” said Brinker. “Personalized campaigns, web and mobile apps, chatbots, conversion optimization — the X-axis in this framework looks across that spectrum of internal orientation to external orientation activities because, while they are deeply entwined, they are different kinds of activities that apply different skills.”

Brinker acknowledges the four archetypes attached to each of the quadrants are not always completely separate roles, and that nearly every marketing technologist connects across all of the quadrants to some degree. The newly defined roles are meant to show how marketing technologists, in general, lean toward distinctly different areas within the martech organization.

A work in progress

The martech industry is pushing forward at a tremendous speed. As stated earlier, the marketing technology landscape is more than seven times the size it was in 2014, with new solutions — and integrations — being launched daily. The first half of 2019 saw 246 mergers and acquisitions, a steep rise from the 162 deals that happened during the same time period in 2018.

Exponential growth is the nature of technology and martech is no different — every new iteration of a martech solution aims to improve upon itself, resulting in an accelerated rate of progress. And with every new evolution cycle, the marketing technologists tasked with managing it all will have to evolve as well. There is no “final” list of primary marketing technologist roles — as the industry changes so will the players.

As Brinker so eloquently puts it when looking at how these roles will continue to evolve, “Everyone has a horizon that keeps pushing the industry forward.”



As marketers, we value our special skills, or as I often refer to it, our superpowers. I always explain that my superpower is writing because sometimes it’s easier for me to express my thoughts this way rather than talking (although I have my moments with the gift of gab).

So while guarding our superpowers and being really sharp with one particular skill is great, to get hired on an agile marketing team, you need to show that you’re willing to dabble outside of your comfort zone and be a team player.

According to the 2019 State of Agile Marketing survey, 50 percent of traditional marketing companies are planning to adopt agile marketing in the coming year. To stay ahead of the curve and market yourself to these companies, you’ll need to broaden your skills to be hired on an agile marketing team.

The most successful agile marketers offer the following:

Dabble beyond your specialty

People have a lot more skills, abilities and desires to try new things than we give them credit for. The problem in most companies is that you’re only known for your job title, not always what you can do or want to be doing.

Too often I hear, “Only John can do video editing – it’s a technical specialty and anyone else would screw it up.”

Well, what happens when John goes on vacation? Or gets sick? Or worse yet – quits the company.

While video editing may be John’s superpower, there are a lot of smart people on your team. If you ask them, there may be another who does this type of work as a hobby and wants to learn more.

Sure, John may be better at it – and we’re not trying to de-value his knowledge – but a good agile team is able to share knowledge and help each other.

I encourage you to think about two other skills that your company needs and try to learn more about them. Maybe you won’t be the expert, but you can help round out the team’s skillset in a pinch.

Put the needs of the team above your own

If you want to be hired on an agile marketing team, you have to approach everything as “What’s best for the team?” rather than “What’s best for me?”

On an agile marketing team, work is prioritized in a single marketing backlog and the goal is to get the most important work done. Most of the time, that work item takes many people from the team collaborating together.

There are two kinds of people that don’t work well in agile marketing: the hoarders and the single laners.

The hoarder is the so-called ‘expert’ and likes being in a position where everyone needs him. He won’t share knowledge and works at protecting his domain.

The single laners only do the job they were hired to do and nothing more.

The problem with both the hoarders and single laners is that the focus is on themselves, not the good of the team. But in agile, the collective need to do what’s best for the customer far outweighs the needs of any individual contributor.

So, if you want to work on an agile marketing team, you’re going to have to keep the focus on the team, even if it’s not what’s best for you at all times.

A willingness to venture into the unknown

Agile marketing requires the ability to quickly switch gears and work in ways that are totally new and different; it may mean doing work you’ve never done before in your life!

For those that thrive on learning and change, agile marketing is going to be your best friend.

It’s a lot like being a small business owner. You’re going to wear many hats, do things that you’ve never done before, venture into uncharted waters, fail sometimes and learn along the way.

So to be a good agile marketer, get yourself comfortable with the uncomfortable.

If you’re someone who enjoys broadening your skills, works well in a team setting and is okay venturing into the unknown, you’re going to make a rock star agile marketer!

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land.

About The Author

Stacey knows what it’s like to be a marketer, after all, she’s one of the few agile coaches and trainers that got her start there. After graduating from journalism school, she worked as a content writer, strategist, director and adjunct marketing professor. She became passionate about agile as a better way to work in 2012 when she experimented with it for an ad agency client. Since then she has been a scrum master, agile coach and has helped with numerous agile transformations with teams across the globe. Stacey speaks at several agile conferences, has more certs to her name than she can remember and loves to practice agile at home with her family. As a lifelong Minnesotan, she recently relocated to North Carolina where she’s busy learning how to cook grits and say “y’all.”


LinkedIn is growing faster than previously expected, according to a new report from research firm eMarketer. In 2019, the number of monthly U.S. adult users increased by 8.8% year over year – up from eMarketer’s previous estimate of just over 7%. The new numbers are based on updated membership data from parent company Microsoft and other sources.

Despite a faster growth rate forecasted for 2019, eMarketer projects slowing user growth in the years ahead. U.S. users are expected to grow by 6.2% this year to 62.1 million and reach 68.8 million by 2022, eMarketer forecasts.

LinkedIn Users in the US, 2018-2022

Why we care

LinkedIn users make up around one-third of all social network users in the U.S., according to eMarketer. For the most part, that number will stay the same as other social platforms also expand their user bases. But for businesses on LinkedIn, user growth could mean an uptick in ad performance and content engagement. The Microsoft-owned network has been steadily building out ad capabilities including new targeting and format options over the past few years.

More on the news

  • eMarketer’s estimate is derived from U.S. internet users, aged 18 and older, who access LinkedIn from any device at least once per month.
  • eMarketer reports that LinkedIn will see $1.59 billion in ad revenues in 2020, growing another 11.2% to $1.77 billion in 2021.

About The Author

Taylor Peterson is Third Door Media’s Deputy Editor, managing industry-leading coverage that informs and inspires marketers. Based in New York, Taylor brings marketing expertise grounded in creative production and agency advertising for global brands. Taylor’s editorial focus blends digital marketing and creative strategy with topics like campaign management, emerging formats, and display advertising.


Visa Inc. said Monday it would buy Plaid Inc. for $5.3 billion, as part of an effort by the card giant to tap into consumers’ growing use of financial-technology apps and noncard payments.

More consumers over the past decade have been using financial-services apps to manage their savings and spending, and Plaid sits in the middle of those relationships, providing software that gives the apps access to financial accounts. Venmo, PayPal Holdings Inc.’s money-transfer service, is one of privately held Plaid’s biggest customers.



Most technologies are made from steel, concrete, chemicals and plastics, which degrade over time and can produce harmful ecological and health side effects. It would thus be useful to build technologies using self-renewing and biocompatible materials, of which the ideal candidates are living systems themselves. Here we present a method that designs completely biological machines from the ground up: computers automatically design new machines in simulation, and the best designs are then built by combining different biological tissues. This suggests others may use this approach to design a variety of living machines to safely deliver drugs inside the human body, help with environmental remediation, or further broaden our understanding of the diverse forms and functions life may adopt.

A computer-designed organism (CDO), with the red/green colored design from the above image, walks under the microscope.

AI methods automatically design diverse candidate lifeforms in simulation (top row) to perform some desired function, and transferable designs are then created using a cell-based construction toolkit to realize living systems (bottom row) with the predicted behaviors.

Technical paper

Kriegman, S.*, Blackiston, D.*, Levin, M., Bongard, J. (2020)
“A scalable pipeline for designing reconfigurable organisms”,
Proceedings of the National Academy of Sciences (PNAS), vol. 117 no. 2, pp.?-?.

*Co-first authors.


Q:    What is the big picture here — is it just about robots made of frog cells?

A:    The big question here is: how do cells cooperate to build complex, functional bodies? How do they know what to build and what signals do they exchange to enable them to build them? This is important not only to understand the evolution of body shapes and the functions of the genome, but for all of biomedicine. Aside from infectious disease, pretty much all other problems of medicine boil down to the control of anatomy. If we could make 3D biological form on demand, we could repair birth defects, reprogram tumors into normal tissue, regenerate after traumatic injury or degenerative disease, and defeat aging (as highly regenerative organisms like planaria do). Stem cell biology and genomic editing do not, on their own, address this issue. It is still unknown what cells are capable of making besides their normal default body pattern, and these synthetic living machines are a convenient sandbox platform in which to make fundamental discoveries. We made these out of normal frog cells, with a normal frog genome. Their bodies look and act completely different from frogs, and can do so despite the fact that these “animals” have no evolutionary history of selection pressure which would have rewarded them for this behavior (they’re skin cells which have been used for millions of years to sit quietly on the surface of a frog and keep out the pathogens). Once we figure out how cells can be motivated to build specific structures, this will not only have a massive impact on regenerative medicine (building body parts and inducing regeneration), but the same principles will lead to better robotics, communication systems, and maybe new (non-neurocentric) AI platforms. The long-term goal here is to figure out how living agents (cells) can be motivated to build specific things, and how to exploit their plasticity and competency to do things that are too hard to micromanage directly (like build an eye, hand, etc.). 
This is also a part of a critical effort for the future of society and technology: the taming of “unintended consequences” from complex phenomena with surprising emergent outcomes – to understand how to manage swarms and collectives of active agents toward beneficial outcomes. This paper is just the first of a pipeline of studies in progress, extending the work toward applications in biomedicine, robotics, and synthetic morphology.

Q:    What kinds of biological tissues were used to build computer-designed organisms?

A:    Frog skin (green in the above image) and heart muscle (red). Both were derived from cells harvested from blastula-stage Xenopus laevis embryos. These tissues naturally develop cilia (waving hairs which enable swimming), but the cilia were removed in the green/red colored organism to produce a walking (instead of swimming) organism.

Q:    How big/small are the organisms?

A:    The red/green colored organism pictured above is about 0.7 millimeters.

Q:    What have they been used for?

A:    So far we have built computer-designed organisms that walk, swim, push/carry an object, and work together in groups.

Q:    Why do these count as organisms?

A:    They “live” for about seven days, after which they stop functioning (a positive feature in terms of safety for synthetic biology constructs). Although, like vast numbers of organisms on Earth, they do not contain a brain, they exhibit functional behavior, are able to heal themselves if damaged, and work collectively. They can’t reproduce, but there are naturally occurring organisms that can’t either (e.g. mules). Synthetic living machines push biologists to develop deeper and more rigorous definitions of what an “organism” is. The question of what exactly makes for an organism (given the many colonial, syncytial, microbiome-bearing organisms found in the natural world) is not an easy one; but these show the kind of coordinated structure and function that are immediately recognizable as belonging to a coherent organism.

Q:    Are these organisms aquatic?

A:    The organisms live in standard freshwater and can survive in temperatures ranging from 40 to 80 degrees Fahrenheit.

Q:    Do the organisms eat?

A:    The organisms come pre-loaded with their own food source (lipid and protein deposits), allowing them to live for a little over a week. However, if grown in a nutrient-rich cell-culture medium, their lifespan can be extended to weeks or months.

Q:    How do computers design organisms?

A:    Computers model the dynamics of the biological building blocks (skin and heart muscle) and use them like LEGO bricks to build different organism anatomies. The behavior of each designed anatomy is simulated in a physics-based virtual environment and assigned a performance score (e.g. distance traveled). An evolutionary algorithm starts with a population of randomly-assembled designs, then iteratively deletes the worst ones and replaces them by randomly-mutated copies of the better ones. It is the survival of the fittest, inside the computer. The fittest designs in virtual reality are then selected to be built out of real biological tissues.
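The selection-and-mutation loop described above can be sketched as a toy in Python. This is an illustrative stand-in, not the authors’ physics-based pipeline: here a “design” is just a list of numbers, and the fitness function is any scoring function you supply (sum is used below as a placeholder for “distance traveled”).

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=0):
    """Toy evolutionary loop: keep the better half of the population,
    replace the worse half with randomly mutated copies of survivors."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # best designs first
        survivors = pop[: pop_size // 2]             # delete the worst
        children = [[g + rng.gauss(0, 0.1) for g in parent]  # mutate copies
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: score a "design" by the sum of its genome.
best = evolve(fitness=sum)
```

In the real pipeline the fitness call is a full physics simulation of the candidate anatomy; the loop structure, however, is exactly this survival-of-the-fittest pattern.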

Q:    How do you compare the similarity of the computer’s design with the actual organism that is built?

A:    The behavior of each organism was traced and compared with its virtual design. To determine whether an organism’s movement was a result of chance or of the design’s evolved geometry and tissue placement, the geometry and tissue distribution were altered by rotating the design 180° about its transverse plane (flipping it over onto its “back”). The shape and tissue placement of the built organism were compared with the design using computer vision.

Q:    Why make new organisms?

A:    Nature has only explored a vanishingly small part of the space of organisms that could be assembled from biological cells. What else is out there? We need to understand (a) what kinds of structures cells are able to cooperate towards building, (b) how they decide what to make, when liberated from their normal context (how much plasticity is there), (c) what is possible without editing the genome, and (d) how complex outcomes can be manipulated rationally.

Q:    What fundamental questions does such work answer?

A:    This project seeks to determine the degree of emergent cooperation that cells can exhibit without genomic editing when liberated from the normal constraints of an embryo. In other words, what other functionalities exist in the physiological software encoded by a standard frog genome? Information about how cells make decisions during the process of assembling a new body sheds light on the origin of multicellularity, the computational capabilities of single cells, the exploitation of physical forces by evolution, the origin and specification of the default bodyplan normally built by embryonic cells, and the capacity for bioengineering new living machines with useful functions.

Q:    Why are computer-designed organisms called “reconfigurable organisms” in the technical paper?

A:    Frog cells were pieced together to form a new configuration, which is different from a frog. The cells which were configured by natural selection to become frogs were reconfigured by AI to create new forms and functions. The aggregated cells of the resulting organism can also be disassociated and reconfigured to form a new organism.

Q:    Couldn’t these constructs start evolving beyond our control?

A:    There is no evolution here: these CDOs have no reproductive organs. They simply degrade and become non-functional after about seven days. However, living organisms (those made by human-guided mating, bacteria and viruses generated by human travel and impact on the foodchain, etc.) do evolve beyond our control all the time; the best way to deal with this fact going forward is to understand it and learn to guide it.

Q:    What advantages and disadvantages do CDOs have over microrobots?

A:    Microrobots are made from metal, ceramics and plastics, so they are stronger and can thus, in theory, survive longer than CDOs. But, small amounts of metal can be very harmful for internal organs; CDOs are completely biocompatible and biodegradable.

Q:    If an AI did indeed design these organisms, couldn’t an evil (or ignorant) AI design harmful (or unintentionally harmful) organisms?

A:    An AI intent on causing harm seems unlikely, but designing organisms with unintentional side effects is a possible outcome for this technology. We thus believe that all computer-designed technologies — including organisms — require human verification before being created physically, let alone deployed to perform (hopefully) useful work. Further, regulation of such technology is an important next step in the policy space. Regardless, the potential of harm in these kinds of creations is infinitely smaller than current efforts in the virology, bacteriology, and genome editing spaces.

Q:    Couldn’t someone program the AI to design weaponized CDOs?

A:    In theory, yes. At the moment, though, it is difficult to see how an AI could create harmful organisms any more easily than a talented biologist with bad intentions could. Despite this, we believe that, as this technology matures, regulation of its use and misuse should be a high priority. Again, though, the possibility of misuse is much, much smaller than what is being done with self-reproducing agents like bacteria, viruses, and gene drives.


Pedro Canhenha

Design Literacy, or the establishment of this discipline, has become a typical expression within the arsenal of industry jargon for Designers these days (on par with a lexicon that includes Design Thinking, Omnichannel and Multi-Platform, among a few others). It’s typically associated with the process of educating and disseminating topics associated with the Design discipline across an organization, specifically across different teams of different natures (including, but not limited to, Product, Development, Sales and Customer Support, among many others). The intent of this article is to provide some reflection points, and hopefully some informed recommendations, on how to define a process by which Design Literacy is effectively established, based on past experience, study and observation.

Design Literacy. Wikipedia has a lengthy definition of Literacy, but I’ll quote this snippet as a clarification of the meaning of the word itself: “literacy is an ability to identify, understand, interpret, create, communicate and compute, using printed and written materials associated with varying contexts.” I first came to understand Design Literacy during my own academic studies and post-college training, which I applied in one of my first jobs as an educator/teacher of Multimedia programs and, subsequently, diverse Design software training programs. As a fundamental part of the curriculum of those courses, it was my responsibility to provide context for the Design discipline, funneling the focus from Design as a broad subject to Interactive Media in particular (which was primarily what I taught). The curriculum for these courses was rich and quite substantial, but one of their key aspects was the creation of a common-denominator level of understanding of the context of Design (Interactive Design), and how that was being distilled into the program being taught. This required thoroughly mapping out classes and their respective content (both theoretical and practical aspects), always keeping in mind that the information being shared was aimed at an eclectic group of attendees with different levels of knowledge of the Design discipline in general, and of the tools that were part of the courses in particular. The reason I’m outlining these past experiences is solely to build this analogy: creating context, educating, and mapping out Interactive Design training sessions is in reality quite similar to the process Designers currently take upon themselves in order to educate their peers in the organizations they’re a part of.
I’ve been given the opportunity to work with a wide variety of companies, from large Fortune 500 firms to incredibly dynamic, and vastly smaller, startups, each possessing a different level of education when it came to Design: Designer roles, expected outcomes from Design-related initiatives, collaboration venues, communication processes, among many other Design-related items. I worked with organizations where Design had already claimed its place and started a process of disseminating its processes and relationship-building venues, and even in those situations the Design Literacy topic was always being fine-tuned and finessed. The way Designers educated their peers on the discipline was, in these organizations, part of a semi-established process, which included the utilization of tools such as Design Systems, solution-driven Design Thinking processes, Design Sprint methodologies, and a variety of tools which essentially documented, and consequently informed, team members of the philosophy of the organization towards Design, and how to best utilize the outputs of that same discipline. On the opposite end of the spectrum, I worked with organizations where Design Literacy barely existed. In these cases, Design was primarily treated as a service-driven discipline, without much reach or depth in terms of influence, impactful outcomes and overall healthy partnerships with other groups within the organization. Here, the challenge was, and has been, to create context for what the Design discipline is, how it integrates and ingratiates itself within the tissue of an organization, and how it can revolutionize the narrative of products, solutions and relationships with users/clients. Design Literacy, much like any topic these days related to technology in particular and society in general, is constantly evolving.
Designers have the responsibility and duty to keep abreast of what is happening not only within their organization, but also in the sector they work in, the larger macro-economic reality of the world, and the social responsibility associated with Product Design and inclusivity, among many other factors which constantly permeate Design Literacy and make it evolve continuously.

Reality Check. Establishing Design Literacy in an organization can be challenging and time consuming. It’s a necessary investment, one that produces results across a variety of subjects, including more relevant and accomplished solutions, effective team integration (which, as a side note, also implies swifter onboarding processes) and brand awareness, among many others. Below are some points worth considering when tackling an endeavor such as this one.

1. Transparency — I’ve addressed the topic of Transparency in the past, but I’d like to reiterate that it’s a cornerstone of this endeavor. Design Literacy is all about being transparent and communicating with different teams on a variety of topics, including the definition of processes (specifically, what Design Thinking, Design Sprints, Workshops, Research, User Interviews and Usability Testing are, and the list goes on), team integration, how problems are defined, expected outcomes (of different natures, including, for instance, artifacts produced by Design teams), assessing friction points (both external and internal), and defining retrospective analysis (reflecting on how processes have taken place and measuring their outcomes); this list also goes on. Without transparency, there’s less room for participation, collaboration and questioning, which dampens the process and warps the solutions that are created.

2. Communicate and Educate — Designers have to understand, now more than ever before, that their role has a large component tied to education, on top of the catalyst and alchemist ones. In order to bring out the best of each team one collaborates with, everyone has to understand the journey they’re embarking on and the language everyone is speaking. For Designers, this means detailing what Design Thinking, discovery and research processes are (also topics I’ve written about previously), all neatly tied with effective documentation tactics. By documenting, reinforcing collaboration, seeking participation, and communicating expectations and requirements, Designers can successfully start educating and disseminating what Literacy is about, and how it informs the hopefully successful outcomes of the initiatives taking place.

3. Listen — Literacy will never be achieved if Designers don’t listen. And listening comes from multiple sources: clients, internal stakeholders, peers, anyone who comes in contact with these professionals. Education is a relationship, and as such it’s a two-way communication, a process by which information gets passed around, where Designers transmit but also absorb knowledge about the tissue of the organization, its teams, and their users. The education process, the literacy that is accomplished, should never happen in a siloed context — it’s an eminently social process which requires Designers to understand the context where the organization lies and, consequently, where they’re inserted.

4. Outcomes — The output of Design Literacy can take a variety of shapes. As mentioned previously, being able to document and share what defines Design — its language, its vocabulary, its methods — is fundamental to this type of initiative. Design Systems, Style Guides, Design Guidebooks, Confluence pages, Wiki pages: all these different artifacts which are produced these days are a manifestation of how Design Literacy takes place. They are but a small outcome of this bigger endeavor, one that, as the previous points urge and highlight, should be democratized across the entire organization.

Another Reality Check. Being a Designer these days is an enticing opportunity. It allows professionals in this field to become aware of many topics, not just for the sake of a trend or a superficial gimmick. Professionals are now empowered to understand more about the organization where they’re embedded and about the users they’re relating to — in essence, becoming powerful storytellers. These stories can only be told effectively if we all understand the plot and where we expect to be led. And nothing helps more in that path definition than Design Literacy.

I’ll conclude with the following quote from William Butler Yeats on education, one of the topics of this article:

“Education is not the filling of a pail, but the lighting of a fire.”


We’re all familiar with the current font formats (otf and ttf) that we can install on our computers and even on mobile devices. But what about web fonts? Or apps? After a long debate on Twitter, I decided to write about different font formats and their uses.

What I learned from that debate is that different things happening in the same environment may become confusing and be regarded as the same or similar, so different font formats get associated with the wrong use. When it comes to the web, it seems that not all font authors know exactly how their work is used, and this might contribute to the paradoxical font licensing I previously wrote about.

Same environment doesn’t mean all things happening there are the same and use the same tools

Why do I say that? Well, think about it this way: in a house, two people can do two different things; one is cooking and one is sleeping. These activities are each performed by different entities, they output different results, have different purposes, and need different tools. They have nothing in common other than taking place in the same house.

The same goes for things happening on a server. A server is actually a computer with some specific software installed, and computers can be used for different things that are not necessarily related to each other beyond sharing the same environment.

The example I wanted to give was about two things that happen on a server, both of which include the usage of fonts in two very different ways.

The two things are: server-side PDF generation and dynamic content generation with a CMS.

What is a CMS?

CMS stands for Content Management System, and the most popular example of a CMS is WordPress. It’s software that takes the content you write in a dashboard and creates a page with it using a pre-built template. This operation is done on the server, where the content is stored in a database.

But whether they use a CMS or not, all websites need a server to be hosted on to be viewable/accessible online, and they all use the same font formats — .woff2 and .woff. Woff2 is the newer version of .woff, with an improved compression algorithm.

What exactly is .woff2/.woff format?

Woff stands for Web Open Font Format and is a web-exclusive font format that cannot be used anywhere else or in any other environment. It’s a compressed format created especially to be “read” by browsers, which then render the website’s text in it.

It cannot be installed on a computer, and it cannot be converted back to a desktop font.

You can learn about how to embed web fonts self-hosted with @font-face, from Google Fonts, or correctly include them in WordPress, with the quick guide to using web fonts.
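As a minimal sketch of what self-hosted embedding looks like (the font-family name and file paths below are placeholders, not from any of the guides above), an @font-face rule points the browser at the .woff2 file, with .woff as a fallback for older browsers:

```css
@font-face {
  font-family: "MyWebFont";              /* the name you reference below */
  src: url("/fonts/mywebfont.woff2") format("woff2"),
       url("/fonts/mywebfont.woff") format("woff"); /* fallback */
  font-weight: 400;
  font-style: normal;
  font-display: swap;                    /* show fallback text while loading */
}

body {
  font-family: "MyWebFont", sans-serif;  /* system fallback last */
}
```

The browser downloads only the first source it supports, which is why the smaller .woff2 is listed first.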

How difficult is it to create .woff fonts?

Not difficult at all. It only takes seconds to convert a desktop font to web format (woff). It can be done with Font Squirrel’s free webfont generator, for example (though please note that your font license must allow it), and in almost all cases font creation software has a built-in option to export to different formats, including web.

No extra work from the font’s author is necessary.

What do CMS and PDF generation have in common?

Short answer is: nothing much. Other than being software that runs on a computer, nothing. Content generation has nothing to do with PDF generation.

Actually software itself is a very broad term that includes a huge amount of types, purposes, programming languages, and environments under the same generic name.

Font formats currently in use

Font format, its use, and whether it is installable on a computer/server:

  • OTF and TTF — Everything that requires locally installable fonts, including branding/logo, graphic design, video production, PDF generation, server install, and mobile apps. Installable: Yes ✓
  • WOFF2 and WOFF — All types of websites, including websites that use a CMS like WordPress or a platform like Shopify. Installable: No ×
  • Proprietary formats — Only included in the apps they were designed for. An example would be LaTeX. Installable: No ×

Logos, graphic illustrations, and banners you see on websites are images made with desktop fonts. Web fonts are used only for content text (like the one you’re reading now).

Although OTF and TTF are the standard for digital (not web), DTP (desktop publishing) and print industry still frequently use Adobe’s older standard format — PostScript. PostScript fonts have multiple extensions:

  • .pfb, .pfm, .afm (Windows)
  • .pfm, .afm, .inf (MacOS)
  • .pfa, .afm (Linux)
  • .ofm (OS/2)

The proprietary font formats are a bit beyond the scope of this article, as their use is strictly limited to the app they were designed for and were mostly created before the current standard formats.

Legacy web font formats not in use anymore

Note: EOT, TTF, and SVG (mentioned below) can still be seen on websites created a few years ago.

  • TrueDoc — developed by Bitstream in 1994, it was included with Netscape Navigator from version 4 to 6. While not specifically a webfont format, it was the first standard for embedding fonts on the web
  • EOT (Embedded OpenType) — a proprietary format owned by Microsoft and used in Internet Explorer from version 4 to 11
  • SVG (Scalable Vector Graphics) — used in old versions of Webkit-based browsers, mainly in Safari and iOS Safari but also available in Chrome 4-37 and Opera 10-24, it has been replaced by the standard .woff. This format should not be confused with the currently available svg format which is desktop installable only and can only be used in graphic software such as Adobe Illustrator.
  • TTF (TrueType Font) — While some browser compatibility still exists, this is absolutely not recommended for use, as woff format is essentially a compressed form of .otf and .ttf with additional metadata that can be read by the browser.

Legacy desktop font formats not in use anymore

Bitmap/raster — First type of fonts available in computers. You’ll recognize them as having a pixelated aspect and not being scalable. Each glyph is a bitmap (array of pixels).

There were a lot of non-standardized formats, as each operating system and software created its own:

  • PCF (Portable Compiled Font)
  • BDF (Glyph Bitmap Distribution Format)
  • SNF (Server Normal Font)
  • DWF (DECWindows Font)
  • BF, AFM (Sun X11/NeWS Font)
  • FON (Microsoft Windows)
  • Amiga Font
  • ColorFont
  • AnimFont
  • BMF (ByteMap Font)
  • PSF (PC Screen Font)
  • PK (Packed bitmap font bitmap file for TeX DVI drivers)
  • FZX (proportional bitmap font for ZX Spectrum)

We covered the idea of animating curved text not long ago when a fun New York Times article came out. All I did was peek into how they did it and extract the relevant parts to a more isolated demo.

That demo is here:

See the Pen “Selfie Crawl” by Chris Coyier (@chriscoyier) on CodePen.

@keyframers covered it much nicer than I did and made this video. (Seriously, it’s so clear and concise — it’s a great watch.)

I particularly like how quick and easy creating, exporting, optimizing, and integrating the actual SVG code is in their demonstration.

Their final demo:

See the Pen “SVG textPath Animation on Scroll Tutorial | Keyssentials: Web Animation Tips by @keyframers” by @keyframers (@keyframers) on CodePen.
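For reference, the core primitive both demos build on is SVG’s textPath element, which flows text along an arbitrary path. A minimal static sketch (the id and path data here are arbitrary placeholders) looks like this; the demos then animate the text’s startOffset as you scroll:

```html
<svg viewBox="0 0 500 200" width="500" height="200">
  <!-- invisible curve the text will follow -->
  <path id="curve" d="M 50 150 Q 250 20 450 150" fill="none" />
  <text font-size="20">
    <!-- animating startOffset slides the text along the curve -->
    <textPath href="#curve" startOffset="0">Text on a curve!</textPath>
  </text>
</svg>
```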


You’ve planned and created your best email campaign yet and you’re excited to hit “send.”

Now, before you do, ask yourself this question, “Did I remember to include an actionable CTA?”

We don’t mean a simple “buy now” CTA. If you want your CTA to be truly effective, you must know how to write your CTAs to fit your campaign.

Common CTA strategies include “Buy now!” or “Visit today!” However, to make your CTA truly stand out, you need to stay up to date on CTA writing and design best practices and take some time to learn from outstanding, real-world examples.

How you write your CTAs will affect the success of your campaigns.

Each of your email campaigns serves a purpose. Without a CTA, your subscribers have nothing to act on, leaving your emails nearly useless. Having either a hyperlinked CTA or a clickable button CTA gives your readers a chance to act on something, such as:

  • Downloading a freebie
  • Clipping a virtual coupon
  • Heading over to your shop to browse

An effective CTA example from Victoria’s Secret

Source: Gmail/Victoria’s Secret

Without these CTAs, there is, again, nothing for your readers to act on, making your emails nothing more than a digital piece of information—which isn’t necessarily a bad thing. However, you won’t get the ROI you’re aiming for without an actionable CTA.

Learning how to design and write your CTAs can make or break your campaign.

Taking the time to learn how to write your CTAs and design them can make a significant difference in the overall success of your email campaign. From placement to color choices and choosing between hyperlinked CTAs and button CTAs—they all play vital roles in not only whether your readers will choose to interact with your CTAs, but whether your readers will even notice them.

That said, here are some interesting facts regarding CTA creation and use in email marketing:

  • Forty-eight percent of brands choose to match their CTA to a color that they used in their brand logo – Really Good Emails
  • Button-based CTAs can improve click-through rates by 28% – Campaign Monitor
  • Benefit-focused copy in a CTA button can increase click-through rates by nearly 10% – Campaign Monitor
  • First-person text in a CTA can increase clicks by almost 90% – Campaign Monitor

button-based CTA example by Campaign Monitor

Source: Campaign Monitor

Having a CTA in your email marketing campaign can make all the difference. However, merely slapping in a traditional CTA simply won’t do it anymore. That’s why taking adequate time to learn how to write your CTAs and how to design them is crucial.

Learning how to design your CTAs effectively

Traditionally, email designers put little thought into designing the actual CTA that was included within the body of an email campaign. For many years, this was simply left to the writing team because hyperlinked CTAs were the way to go. In many cases, these CTAs are still perfectly acceptable, as in this welcome email from social media guru Kelsey Chapman.

Example of a hyperlinked CTA by Kelsey Chapman

Source: Gmail/Kelsey Chapman

As we move into a new decade, technology is changing, and with it, consumer preferences. That’s why it’s vital to know not only how to write your CTAs, but how to design them as well. So we’ve compiled an essential list of the most crucial CTA design best practices that you should keep in mind during your email design phase.

CTA buttons perform better than hyperlinked CTA text.

While hyperlinked CTA text is still a viable design option, brands have noticed that consumers prefer a clickable button CTA over a hyperlinked CTA. In fact, during our own research, we found that simply adjusting our CTA in one campaign from hyperlinked text to a clickable button increased our overall click-throughs by 127%.

CTA button vs. hyperlinked text example

Source: Campaign Monitor

Make sure your CTA is clearly identifiable.

One reason why consumers prefer clickable CTA buttons is that they’re much easier to find than hyperlinked text options. Unfortunately, while using a hyperlinked CTA is still common practice, many brands leave the text in the same color as the rest of the email text. This makes it nearly impossible to identify quickly.

Here’s the thing: Only a handful of your readers are going to take the time to read your email. The rest are going to scan for important information, including the CTA button. If it’s not easy to spot, then your readers are going to move on without a second thought.

CTA placement is vital.

Since more consumers are spending time scanning their emails for relevant information, it’s vital to consider the placement of your CTA within the body of your email. While many brands include their CTAs at the end of the message, you want to place your CTA above the fold.

Above the fold means within the first viewing window your readers get after opening your message. The more scrolling a reader has to do, the less likely they are to find and click on your CTA.

Example of a CTA placed above the fold

Source: Really Good Emails

Learning how to write your CTAs effectively

Now that you’ve gotten a chance to review some CTA design best practices, adopt the same philosophy into how you write your CTAs to get the most out of each campaign.

Always include action-oriented text.

Remember, the entire point of your email marketing efforts is to drive action. The most effective way to do that is by always including action-oriented text within your CTA. Popular action words for CTAs include:

  • Try
  • Buy
  • Get
  • Order
  • Reserve
  • Download
  • Add
  • Sign up
  • Register

 Examples of CTAs with actionable text

Source: Self-made

Avoid “friction words.”

While you want your CTAs to be actionable, you also want to make sure you avoid the use of friction words. Friction words are words or phrases that imply your reader must do something they may not really want to do. Some common friction words that are traditionally used in email marketing CTAs include:

  • Submit
  • Order
  • Download

While these are all actionable words, they tell the reader what to do instead of encouraging them. Here are various ways you can alter your CTAs to include frictionless words:

  • Download – Get
  • Order – Reserve
  • Apply – Learn

Example of an actionable, frictionless CTA from Breguet

Source: Really Good Emails

CTA text should be both large and legible.

When designing your email CTA, we mentioned that you have to make it easily noticeable. The most effective way to do that is by making sure your text is both legible and big enough to stand out. However, that doesn’t mean you want to make it obnoxious.

Take this example from Resy. Their CTA is very legible, thanks to the font and coloring they chose during the design phase. They took it a step further by bolding the text. Notice, however, that it doesn’t look clunky or out of place.

 Example of a bold, short and sweet email CTA from Resy.

Source: Really Good Emails

The best way to make your CTA bold and legible is by choosing a font that matches your text hierarchy. To do this, choose something similar to the fonts that you used for your heading text.

Keep CTA text short and sweet.

Along with having a bold, legible CTA comes one that’s both short and sweet. At this point, your reader should already understand the benefit of clicking on your CTA, so you want to keep the text short and simple. Ideally, your CTA will only be 3-5 words in length. Anything more than that begins to look too messy.

Short and sweet email CTAs in action

Source: Gmail/Chewy

First person/personalization goes a long way in your CTA.

Now, adding first person into your email CTA doesn’t have to be complicated. In fact, it can be as simple as saying “Reserve my seat” instead of “reserve a seat.” Studies have shown that simply changing this one word in a CTA can increase clicks by nearly 90%, a number that warrants consideration.

If first-person doesn’t really sound right in your mind, then simply personalizing with the second-person point of view works great too. So, instead of “reserve a seat,” you can opt for “reserve your seat.” This added touch of personalization makes your call to action that much more inviting to your subscribers.

Example of a personalized CTA by PlayStation

Source: Really Good Emails

Wrap up

Knowing how to write your CTAs is a vital part of your email marketing process. Again, if you want to see the ROI from this marketing strategy, you have to give your email subscribers something to do.

When it comes to your email CTAs, you’ll want to keep in mind some of the design and writing tips we’ve covered, including:

  • Including action-oriented text
  • Avoiding “friction words”
  • Using a button instead of hyperlinked text
  • Keeping it short and sweet

Looking for a little more guidance on how to write and optimize your email CTAs? Then make sure you check out our email CTA optimization guide today.


Digital colors, reimagined

Talking about colors is hard without context. That’s why we provide a dedicated page for every color code. Use it to find matching color variants, learn more about color properties or convert color values.

The sum of its parts

Every color contains an individual composition of basic colors. Forget everything you learned about RGB or HSL sliders and explore a new way to modify digital colors.

Endless possibilities

Still looking for the right color? Compose new colors out of existing ones. If you blend a color with black or white you can even create the corresponding shades or tints.
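The shade/tint idea described above is just linear blending between two colors. A minimal sketch of how that might work (the helper names here are hypothetical, not part of any particular tool):

```javascript
// Colors are [r, g, b] arrays; t is the mix amount from 0 (all c1) to 1 (all c2).
function blend(c1, c2, t) {
  return c1.map((a, i) => Math.round(a + (c2[i] - a) * t));
}

// Blending toward black produces shades; blending toward white produces tints.
const shade = (color, t) => blend(color, [0, 0, 0], t);
const tint = (color, t) => blend(color, [255, 255, 255], t);
```

For example, `shade([200, 100, 50], 0.5)` darkens the color halfway toward black, while `tint` with the same amount lightens it halfway toward white.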

Your palettes, your style

Use your favorite colors to build up a personal color library. Create project-specific palettes and share them with others. Need to document complex color collections? Add structured text content and turn your palettes into living style guides within seconds.


We have all joined more design-themed Slack, LinkedIn and Facebook groups than we can keep up with — maybe in response to the visceral human need to feel like part of something bigger than ourselves. But the reality of online communities is quite different from what they initially seem to promise. Groups with thousands of designers either become inactive once members realize they have little in common, or remain active but end up devolving into an endless stream of self-promotion and content marketing pieces. Discussion threads on Reddit or DesignerNews don’t delve deep enough into a topic because they are held back by miscommunications between participants. Design Twitter slowly becomes a shallow stream of polarizing, angry, and loud voices.

While large online communities play an important role in making design more accessible to more people, we have to re-focus on the smaller communities we build ourselves in order to get the full value out of our conversations.

All this doesn’t mean designers have stopped having online conversations with one another; it just means these conversations are migrating to a new type of community which is more intimate and focused. They are happening over WhatsApp, Telegram, direct messaging, and super-niched hubs. They are happening one-to-one or in small groups, rather than in large forums.

Designers are informally creating their own sounding boards: people with whom they feel comfortable sharing feedback, exchanging design references, discussing trends, or asking for advice on topics like salary, work dynamics, and career.

The same shift can be seen with design events. While large design conferences are a great platform for networking, small local meetups are more rewarding when it comes to learning and development, since they allow participants to engage in more real and honest question-and-answer sessions.

“The intimacy of smaller settings allows people to open up to each other in more authentic ways,” explains Kat Vellos, Senior Product Designer at Slack and founder of BayAreaBlackDesigners. “Smaller groups make it easier for participants to build psychological safety with each other. That’s much harder to do in a large room with hundreds or thousands of people. Psychological safety is the most important thing for getting people to trust each other and gel, and small groups/events will always be able to provide this in a more manageable way than humongous conferences.”

In 2020, the most relevant discussions in design are becoming local, authentic, and focused. Large communities become primarily a way to find and build smaller ones. In a world where everyone is shouting at each other, quieter and more thoughtful conversations become incredibly precious.


Trends in storytelling generally stem from media and entertainment. Once these early adopters demonstrate success, edgy B2C brands borrow with pride. Then, B2B players catch the wind and test out the tactic for themselves.

And so it goes with episodic storytelling. Streaming services and premium television publishers are purposefully presenting stories as episodes to keep audiences coming back for more. It works for Disney+. It works for B2C brands. And, as more marketers are beginning to realize, it works for B2B brands.

Episodic content is a format audiences know well, and it drives tremendous engagement. To use it well, you need to:

  • Deliver high value in a shorter than usual timespan. You want your audience to think, “That was interesting, and it didn’t take too long,” and they’ll be willing to come back for more.
  • Provide a clear call-to-action at the end of the series. If you have asked someone for this much of their time, and they have cared enough to give it, be clear on what should happen next.
  • Think about why you love your favorite series. You can’t get enough of “The Mandalorian” because you are invested in the characters. A nonfiction narrative usually works better than a made-up story for B2B brands, but you can still use storytelling best practices. Establish strong character bonds so people are invested, and end episodes with unanswered questions – cliffhangers – but don’t forget to resolve them quickly.

Episodic storytelling could mean the continuation of the same plotline, or serial storytelling like “Black Mirror,” in which each episode introduces new characters. Either way, quality, character-driven storytelling will reap the best results. Tips for success include:

  • Let go of fear! Trust that your plot and your characters will deliver real value.
  • Learn from B2B brands that are doing this right. MailChimp’s animated series, “Outer Monologue,” Intel’s drone and AI series, “Preserving a Legacy” and Zaius’ “Marketing Unboxed” are good examples.
  • Think beyond video. Episodic storytelling works in virtually all mediums, including blog posts and podcasts. In fact, B2B podcasting will explode in 2020.

All brands will need to morph into media companies if they want to survive. B2B marketers may be hesitant to try episodic storytelling, but I have seen content fail because the story was delivered all at once when it would have been better told in installments. Audiences like episodic storytelling. It is time for B2B marketers to master it.

Soapbox is a special feature for marketers in our community to share their observations and opinions about our industry. You can submit your own here.

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

Erin Craft is vice president and managing director of strategic creative at Centerline Digital. She double-majored in business management and microbiology, so, naturally, she took a year off after college to focus on photography and work with horses. When she decided field research wasn’t for her, she found the perfect place for a scientific mind who admired a well-told story at Centerline. Over a decade later she continues working to master marketing at the intersection of technology and humans. Erin relishes the daily opportunity to switch between right- and left-brain approaches to solve communications challenges from a variety of different angles. With her unique blend of experience, an eye toward creativity, a tight grasp on budgetary concerns and old-fashioned common sense, she guides accounts to measurable success.


Salesforce has updated its commerce APIs and added a Mulesoft Accelerator to its Commerce Cloud for faster integrations. It also announced a new dashboard to track performance for its Einstein AI-powered recommendation system and released a Salesforce Order Management solution for e-commerce teams on Monday.

The announcements came during the kick-off for NRF 2020, the National Retail Foundation’s annual conference.

New Einstein AI dashboard. The new dashboard within Commerce Cloud reporting shows how Einstein AI-powered product recommendations are performing within a company’s storefront platform.

“Einstein AI dashboard provides near real-time metrics so merchandisers have the actionable information they need across sites, storefront pages and custom date-ranges,” Salesforce said.

Improvements to commerce APIs and Commerce Cloud. Salesforce’s updates to its commerce APIs and Commerce Cloud solution are designed to help shorten the production process for custom commerce apps built on Salesforce’s platform and aim to help marketers and e-commerce teams “reach shoppers at any touch point.”

Salesforce added the Mulesoft Accelerator to Commerce Cloud to “jump-start” commerce solution integrations. The Mulesoft Accelerator comes with pre-built templates for enterprise resource planning (ERP) and product information management (PIM).

A Salesforce Order Management platform. Salesforce said the new solution “bridges the gap” between physical and digital channels, connecting and automating fulfillment and customer service processes.

The platform works with Salesforce’s mobile POS partners Mad Mobile, NewStore and PredictSpring.

A community-driven initiative for developers. Salesforce has also launched a Commerce Cloud Developer Center to create a “community” for those building e-commerce APIs. The center will provide resources, best practices and a way for API developers to engage with others in the field. It will also include developer toolkits and sample apps.

Why we care. Salesforce’s latest updates are aimed at providing more holistic commerce martech solutions, from the Order Management platform that offers a unified view of the commerce experience to enhanced commerce APIs to faster app development and easier integrations with Commerce Cloud and finally to the additional layer of e-commerce analytics with the Einstein AI dashboard.

About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


In June 2017, about 100 employees of Apple Inc gathered at the company’s headquarters at 1 Infinite Loop, Cupertino, to hear a presentation designed to scare them witless.

Staffed by former members of the National Security Agency and the US military, Apple’s global security team played video messages from top executives warning attendees never to leak information.

“This has become a big deal for Tim,” Greg Joswiak, Apple’s vice president of marketing, said at the time. “I have faith deep in my soul that if we hire smart people they’re gonna think about this, they’re gonna understand this, and ultimately they’re gonna do the right thing, and that’s to keep their mouth shut.”

A secretive culture – bordering on paranoia – was first fostered by Steve Jobs, the founder of Apple, and then by his successor Tim Cook, who took over in 2011.

Apple employees typically sign several non-disclosure agreements (NDAs) per year, use codenames to refer to projects, and are locked out of meetings if they fail to obtain the appropriate documentation, former workers told us.

“Secrecy is everything at Apple,” one ex-staffer said. “Many employees don’t like Apple Park [the company’s new headquarters] because it has very few private offices. Confidentiality on projects and the ability to step behind a closed door is vital.”

Another recent ex-employee said that security was weaponised across the company, with internal blogs boasting about the number of employees caught leaking and NDAs required even for non-sensitive or mundane projects. The employee described how they were once asked to read a negative story about the company and then identify the Apple insider suspected of leaking information.

Since becoming chief executive, Cook has doubled down on security, catching 29 leakers in 2017 alone, according to an internal memo leaked to Bloomberg in 2018 (the company does not publicly disclose such figures).

Yet Cook has also radically shifted Apple’s priorities, sometimes in directions that his predecessor would not have understood or condoned. Understanding what has changed at the company in the 3,015 days since Jobs died of pancreatic cancer is arguably more critical to understanding Apple in 2020 than identifying what has remained the same.

Since 2011, Cook, a quietly spoken 59-year-old from Alabama, has built Apple into the largest tech company in the world, with a market valuation of more than $1 trillion. More than two thirds of that value was accumulated after Jobs’ death.

Steve Jobs (left) and Steve Wozniak in 1977, launching the Apple II computer

He has achieved such stellar growth partly through the sale of iterative updates to Apple’s flagship iPhone and the launch of new products such as the Apple Watch. Even though iPhones continue to drive more than half of Apple’s revenue, sales are sputtering as the smartphone market reaches maturity. So Cook is spearheading the company’s biggest shift in more than a decade: a switch away from making devices to providing services that touch almost every part of our lives.

From Apple TV+ to Apple Music, from Apple Pay to Apple News+, Cook’s company is now the gateway through which millions of us live our lives. We watch movies, pay for groceries, read the news, go to the gym, adjust our heating and monitor our hearts through Apple services, which is now the company’s fastest growing division.

Living within this carefully curated ecosystem, soon to be bolstered by new augmented reality products, the company’s 1.4 billion active users have become less like customers and more like citizens. We no longer just live our lives on Apple’s phones, but in them.

Apple’s market valuation is roughly equal to the national net worth of Denmark, the 28th wealthiest country in the world. It has as many users as China has citizens. Its leader has a close relationship with the US president and other heads of state. In all but name, this is a superpower, wielding profound influence over our lives, our politics and our culture.

That’s why Tortoise has decided to report on Apple as if it is a country: the first instalment in a year-long project we are calling Tech Nations, which will cover all the main technology giants. Here, we’ll examine Apple’s economy, its foreign policy and its cultural affairs. We’ll dig into its leadership, its security operation and its lobbying spend. We’ll identify the executives likely to succeed Cook, and the areas where Apple is falling behind in the global tech race.

As Jobs might have put it, we’re trying to “think different” about the small computer company founded in a Los Altos garage back in 1976.

We have learned:

  • Apple is now unmistakably Tim Cook’s company. The 59-year-old has built an organisation radically different to the one left behind by its founding father, Steve Jobs.
  • Cook’s Apple resembles a liberal China. It is devoted to enabling individual creative expression, but on its terms: it has become a highly centralised, hierarchical and secretive state.
  • Cook’s Apple is defined by a corporate vision, rather than product innovation. Apple has a written constitution. Since Cook assumed power, he has fundamentally changed how Apple deals with suppliers, acts on the environment, engages in politics, produces and promotes cultural content, all while making privacy and security part of the brand.
  • Apple’s emphasis on privacy was dramatised by its refusal, in early 2016, to help the FBI unlock the iPhone of one of the San Bernardino terrorists. Back then, this put the company on a collision course with the state, prompting the question: who sets the rules? Now, after Facebook’s involvement in the Cambridge Analytica scandal, Apple is on the same side as lawmakers who increasingly want to deal with privacy breaches.
  • Apple’s move towards services is risky. Former employees, experts and external partners told us that the company’s focus on excellence was often not obvious in software-based products, such as Apple Music or Apple TV+. The departure of Sir Jony Ive has given Eddy Cue, the head of Apple’s services division, and Jeff Williams, its chief operating officer, increased prominence.
  • To make Apple TV+ a “Netflix killer”, the firm is entering into “crazy” deals which have helped inflate the price paid to actors and directors, studio insiders said. Two executives told us that the stars of Apple’s flagship Apple TV+ show, The Morning Show, Jennifer Aniston and Reese Witherspoon, had been offered the entire rights back after around 10 years. Netflix, by contrast, keeps rights for life.
  • Autonomous cars and augmented reality will form a big part of Apple’s future. Apple patents we’ve seen envisage facial recognition data combined with in-car software identifying pedestrians by name – a development which could provoke privacy concerns.
  • Apple is falling behind in the race to harness artificial intelligence. Compared to Google and Facebook, Apple neither collects as much data as its competitors nor has the resources to exploit it as effectively. It is trying to change this picture by hiring top executives and buying up at least one AI company per year since 2014.

The People’s Republic of Cupertino

It may well be the most expensive metaphor ever built. In Cupertino, California, several sweaty miles away from Santa Cruz, is Apple Park, the headquarters of the tech company that has its logo on a billion iPhones. There is an actual park there, where employees can walk or cycle between the meticulous groves of apple trees and around a circular pool, although anyone else would be lucky to see it. It is surrounded by a great, glass, multi-storey ring of offices created by the architects Foster + Partners with help from Apple’s outgoing chief design officer, Jony Ive. The ring is a mile in circumference. It cost $5 billion.

Apple can afford such architectural extravagance. It is the company that stood against the tide and turned it; helping us all to realise that computers could be more than just beige boxes, that headphones could be white, that telephones can do everything. Its decisions have defined our digital – and daily – lives.

The Steve Jobs Theater, at the heart of Apple Park

But what is Apple Park a metaphor for? High-minded Apple enthusiasts might say that the ring represents an endless cosmic loop. Or perhaps it is a planet-scale equivalent of the circular home button on earlier models of iPhone. Visiting aliens can click it from space – and go home.

The truth, however, is that it represents what Apple has become: a secret garden with tremendously high walls. Most people who try to peer over the edge are summarily pushed back. Apple is a part of the world but also apart from it. It is Maoism for individualists.

The development of the Macintosh computer, released in 1984, is a revealing origin story for Apple. Jobs had assembled a crew of “pirates” to build a computer as he wanted it, which meant attractive design, a symbiosis between hardware and software, and, most of all, control – of the consumer, by him.

In a hundred small ways, he made the Mac immutable and inescapable. Its elegant contours were actually hard borders, held together by special screws so that bedroom hobbyists couldn’t get inside with their regular screwdrivers. Requests to license out the operating system (so that it could be used on other computers) were refused or ignored. The Mac would be an ecosystem unto itself. People would have to buy into it entirely, or not at all.

Jobs was forced out of Apple for his hubris; then reinstalled in 1997, when the company was on the verge of bankruptcy. With Jony Ive at his side, and until his death from pancreatic cancer in 2011, he introduced a series of products that were like the original Mac in spirit yet incomparably more successful: the iMac, iPod, iPhone and iPad.

Jony Ive (left) and Tim Cook inspect the iPhone XR during an event at the Steve Jobs Theater in September 2018

Against that record, it is easy to dismiss Jobs’ successor, Tim Cook, as a button-down bureaucrat. Whereas Jobs’ Apple was about an idea – Think Different – Cook’s, his critics say, is more about a number, a market valuation of $1 trillion or more. Those critics also argue that the new Apple is less innovative as a result. They point at the Apple Pencil, a stylus introduced in 2015 to supplement the iPad, and set it against one of Jobs’s typically pugnacious speeches from 2007. “Who wants a stylus?” Jobs asked then. “Nobody wants a stylus.”

Yet Cook has made some defining interventions. Other companies, such as Facebook and Google, are happy for a sort of chaos to prevail: an online world that’s sprawling, messy and mostly unregulated, where data can be plucked from the air and passed on to advertisers. Cook is trying to create a refuge: a unified world of hardware, software and services, all under Apple’s flag, where citizens can expect their data to remain their own.

Two of the company’s most significant recent releases are Apple Arcade, a subscription gaming service for iOS devices, and Apple TV+, a Netflix competitor. Executives such as Eddy Cue and Jennifer Bailey, both of whom work on the services side of the company, are now regarded as being as influential as the departing Ive once was. Much like China, Apple is shifting from being a manufacturing economy to a service-based one.

Jennifer Bailey, one of the people leading Apple into the realm of services

At the same time, Cook is doubling down on privacy and security as a differentiator from his competitors. That shift was most obvious in early 2016, when the company refused to assist the FBI in unlocking the iPhone of one of the San Bernardino terrorists. It is clear, too, in the company’s latest advertisements, which are created by an in-house team and a dedicated set of people at the external agency TBWA. “These are private things. Personal things,” says one recent video promoting the iPhone and its data protections. “And they should belong to you. Simple as that.”

There is a sense of necessity, even of wisdom, about these shifts. After all, consumers have become less willing to pay out for iteratively improved phones, so new ways of making money from the phones they already have must be found. The idea is to expand the Apple ecosystem so far that consumers never need to – or never can – leave it.

But this is undeniably risky terrain for Apple and Cook. The economics of services, and particularly of content creation, are very different from those of hardware. This was demonstrated by the almost simultaneous launches of Apple TV+ and its competitor service Disney+ in late 2019. Apple spent a lot of money on its shows, hiring famous actors and filmmakers, but the critical and popular reception has been lukewarm at best. Disney+, having spent no less money, was also able to call upon a wide range of old favourites and newer franchises, such as The Simpsons, Star Wars and Marvel’s cinematic universe – and is succeeding accordingly.

Apple’s traditional approach has been to make products that feel distinctively Apple and that are, at least in part, desirable because of that. But distinctiveness and desirability are harder to pin down when it comes to the shows that are being made for Apple TV+. What can Apple do that Netflix or the BBC cannot? Can it be different, or, for the first time, will it just be the same?

“I honestly don’t know how they will distinguish themselves from Netflix,” one studio executive told us. “When Apple TV+ was launched, it was surprisingly light on content. There was no archive, no back catalogue.”

And there are other risks facing Apple, many of which are of Cook’s own making. Its emphasis on privacy, while laudable, lays it open to the charge of hypocrisy: third-party iPhone apps have already been found spreading data in ways that contravene Apple’s declared ideals. Meanwhile, its main manufacturing base is a country – yes, China – that has become the frontline in an ongoing trade war, and a war over free speech and censorship.

In China, too, Apple is being outpaced by companies like Huawei – and this has an effect on its bottom line. Although Apple’s sales revenues are still monumental, at $260 billion in the year ending September 2019, they are lower than those achieved in the previous year.

When Apple was founded, it was a riposte to the dominant, mainframe thinking of the grand dame of American computing, IBM. But now, over 40 years later, it is a titan itself; it can no longer rely on or represent the shock of the new. Life in Cook’s empire is certainly more prosperous now, but it is also less certain. Behind that futuristic-looking ring in Cupertino is the biggest secret of all: this is a company in the grip of a mid-life crisis.


Reporters: Peter Hoskin and Alexi Mostrous

Editors: Basia Cummings, David Taylor, James Harding

Graphics and design: Chris Newell

Additional research: Ella Hollowood

Picture editor: Jon Jones

All pictures: Getty Images


The field of data visualization has become a tussle between accuracy and beauty. In one corner, designers say that data is fungible as long as the presentation is eye-catching. In the other corner, statisticians argue that clarity should rarely be sacrificed in the name of novelty or entertainment.

The latest AIGA Design Census is a vivid illustration of this skirmish. Published by the oldest and largest professional design organization in the US, the report—based on an industry survey—contains some valuable insights about the country’s creative sector, but some argue that the findings are obscured by the report’s “very bad” data visualization.

Consider this chart from the report denoting the years of experience of 9,000 respondents.


The rows of bright yellow bubbles may be more interesting to look at—they really pop against the salmon-colored background—but they make it difficult for readers to compare the shapes. A simple bar graph, like the one we made below, conveys the information better.


In a table showing work satisfaction versus type of benefit received, respondents are represented via clusters of dots that vaguely resemble bacteria in a petri dish. These organic forms are pretty but, again, statisticians would argue that a simple bar graph would have been the most efficient visual.


“The emphasis is not on how to communicate data as clearly as possible but on a kind of visual cuteness,” observes Stephen Few, a data visualization expert and author of the book Show Me the Numbers: Designing Tables and Graphs to Enlighten, after reviewing the report. “Many charts in this publication that ought to be the same kind of graph are displayed in different ways for the sake of variety.”

Antony Unwin, professor of computational statistics and data analysis at the University of Augsburg, says, “it’s very disappointing. I would expect something better from such an august body.” Asked how he might fix some of the more perplexing graphics, Unwin demurs, “there’s nothing I can ask them because we’re on different planets.”

Accurat, the design firm that developed the graphs for the AIGA Design Census, says that its concern is winning the audience’s attention—even to the point of initially puzzling them. “I feel there’s a value in creating a dynamic presentation of data because capturing the attention of the audience is as important as communicating the data properly,” explains Gabriele Rossi, Accurat’s co-founder.

With offices in Milan and New York, Accurat takes the stylistic component of information design to the point that one of its cofounders’ work has been featured in a fashion line.

Though Accurat also produces fairly straightforward graphics for clients such as IBM, Deloitte, and the Bill & Melinda Gates Foundation, Rossi characterizes the AIGA industry report as more of a marketing document aimed at designers. There’s an underlying assumption about this approach: it suggests that designers need to be entertained in order to be informed.

Two takes on data visualization

The fundamental disagreement between designers and statisticians isn’t rare, nor is it new. In 2012, Unwin and fellow statistician Andrew Gelman wrote a paper that defined the issue:

Outside of statistics, infographics is huge, but their purveyors and enthusiasts appear largely to be uninterested in statistical principles… We worry that designers of non-statistical data graphics are not so focused on conveying information and that the very beauty of many professionally-produced images may, paradoxically, stand in the way of better understanding of data in many situations.

Rossi of Accurat tells Quartz the difference in approach is a result of having different communication goals. If statisticians see data visualization as the pursuit of the clearest and most correct translation of numerical data into graphical form, designers understand that a chart can at times be a tool in a bigger marketing or branding scheme. It’s essentially the difference between a court transcription and an interpretative dance.

“We needed to get traction on social media,” explains Rossi. “Will you really share 10 images on Instagram if they all look the same? If you do, it’s probably not going to be a very savvy social media strategy.”

Like other forms of graphic design, experimenting with fonts, shapes, and grids is a way of avoiding visual fatigue, Rossi says. “For theater posters for instance, of course black and Helvetica [font] on a white background will always be readable, but would you notice it in a subway? In the same way, would you notice a plain and proper bar chart with every grid line in place, as academics recommend?”

Not all untraditional charts are ineffective. One such example in the AIGA Design Census is a chart showing the distribution of time at respondents’ current job for different types of employment. Data viz researchers might scoff at the chart’s missing y-axis and would prescribe bars over curved shapes. Still, it is beautiful and clearly shows that self-employed workers tend to stay in their position longer than those who work for a company.


Statisticians believe that data visualization is about communicating information efficiently. If done right, readers can quickly perceive patterns and relationships in graphical format that would be more difficult to explain through text. In reality, it’s often more complicated than that—especially in today’s attention economy. To keep a reader’s interest, visualizers not only have to think of conveying information succinctly, they have to conjure a way to make it visually appealing and exciting. Bar charts and line charts, though effective, are boring to some people.

Influential information designer Nigel Holmes grappled with this tension when he introduced a new approach to graphics for TIME magazine in the 1970s. Working under art director Walter Bernard, he created bar graphs on horseback, droopy charts on hospital beds, and many other goofy graphs that purists, like Yale University information design pioneer Edward Tufte, might call “chartjunk.”

Mag Men / Columbia University Press

Over the top? Nigel Holmes’s graph for TIME Magazine, 1979.

In the new book Mag Men: Fifty Years of Making Magazines (Columbia University Press), Holmes describes toeing the line:

From time to time I overstepped the mark and illustration got in the way of the numbers. All I wanted to do was help people become interested in the subject of the articles. I spent a lot of time talking to the writers, who helpfully fed me metaphors that I could work into the charts. If I could get readers to smile, I was at least halfway to helping them understand.

After a few years, I felt that perhaps some of the charts had gone too far, so I calmed the illustration down a bit. That led to another round of critical mail: ‘So now we are back to boring charts again?’ A funny thing: after many years of changes in style, I still get requests for the lighter touch I’d used at Time (I’m happy to oblige). The point is the same as it always was—to engage readers.

Terrible charts in the wild

Accurat’s sometimes befuddling graphs for the AIGA Design Census are a minor event compared to the “bad charts” that have become a mainstay of business communication.

A classic example of an inefficient, but well-loved, graphic is the word cloud. A word cloud is a cluster of words associated with a key concept, with the most frequently used words rendered in a larger font size and displayed closer to the center of the graphic. Data experts agree that this is a horrible way to convey frequency.
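The encoding a word cloud performs, word frequency mapped onto font size, can be sketched in a few lines (a hypothetical helper of ours; real word-cloud tools also pack the words spatially):

```python
from collections import Counter

def word_cloud_sizes(text, min_pt=10, max_pt=48):
    """Map each word's frequency onto a font size in points.

    The most frequent word gets max_pt, the rarest gets min_pt;
    everything in between is interpolated linearly.
    """
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    counts = Counter(w for w in words if w)
    ranked = counts.most_common()
    lo, hi = ranked[-1][1], ranked[0][1]
    span = (hi - lo) or 1  # avoid division by zero when all counts tie
    return {w: min_pt + (c - lo) * (max_pt - min_pt) / span
            for w, c in ranked}
```

Note that the frequency information is all still present at this stage; the statisticians' complaint is that once those sizes are scattered across a canvas, readers can no longer compare them precisely.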


An example of a word cloud used on Salesforce’s blog.

Yet data visualizers are often asked to make word clouds to summarize a politician’s speech or a company’s report. Why do otherwise brilliant writers and editors want such an ineffective chart? Because they think it looks cool, and it’s different. And if a word cloud gets someone to read a report or article they wouldn’t have otherwise, can we really call it ineffective?

Another example is the pie chart, reviled by serious data experts: a 1984 study outlined how people tend to underestimate the size of acute angles (those smaller than 90°). But pie charts persist because people like circles more than rectangles.
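The arithmetic behind pie wedges is worth spelling out: each value's share of the total maps to an angle out of 360°, so any slice below a quarter of the whole lands in the acute range readers misjudge. A minimal sketch (the helper name is ours, not from any charting library):

```python
def pie_angles(values):
    """Convert raw values to pie-wedge angles in degrees.

    A wedge covering less than 25% of the total spans an acute
    angle (under 90 degrees), the range people judge worst.
    """
    total = sum(values)
    return [360.0 * v / total for v in values]

# A 1:1:2 split yields two 90-degree wedges and one 180-degree wedge.
```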


Bad pie.

There is a large body of scientific research literature on the most effective types of charts vis-a-vis the information. This research suggests that people are better able to compare differences in length than differences in area, and can assess the area of a square more accurately than the area of a circle. Nearly every statistician would recommend a bar chart over a pie chart. But even experts who know this basic rule, like Accurat, will often ignore it—and it can be a good choice.
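The area-judgment problem has a geometric root: a circle's area grows with the square of its radius, which is why area-honest bubble charts scale the radius by the square root of the value. A sketch under that assumption (the helper name is ours):

```python
import math

def bubble_radius(value, max_value, max_radius=50.0):
    """Radius that makes a bubble's AREA proportional to its value.

    Scaling the radius linearly instead would inflate large values
    quadratically, a classic bubble-chart distortion.
    """
    return max_radius * math.sqrt(value / max_value)

# Doubling the value multiplies the radius by sqrt(2), about 1.414x,
# which doubles the area exactly.
```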

An interactive visualization of Barack Obama’s 2013 budget proposal in the New York Times, for instance, is informative and playful. There are aspects of this visualization that don’t follow the rules, particularly the use of circle area rather than bar length to convey differences in spending amounts.

These examples demonstrate the fundamental conflict in data visualization: finding a balance between ingenuity and comprehension, between visual acrobatics and accuracy.

Sticking points

The irony that a design organization like AIGA would publish a report with methodologically “poor” data visualization isn’t lost on critics. After all, graphic designers are often tasked with creating graphics for reports, publications, and presentations.

Few, the author of Show Me the Numbers, argues that there’s something greater at stake when such lapses go unchecked. “This is a publication from an organization that ought to understand this stuff, right? You would expect them to be leaders in this field to some degree,” he says. “This kind of publication just gives designers permission to continue doing it poorly.” Few suggests that we might see an overall improvement in charts if graphic designers involved in making charts studied some basic statistics and statisticians, in turn, learned design principles for better-looking graphs.

Accurat’s Rossi, for his part, underscores the different pressures that plague professional designers compared with academics. He admits that his firm could’ve done a different job with AIGA’s graphs had it been given more resources. “It’s just that you also have to imagine that this is a non-profit project that we did in our spare time, with a fourth of the budget,” he says. Rossi also mentions that he had to work with creative parameters from Google Design, which co-sponsored the project with AIGA. “As professionals, we need to do in two days what an academic researcher might do in two years,” he says.

For Few, among the most egregious aspects of the AIGA Design Census visualization strategy is that readers are compelled to figure out how to read every chart. Variety, in this case, becomes a stumbling block. “If you use various types of graphs to show exactly the same thing, you are forcing people to have to relearn how to read the graph every time they come to a new one,” Few points out. “There’s no reason to put people through that learning curve when they’re reading a report.”

But Rossi says this, in fact, is by design. “We stimulate curiosity by asking the reader to figure out how the chart works,” he says.

In many ways, the fundamental disagreement between statisticians and designers reflects a shift in thinking about what constitutes good design. There was a time when the universal goal was to create seamless flow. Designers were charged with eliminating all obstacles, anticipating the mindset of users along the way. In the popular design reader, The Design of Everyday Things, design guru Don Norman writes that “when the point of contact between the product and the people becomes a point of friction, then the industrial designer has failed.”

In recent years, user-experience experts have begun to change their tune. They realized that a seamless flow of information and tasks results in passive, unmindful customers. Accurat’s intentionally imperfect chart-making underscores the importance of friction to invite participation, curiosity, and engagement—for better or worse.


You have worked hard to create that perfect UI.

Don’t let spelling mistaks ruin it.

Try SPELLL for free

Used by 1700 designers from these top companies and more

Get SPELLL & join them

Give Figma spelling and grammar checking superpowers

With a single click, you can make your Figma file free of mistakes by

  • Fixing every instnce of a typo
  • Ignoring every instnce of a typo
  • Adding words such as your brand name (SPELLL) to your dictionary so they won’t get reported as typos

Get SPELLL & live that typo free life

Supports 43 languages from around the world

Asturian, Belarusian, Breton, Catalan, Catalan (Valencian), Chinese, Danish, Dutch, English, English (Australian), English (Canadian), English (GB), English (New Zealand), English (South African), English (US), Esperanto, French, Galician, German, German (Austria), German (Germany), German (Swiss), Greek, Italian, Japanese, Khmer, Persian, Polish, Portuguese, Portuguese (Angola preAO), Portuguese (Brazil), Portuguese (Moçambique preAO), Portuguese (Portugal), Romanian, Russian, Simple German, Slovak, Slovenian, Spanish, Swedish, Tagalog, Tamil & Ukrainian.

Get SPELLL & use it with your language


  • Unlimited usage for 7 days
  • After trial limited to Figma files with 100 text layers or less


  • 7-day money back guarantee
  • Unlimited usage
  • License for 1 user
  • Fast and premium support

One time payment

Pay once and use it forever

  • 14-day money back guarantee
  • Unlimited usage
  • License for 1 user
  • Fast and premium support



What is that? You use Sketch/XD/Studio?

I am working on bringing SPELLL to these design tools too! Join the waitlist and get early access.


SEO, or Search Engine Optimization, is regarded as a technique for making a website popular on the internet. So, why do you need to make a website popular? Well, there is no use in having a website that hardly receives any visitors. When you have a website, you should be looking forward to welcoming more visitors. Now, there are different sorts of websites: sites that feature personal opinions, tips for readers, or product reviews, and general information-based web platforms. Certain websites intend to sell products or services. Any website would love to enjoy popularity and a growing count of web visitors. For that reason, one needs to opt for search engine optimization.

White Hat and Black Hat SEO

When it comes to search engine optimization, different webmasters have different techniques to offer. Traditionally, it is said that ethical methods of search engine optimization should be followed. Now, what are the ethical methods? Generally, a campaign that follows Google’s guidelines is perceived as ethical in its marketing process. This type of SEO is known as white hat SEO. When webmasters follow unethical ways to promote a website, it is generally known as black hat SEO. So, most business marketing experts would like to avoid black hat SEO.

Why White Hat SEO May Not Be Effective

Every SEO campaign should be judged by its effectiveness. It is important to market your business ethically, but there is no use in marketing that does not fetch any significant results. So, conventional white hat SEO alone may not be enough to fetch excellent results. You need to opt for creative search engine optimization techniques that can fetch the right results for your online business. The ultimate aim should be getting more visitors, as only that can enhance potential business revenue. More visitors mean more potential buyers. So, your business will start earning higher revenue with an increased number of visitors on the website.

  1. Breaking the Stereotype Is Important

When you follow white hat techniques too religiously, you actually miss out on a key thing: creativity. In business marketing, creativity is regarded as the key driving force. Creative advertisements always clinch the attention of potential buyers. So, mixing creativity into an SEO campaign is absolutely important to make the whole campaign more attention-grabbing. When white hat SEO techniques are followed, webmasters repeat only a few techniques. There is no creativity. If you want to draw the attention of others, you need to break the stereotype. You need to explore unique methods to make your business popular through a search engine. So, a conventional white hat SEO campaign often does not fetch the desired result due to a lack of creativity.

  2. White Hat Comes with No Short Term Plans

Business marketing and SEO experts who follow white hat SEO often opine that it is a long-term business marketing plan. This is surely true, as a white hat SEO campaign is rolled out to gain results in the long run. But what should you do for short-term business marketing? White hat followers would often say that short-term SEO is black hat SEO. In reality, it is not necessarily unethical or black hat. Every business has to follow a short-term model to gain popularity. So, along with the long-term plan, there should also be short-term plans for business marketing. This is why following white hat blindly would not fetch any profit in the short term. To gain short-term results, you need to look beyond white hat techniques and try creative techniques for better SEO campaign results.

  3. Black Hat SEO Is Not Hacking

There is no doubt that a few black hat techniques are completely unethical and would fetch dire consequences for your website. Google will ban your website if it finds that you have indulged in black hat techniques. For example, hacking competitors or other websites to gain marketing leads for your business is surely unethical. But it needs to be remembered that black hat is not all about hacking. There are a few such techniques that can be used to fetch good business marketing results. Trying something beyond the guidelines of Google can be essential to fetch better business marketing results.

  4. Google Guidelines Are Not the Ultimate Thing

Though following Google guidelines is highly recommended to SEO experts and business marketing campaigners, you need to find something beyond those guidelines. Those guidelines are just guidelines, not rules. White hat experts often take Google guidelines as rules and hardly think beyond them. So, it becomes difficult to beat rivals. To beat your rivals, you need to follow some creative and unique business marketing techniques and tactics. This will eventually help you gain formidable success with your SEO campaign.

  5. Myth: Only White Hat Can Generate Value

Today, it is a myth that only white hat SEO techniques can generate value for your online business. You will eventually find that methods beyond white hat SEO can also generate value. In fact, you may not be able to create much value with simple white hat SEO techniques alone. You need to think beyond white hat techniques to fetch more value. This is why you need to follow creative ideas for business marketing success.

  6. Anything beyond Google Guidelines Is Not Unethical

Business owners and webmasters have to understand that not everything beyond Google’s guidelines is unethical. Creativity has no limits, but you need to follow ethical techniques and ideas. You should not try to outsmart Google, as Google’s algorithm is smarter than you. You should focus on creative ideas beyond the conventional ones for a business marketing campaign built on SEO techniques.

Overall, it needs to be stated that SEO has to be ethical, and there should be a perfect balance between different SEO techniques and methods. You need to move beyond the idea of white hat SEO alone. Maybe it is high time to change your hat! Get in touch with New York SEO to learn the accurate details of white hat SEO.


This sad story tells how we, the Norde design agency, lost our team account, which was one of the most popular on Dribbble.

I worked as a freelance designer for quite a long time, and almost all my growth, including starting an agency, revolved around Dribbble. First of all, I want to mention that Dribbble is a unique community that allows any designer to make progress and be in demand, regardless of work experience, place of residence, language, background or education. I cannot imagine a designer who would not benefit from regular posting on Dribbble:

  • Want to test some design ideas or style? Show them in uncluttered and straightforward shots.
  • Want to gain more clients as a freelance designer? Post a lot of simple, exciting and clear works.
  • Want to improve a skill? Practice more in your shots and watch the dribbbler’s reaction.
  • Want to find a job as a designer in a promising US or European company? Create meaningful design concepts.
  • Want to find a job at an agency? Make sure you do diverse work and do a lot of work.
  • Want to sell your design products? Offer them in your shots.
  • Want to understand what your true value is? Dial your ego down and compare how people rate your works with how they rate someone else’s shots.

On the whole, Dribbble can be incredibly beneficial to any designer.

You can often come across critical articles and individual opinions about the “dribbblization” of design, that is, about its movement towards primitivization and neglect of the end user. But for me, the community has always been a source of inspiration, and most importantly, it taught me to generate my own ideas in design.

Whether we like it or not, design is a world of copying, borrowing and iterating, and here, like nowhere else, we should value ingenious ideas and the people who can produce them.

I have come to see Dribbble as a kind of “haute couture” of design: not a platform for portfolios where designers exhibit their current works, but a home for exciting ideas and a demonstration of designers’ potential.

I am still hiring designers and illustrators and I find the best ones on Dribbble.

I’m going to start from the very beginning and tell you how I became part of the Dribbble community. I assume that if you are reading this article, you are a member of the Dribbble community and very likely have gone through, or maybe are going through, the same feelings.

In the beginning, I posted all the good works I had accumulated over a long time. Here you go, Dribbble and Behance! So, I posted them. No effect. Whatsoever. Not a single ‘great work!’, ‘awesome!’ or even ‘I love it! Check out my works’. I was so mad. My designer ego huddled itself up in the darkest corner, and it hardly fitted in. It took me a long time to realise that some local success didn’t give me any right to think I would be in demand as a designer in the global market.

The start of my growth on Dribbble coincided with my moving to London, so having got over my first failures, I decided to try again. I had to enter the global market; I couldn’t see any other way out: I didn’t want to work in an office, and living in London while working remotely with Russia seemed quite weird.

I started to look carefully at the works of other designers that clients were interested in, and I also watched the growth dynamics of the authors of hot shots. I had mixed feelings, from bewilderment and indignation to admiration (the latter was a rare thing — remember my ego).

I began to make feeble attempts to become part of this world, but it was excruciating to realise that I was not worth anything as a designer at 27.

When I started seeing some results, when the audience began to grow very slowly, I would occasionally fall into apathy, because I believed that no effort or good work could make me popular, or even a little popular, given the fierce competition. But in fact, these were just the realities of the global market; I had been living in my comfortable little bubble and did not want to notice anything around me.

Starting out on Dribbble was hard, long and complicated; it stretched over many months, and only a couple of years later was I ready to admit that it had not all been in vain.

The stage of sustained growth began; I grew as a designer, tried different approaches in design, tried to find my own style. Over time, I began to feel confident as a freelance designer from a financial point of view, as there was no shortage of work.

Soon I began to delegate part of the work to fellow designers, illustrators and web developers, and it became obvious to me that it was time to move all my activities from freelance to a full-fledged business. It was a bit scary, but I had the example of Haraldur Thorleifsson, who used Dribbble to help grow Ueno into one of the best agencies on the market today. Haraldur’s article about his success on Dribbble inspired me to start Norde.

Several months later, Norde began to grow fast, and most of my time was occupied not by design but by searching for and hiring designers. That was quite an odd feeling: I was in the design industry but dealing with design much less.

By the way, there is one crucial observation that designers most likely do not know about Dribbble (unless you have gone through significant growth there): the best clients you can imagine come from Dribbble. The majority understand design (apparently, they like yours), they are attentive to details, and they know the cost of good-quality design, how the design process should go and how long it takes.

We worked seven days a week for a year, hired people and worked a lot with designers and illustrators on a contract basis. But even so, I tried to devote much time to our Dribbble account, producing new ideas and concepts for the community.

We were growing successfully until last October, when our account got suspended. It was the second and final suspension, due to some complaints received against Norde. I spent a good deal of time trying to get customer support to reveal the details and sources of those complaints, but no luck. At some point, they just stopped replying to us. I tried to contact Dribbble’s CEO and founders, but got no response. Nevertheless, we did not breach any rules of the community; the decision was private, outside the official rules. And we were not even refunded the fee for the Dribbble team account.

I don’t want to blame Dribbble, because the community gave me so much. Six months later, I recall that experience composedly, but at that moment I was overwhelmed by various emotions. It was excruciating to fire a few of our full-time designers and end contracts with some remote contractors. I was angry with Dribbble because the decision was one-sided; they didn’t give us any chance to appeal. It hurt that five years of constant contributing to the community — the hundreds of shots I worked on, the designers I campaigned to become active members on Dribbble, the talks at conferences and meetups about the benefits of Dribbble — couldn’t give me any right to appeal the decision and talk to someone from Dribbble.

What you can see today is a dull grey profile left to remind you of us. Besides, Google Analytics still counts visits to Norde’s shots, and the number of daily views is over 3,000.

I wrote this post for the people who supported Norde and asked us what happened. It is also for designers and design agencies who might repeat our mistakes. I have learned some lessons from this experience and am ready to share them with you:

  • Being different is essential.

    In a fast-paced and highly competitive world, you need to be very different from others to be noticed. This very principle formed the basis of Norde’s growth, and now I will follow it in everything: each aspect of work and life can be done a little differently.
  • Don’t put all your eggs in one basket.

    I was so impressed by clients from Dribbble, whom I could call like-minded people, that I didn’t even pay attention to other sources of leads.
  • Don’t put all your eggs in one basket, part 2.

    Hard work on current projects didn’t allow us to raise our heads and think thoroughly about anything other than design, or to work on our own in-house products and think about something bigger than an agency.

Five years of being an active member of Dribbble, five years of posting shots and promoting “designer’s freedom” via Dribbble. This stage of my life has ended. That is why I think I have enough knowledge to tell you frankly about my growth on Dribbble, its mechanisms and many of its processes, including the financial aspects for designers as well as for agencies. Let me know if you are interested in reading this.


Recently, MongoDB held a design meetup that focused on imposter syndrome and, more importantly, how we deal with it as designers. I was asked to speak on behalf of my own experience and share advice for those who may also be in this headspace.

Personally, I suffered massively from these feelings of self-doubt when I was just starting out in product design two years ago. While I can’t say it no longer affects me at all, I can say with confidence that it doesn’t bother me with the same magnitude or frequency as it used to. I hope the advice that I’ll be sharing helps those of you who are trying to overcome it yourselves.

Illustration by Sidecar

1. Everyone starts off as a beginner. Don’t compare your timeline to someone else’s.

Tying it back to the definition, imposter syndrome is simply another term for the experience of navigating your career as a beginner. It’s a psychological phenomenon that arises from an incorrect assessment of one’s abilities compared to one’s peers: the panic that comes from the feeling of not knowing enough, and the fear that others will find out what you don’t know.

I’m sure this may be the case for those of you who are trying to transition into this industry, or perhaps those of you who are in your first roles.

Illustration by Udhaya Chandran

For me, this hit home most when I was switching into design as a college senior in late 2017. After realizing I was no longer as passionate about a career in computing as I had originally thought, I decided to pursue product design when I was on the cusp of graduating from university. At the time, many of my colleagues were deciding between multiple job offers from amazing companies, had been accepted into prestigious graduate schools, or were wrapping up their interview phases.

As someone meticulous about planning various aspects of her future, I felt hopeless. I had thought that after three years of college, those arduous semesters would amount to a solid foundation for my career. However, the work experience I had accumulated until that point consisted mostly of software engineering internships. Additionally, I had no concrete plans after school and felt massively behind those who had studied a design discipline in college.

“Everyone starts off as a beginner. Don’t compare your timeline to someone else’s.”

Illustration by Manoj Jadhav

However, I knew that sulking about my situation was not going to get me anywhere. I accepted that my timeline just happened to run a bit behind others’. While some of my peers knew they wanted to be rocket scientists before they got to college, I happened to find out what I wanted to do later. I was starting fresh and anew, and I didn’t have to be so hard on myself. I just had to focus on what I could control: putting in the work.

2. Keep learning and constantly create. Hold your standards high.

Product design is a field that encapsulates many other disciplines of design: visual, information, interaction, graphic, and user experience. It’s a balance of and consideration between these different pillars that truly breeds a great designer. Not to mention, crafting thoughtful, intuitive designs is never a one-person job. Learning how to consider business needs and user needs with product managers, engineers, and stakeholders is imperative to crafting the best designs. Thus, there’s always a plethora of knowledge to learn and grow as you navigate your career.

“By raising your standards, you’ll create a reputation for yourself and your projects.”

Luckily, we live in a digital age where there are a ton of resources online. My personal favourites include the Muzli Chrome extension, where new design inspiration and news are shown whenever you open up a new Google Chrome tab; Medium, a platform where writers from some of the best design teams share their processes and practices; and Dribbble, where you can get inspired by some of the most beautiful interface and graphics by creators worldwide.

Homepage of Dribbble

Most days though, I gravitate towards a good old-fashioned book. Reading has always been a favourite pastime of mine, and there are classics that all designers are recommended to read. To name some recent books I’ve really loved: “Ruined By Design,” which discusses designers’ responsibilities in our technology-ridden world and the importance of using our powers in the most respectable ways; “The Great Discontent,” a magazine series filled with interviews with individuals from a variety of creative disciplines; and “So Good They Can’t Ignore You,” which has inspired me to find happiness in the process of mastering my craft.

Illustration by Ed Craddock

Always strive to up your skills as a designer and to raise the bar for what you consider “good work.”

3. Practice again and again. Trust the process (and handle rejections like a boss).

This is one of my favourite quotes of all time, probably because it’s held a lot of truth in my own journey. Advice and knowledge are most valuable when you take action.

Illustration by Sidecar

That senior year of college, I can’t remember the number of times I stayed up late to work on my portfolio. I remember cranking out mockups on Sketch until my hand cramped and thinking about how to solve mock interview prompts on my commutes home. I also applied to any designer openings I saw and cold-emailed designers I admired to ask for their time.

“Luck is preparation meeting opportunity.”

While it felt like my efforts were going to waste early on, I started to notice myself improve. I felt so much more comfortable talking about my former projects, and I no longer felt clueless when a company sent over a product design challenge. Rejections still stung, but confidence was slowly emerging in the face of my dedication.

During my job hunt, I kept a spreadsheet of all the companies I applied to. I’m pretty sure I still have it saved on my laptop at home. If I had to guess, there were probably close to 75 rejections; some companies didn’t even bother getting back to me on the status of my application.

Illustration by Sevdenur Ozkan

It wasn’t until six months later that I received my first offer: a summer product design internship at BuzzFeed. I still remember jumping up and down after the phone call with the hiring manager that day in March. It felt like a total dream, as though my hard work had finally paid off. A short two weeks afterwards, I received my second offer: a fall product design internship at Wish. Again, I was pinching myself, hardly believing that I was seeing the light at the end of the tunnel. Although these two offers weren’t for full-time positions as I had intended, I was still ecstatic knowing that I was headed in the right direction. If I had given up early on, I would never have seen the day I landed amazing opportunities I could only have dreamed of half a year earlier.

4. Take full advantage of the opportunities around you. Everyone’s willing to help.

Remember how I mentioned I had cold-emailed designers I admired to chat? I’d say about 70% of them usually responded within a week and agreed to hop on a video call. I’d listen to their own stories of their journeys as designers, ask for comments on my portfolio, and practice presentations with all of these wonderful people who offered their time. You will never know what kind of help you can receive if you never ask.

After every session, I’d make revisions to my work and my interview skills got better and better. If it weren’t for these conversations, I don’t know if I would’ve gotten the necessary feedback I needed to improve in the right areas and land my internships, and ultimately my job here at MongoDB.

“You will never know what kind of help you can receive if you never ask.”

Illustration by Siyang

At every internship I’ve had, I’ve also always tried to grab coffee with everyone I work with and those around the office. Learning about everyone else’s backgrounds and experiences not only helps build relationships, but it also teaches you so much about what they’ve learned and can share.

5. Choose to work at a place with people that support and believe in you.

The average person will spend 90,000 hours at work over a lifetime. Without a doubt, your coworkers are the people you’ll interact with the most out of the entire week and maybe the rest of your life. Make sure you choose to place yourself in an environment where you’re valued and lifted.

I’m grateful to be surrounded by peers and mentors here at MongoDB that make me feel supported in all of my endeavours. I’ve gotten approval for proposed side projects in collaboration with different teams. I’ve had my own blog posts shared by others on LinkedIn. I’ve even gotten applause after a presentation at a design review meeting.

“Make sure you choose to place yourself in an environment where you’re valued and lifted.”

Illustration by Uran

Small and big wins shared with the individuals around me are what make coming to work that much more enjoyable for me every day. Although I know it is a privilege to choose between different jobs or companies, if you have the choice, just make sure you champion the importance of who you get to work with.

With that being said, I’d like to conclude this article with a question for you readers — what actions will you take to overcome your imposter syndrome?

Special thanks to the rest of the design team who came out to support me at my first talk. Also, thanks to Sean, our VP of Design, who extended me the opportunity to give this talk, and Dan Zhu, who organized the event.

“If you have the choice, just make sure you champion the importance of who you get to work with.”

All opinions, views, and advice shared here are my own and do not represent an official statement by my employer.

Originally posted on Michelle’s Medium page.


One of the problems with coining a term like “user experience” or its acronym counterpart “UX” is that it opens up the floodgates for other trendy experience-related acronyms to enter the web design lexicon.

CX, DX, EX, HX, JX, PX, UX, (U)XD…

Is all of this really necessary though?

While I don’t think you need to go adding EX or JX to your vocabulary anytime soon, it’s still a good idea to educate yourself on what these X acronyms mean and how to use them to your advantage in business.

The X’s of Web Design and Marketing

The two most common experience acronyms in web design and marketing are UX and CX. What you may be surprised to learn, however, is that the “X” in these acronyms doesn’t always stand for “experience” nor does it always pertain to the end customer.

Let’s review what each of the X acronyms means and then we’ll talk about which ones you actually need to worry about and use.

Customer Experience (CX)

CX refers to the quality of interactions a customer has with a brand, from the very first encounter to their very last. As such, customer experience is the most important of all the X’s to monitor, measure, and maintain.

Think about all of the places where the CX could go off the rails:

  • A broken form on the website dissuades them from trying to connect with a brand;
  • A support representative fails to respond in a timely fashion, leaving the user feeling helpless;
  • The customer makes a purchase every month for two years, but has noticed a degradation in quality over time.

This is why it’s so important for businesses to have a game plan from Day 1 — especially one that ensures a consistent delivery of products and services throughout the lifetime of a customer relationship. Any misstep in CX could cost a brand a customer’s business and loyalty.

Digital Transformation (DX)

DX refers to a technological evolution within a company. Although it’s not a term you commonly hear thrown around, it’s happening around us all the time.

If you’ve ever made a digital shift within your own business (say, from one OS to another or from a manual process to one that’s automated), you know what far-reaching effects it can have. Your time, money, and sometimes even your clients can be impacted by the change if you don’t prepare for it in advance.

Imagine what happens when it’s not just a sole business owner or freelancer who’s affected by a digital transformation.

Emotional Experience (EX)

There are two ways in which “EX” may be used in design or marketing. This is one way.

Think of emotional experience as a subset of user experience. Instead of focusing on developing a clear set of steps that take a user through their journey, EX design and marketing focus on the elements that evoke strong emotions: powerful color palettes, nostalgic images, and messages of urgency.

Any time you build something with the intent of pulling on someone’s emotions, that’s emotional experience design — and it’s a really common thing we do today, even if we don’t all go referring to it as EX.

Employee Experience (EX)

This is the second use of EX you may encounter, though it’s not very likely unless you’re working in a digital agency environment. Even then, this is the kind of term that only corporate might use.

While it might not be a commonplace phrase, the concept is a good one to flesh out, whether you work in a team atmosphere or you have aspirations of hiring your own team someday. All employee experience really refers to is how team members feel about and respond to a work environment and their organization as a whole.

Essentially, EX is UX for an internal organization. And by researching what employees want, collecting feedback on how they feel, and reviewing data on their productivity and job satisfaction, companies can effectively improve the employee experience — which should have a trickle-down effect to CX.

Human Experience (HX)

I’ve heard it said that HX is all about taking UX and CX to a new level.

Even though UX and CX are both meant to create a more pleasing end user experience, the belief is that there’s still too much focus on the technology instead of the humans we should be serving, and that it’s only when we stop focusing on how technology can attract, convert, and please more customers that we can fulfill the real purpose of a company.

While honesty, transparency, and ethics are the kind of ideals every brand should strive for, it’s not always realistic to prioritize them what with how difficult it is to convince users to convert. There’s just too much information competing for their attention right now. So, while it’s nice to think about being able to market and sell a company to human beings instead of generalizing them as “users” or “customers”, that’s just not feasible for newer and smaller companies.

That said, I think HX is still a worthwhile concept to keep in mind. While you might not be able to do much with it now, it can certainly be a game-changing differentiator once a brand has long been established.

Job Transformation (JX)

JX and DX go hand-in-hand.

Basically, as companies adopt more and more digital solutions, and those solutions become more complex (thanks in part to AI), jobs are going to change. So, rather than hire IT specialists who can manage on-site hardware and software, businesses will be looking for AI specialists and cloud service providers who can help them make the most of their all-digital operation.

Partner Experience (PX)

PX may refer to one of two things. For this one, the partner in the experience could be a business partner, product supplier, SaaS provider, etc. Basically, any third party who you have a relationship with.

As far as web design and marketing goes, PX can affect you in a number of ways.

For example, say you manage web hosting on behalf of your clients. You notice that a client’s site has gone offline, so you reach out to the customer support representative from the web hosting company, but they’re either non-responsive or have no clue what’s going on. Who do you think your client is going to be upset with? No matter how much you try to pass the buck, you’re the one who set yourself up as the go-between, so it’s going to fall on you.

Now, let’s say you’re a solo web designer and want to partner with a copywriter since clients keep asking for help in that area. In that case, PX could affect you in a similar fashion. If the writer were to fall short in their duties (or vice versa), not only would your relationship with them be compromised, but the relationship between you and the client would as well.

Bottom line: the relationships you have with partners and suppliers play a critical role in your success, so you do need to spend time focusing on those experiences.

Public Experience (PX)

PX, in this instance, is more likely to be used by agencies that specialize in branding and market research. That’s because this one has to do with how a brand is perceived by society. And all of the other acronyms contribute to it.

For instance:

  • An employee believes they were unfairly fired and puts the company on blast on Facebook. It gets picked up by a major news source and the story goes viral.
  • A website is hacked the day before Black Friday, leaving thousands of users without a place to buy all of the gifts they were hoping to get on sale that holiday season.
  • A company releases a new app which parents are calling for a ban on because it reinforces unhealthy stereotypes.

From the product itself to how the company engages with the public, there are many ways in which the PX may be affected. While each of the contributors — including you the web designer — have to be cognizant of how their choices and actions may affect the public image of a brand, it’s more likely the branding team will need to worry about PX.

User Experience (UX)

You’re probably already familiar with UX. This is the term we use to describe how a user (visitor) feels as they walk through a website or app, and how each step they take and each interaction they make adds up to an overall experience.

In order to “create” a user experience, designers, developers, writers, and marketers need to be able to step inside the shoes of their users and build a journey tailor-made for them. I’ll explain in more detail how that happens in the next point.

(User) Experience Design (UXD)

The subject of user experience design is a common one discussed here.

UXD is a discipline that requires a lot of research, attention to detail, and testing. And the end result is a website or app that’s highly usable, accessible, and enjoyable. That’s because every element, step, and interaction has been carefully thought through. And not only that, the experience is constantly reevaluated, tested, and updated to continually serve the end user.

As far as you’re concerned, I’d say that UX/UXD is the most important acronym for you to concern yourself with.


The fact of the matter is, there’s a lot of value in accepting the underlying principles of these acronyms. However, I’m not sure we need to make “designer speak” sound any more complicated than it already is.

After all, your clients don’t want to hear you talk about how DX is affecting the way we build the UX of websites. They want real speak. They want to know what exactly you’re going to do for them; not spend extra time asking you to elaborate on what all of that design jargon means.

Plus, if you do get caught up in all of these “experiences”, you might not get anything done. What I’d suggest is to focus on the ones that matter:

UX — even if you’re not an official UX designer by trade — is incredibly important.

CX is another must, though the only CX you can fully control is your own. You’ll have to trust that the clients you work for will deliver the rest on their end.

I also think DX is a good one to keep in the corner of your mind.

Technological advancements aren’t going to stop anytime soon and you’re working in a field where the tools you use and the tech that affects your business are constantly changing. So, while you might not talk about “DX”, you do need to accept that it’s going to have a profound effect on how you work, how you develop processes, and what you’re able to do for clients.

Like I said earlier, the underlying concepts of each of these X acronyms are valid and do hold some value for you as a web designer. As you work on growing your business — by adding more services, hiring employees, upgrading your tech — it would serve you well to keep these in mind to ensure you maintain a positive experience across the board.

Featured image via Unsplash.



Product design is a worthwhile endeavor, and a lot of professionals are into it, not just making a few bucks but actually making a living off it. The main challenge lies in acquiring the skill: product design does not come cheap. You have to spend a lot of time learning, practicing, and making mistakes. Once you become skilled at it, however, it becomes rewarding.

There are actually many ways you can make money as a product designer. You just have to follow a path and stick to it until you succeed. Let’s take a look at some of the ways you can earn money with your product design skills.

The conventional and most popular way of making money as a product designer is to get a job in a firm where your skill is needed. In fact, most product designers usually start with this option. One good thing about this and why it is so popular among designers is that many businesses and corporations require designs, and they require them in-house.

One of the advantages you will have getting a job as a designer is that you will have the opportunity to improve your skills by working with other designers more experienced than you are. You’ll agree with me that going solo is not the best for a new designer, and getting a job is a good option since you won’t be making mistakes that will get to the client.

There is a prediction that in 2020, 50 percent of the United States workforce will engage in freelancing whether part-time or full-time. And guess what, designers are some of the early birds in the world of freelancing. In fact, many businesses do not actually have in-house designers, and as such, they look out for freelancers to outsource the job to.

When we talk of freelancing, there are two routes you can take. You can go to freelance websites like Upwork, Fiverr, Freelancer, and other sites and get gigs there. The other option is to find clients independently from other sources, which could be offline or from social media sites and design websites like Dribbble and Behance.

Yes, you may not be aware of it, but designers with the right skills can also serve as consultants. However, you cannot be a consultant with little to no experience. To provide consulting services, you have to be at the top of your field; nobody pays an amateur to consult for their business.

However, when you get to that level, the opportunities become endless. You’ll have the option of consulting for any number of product design teams that need professional guidance, and best of all, you are not actually designing, just giving guidance. And needless to say, there is big money in consulting.

If you look at the four methods discussed above, they are all centered around working on other people’s products. The question now is, why not get creative and create your own product? After all, you are a product designer. I have seen designers create their own products and sell them for profit, and they are making good money.

Some get creative with merchandise; others bring a team together and build a design-inclined tech product, and it works out. The major problem with going this route is that you need a solid go-to-market strategy in place because, frankly, making a product is just one part of the equation; selling it is the most important part.

Before getting my feet rooted in the industry, I did a lot of learning, and one of the ways I learned was by reading blogs. I learned a lot from the expert knowledge on the blogs I visited back then. I have slowed down on reading blogs since, but I cannot forget how helpful they were to the success of my career.

The Plant blog you are reading right now is itself a design blog centered on core design, even though our product, Plant, is a version control system for designers that integrates well with Sketch on Mac. If you have the required knowledge, you can start sharing it through a blog and make money from premium courses, display ads, mentorship, and affiliate marketing.

While blogs are some of the major sources of information in the current world, we cannot deny that books hold more information than blogs. If you are well versed in a particular aspect of design, you can simply write a book about it and get it out to the world. Unlike in the past, when you needed a publisher to get your book to market, with platforms like Amazon Kindle Direct Publishing you can get your book out in no time.

However, make sure your book is worthy of being published; otherwise, it will be filled with bad reviews. If you do not want to publish on Amazon, you can sell it yourself if you already have a fan base. Marketing is critical here because, frankly, there are many other design books out there, and people need to know why they should read yours and not someone else’s.


Mozilla has warned Firefox users to update their browser to the latest version after security researchers found a vulnerability that hackers were actively exploiting in “targeted attacks” against users.

The vulnerability, discovered by Chinese security company Qihoo 360, was found in Firefox’s just-in-time (JIT) compiler, which speeds up the execution of JavaScript to make websites load faster. But researchers found that the bug could allow malicious JavaScript to run outside of the browser on the host computer.

In practical terms, that means an attacker can quietly break into a victim’s computer by tricking the victim into accessing a website running malicious JavaScript code.

But Qihoo did not say precisely how the bug was exploited, who the attackers were, or who was targeted.

Browser vulnerabilities are a hot commodity in security circles because they can be used to infect vulnerable computers — often silently, without the user noticing — and to deliver malware or ransomware. Browsers are also a target for nation states and governments, which use surveillance tools known as network investigative techniques, or NITs. These vulnerability-exploiting tools have been used by federal agents to spy on and catch criminals. But they have drawn ire from the security community because the feds’ failure to disclose the bugs to the software makers could let bad actors exploit the same vulnerabilities for malicious purposes.

Mozilla issued the security advisory for Firefox 72, which had only been out for two days before the vulnerability was found.

Homeland Security’s cyber advisory unit, the Cybersecurity and Infrastructure Security Agency, also issued a security warning, advising users to update to Firefox 72.0.1, which fixes the vulnerability. Little information was given about the bug, only that it could be used to “take control of an affected system.”

Firefox users can update their browser from the settings.


Jan 07, 2020 | Tutorial

Excellent Free Tutorials to Learn Swift

Swift is a powerful and intuitive general-purpose programming language for the OS X, iOS, watchOS, and Linux operating systems. It’s developed by Apple Inc. Swift is intended to be more resilient to erroneous code (“safer”) than Objective-C, and more concise.

Swift is a new language, first appearing in 2014. It’s friendly to new programmers, feels familiar to Objective-C developers, and the language is optimized for development. It was launched under a proprietary license, but Apple made the language open source in December 2015 by releasing Swift 2.2 and later under the Apache License 2.0. By open-sourcing Swift, developers are able to use the language for their own purposes and go beyond OS X, iOS and watchOS apps.
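As a small illustrative sketch of that safety (not taken from any of the tutorials below), Swift’s optionals make “no value” a case the compiler forces you to handle:

```swift
// Int(_:) returns an optional: nil when the string is not a number.
let parsed = Int("42")            // Optional(42)
let failed = Int("not a number")  // nil

// `if let` unwraps safely; there is no silent nil dereference.
if let value = parsed {
    print("Parsed \(value)")
}

// Optionals must be unwrapped before use, so this class of
// missing-value bug is caught at compile time rather than at runtime.
print(failed == nil)
```

This is the kind of design decision that makes Swift more resilient to erroneous code than Objective-C, where messaging nil silently does nothing.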

As of December 2019, Swift ranks 10th on the TIOBE Programming Community index, an indicator of the popularity of programming languages.

Here are our recommended tutorials for learning Swift.

1. Swift tutorial for beginners by Fabio Rocha

This tutorial offers a small overview of Swift by creating a small “Hello World” app. There’s also a second part.
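To give a taste of what a first program looks like, the canonical Swift “Hello World” is a single line (a minimal sketch, not code from the tutorial itself):

```swift
// The entry point of a Swift script: no boilerplate required.
print("Hello, World!")

// A small function shows the basic declaration syntax.
func greet(_ name: String) -> String {
    return "Hello, \(name)!"
}
print(greet("Swift"))  // Hello, Swift!
```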

Read the tutorial

2. Swift: Making Web Applications Secure by Construction by K. Vikram

This is a comprehensive piece that walks you through the whole routine of prototyping a web application from scratch. It’s divided into eight parts, each of which has its own dose of explanation and advice.

Read the tutorial

3. Swift by Example by Brett Bukowski

Learn Swift by 24 examples. All good material.

Read the examples

4. Swift Tutorial Part 1: Expressions, Variables and Constants by Lorenzo Boaro

This is a mini-series on getting started with programming in Swift. In this series, you’ll learn some Swift programming basics using playgrounds.
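The basics that the series starts with look roughly like this (an illustrative sketch, not taken from the tutorial):

```swift
// Constants are declared with `let` and cannot be reassigned;
// variables use `var`.
let maximumLoginAttempts = 3
var currentAttempt = 0
currentAttempt += 1

// Types are inferred, but can also be annotated explicitly.
let pi: Double = 3.14159

// String interpolation embeds expressions in strings.
let status = "Attempt \(currentAttempt) of \(maximumLoginAttempts)"
print(status)  // Attempt 1 of 3
```

Playgrounds in Xcode let you run snippets like this line by line and see each result immediately.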

Read the tutorial

5. Swift Tutorial: Building an iOS application

This series of tutorials takes the reader on a journey of building an iOS app in Swift.

Read the tutorials

6. A nice collection of often useful examples done in Swift

A collaborative list of awesome Swift libraries and resources.

Read the tutorial

7. Beginner Series: Swift Tutorial by codeconquest

Swift Tutorial introduces you to the language, describes all the finer details of Swift, and helps you begin working with it.

Read the tutorial


SwiftSyntax : generate and transform Swift source code

Swift Operators: Precedence and Associativity


So I was recently asked why I prefer to use free and open source software over more conventional and popular proprietary software and services.

A few years ago I was an avid Google user. I was deeply embedded in the Google ecosystem and used their products everywhere. I used Gmail for email, Google Calendar and Contacts for PIM, YouTube for entertainment, Google Newsstand for news, Android for mobile, and Chrome as my web browser.

I would upload all of my family photos to Google Photos and all of my personal documents to Google Drive (which were all in Google Docs format). I used Google Domains to register my domain names for websites where I would keep track of my users using Google Analytics and monetize them using Google AdSense.

I used Google Hangouts (one of Google’s previous messaging plays) to communicate with friends and family and Google Wallet (with debit card) to buy things online and in-store.

My home is covered with Google Homes (1 in my office, 1 in my bedroom, 1 in the main living area) which I would use to play music on my Google Play Music subscription and podcasts from Google Podcasts.

I have easily invested thousands of dollars into my Google account to buy movies, TV shows, apps, and Google hardware devices. This was truly the Google life.

Then one day, I received an email from Google that changed everything.

“Your account has been suspended”

Just the thing you want to wake up to in the morning. An email from Google saying that your account has been suspended due to a perceived Terms of Use violation. No prior warning. No appeals process. No number to call. Trying to sign in to your Google account yields an error and all of your connected devices are signed out. All of your Google data, your photos, emails, contacts, calendars, purchased movies and TV shows. All gone.

I nearly had a heart attack, until I saw that the Google account that had been suspended was in fact not my main personal Google account, but a throwaway Gmail account that I created years prior for a project. I hadn’t touched the other account since creation and forgot it existed. Apparently my personal Gmail was listed as the recovery address for the throwaway account and that’s why I received the termination email.

Although I was able to breathe a sigh of relief this time, the email was a wake-up call. I was forced to critically reevaluate my dependence on a single company for all of the tech products and services in my life.

I found myself to be a frog in a heating pot of water and I made the decision that I was going to jump out.

Leaving Google

Today there are plenty of lists on the internet providing alternatives to Google services, though the “DeGoogle” movement was still in its infancy when I was making the move.

The first Google service I decided to drop was Gmail, the heart of my online identity. I migrated to Fastmail with my own domain in case I needed to move again (hint: glad I did, now I self host my email). Fastmail also provided calendar and contacts solutions so that took care of leaving Google Calendar and Contacts.

Here are some other alternatives that I moved to:

Migrating away from Google was not a fast or easy process. It took years to get where I am now, and there are still a couple of Google services that I depend on: YouTube and Google Home.

Eventually, my Google Homes will grow old and become unsupported, at which point hopefully Mycroft devices will have matured and become available for purchase. YouTube may never be replaced (although I do hope projects like PeerTube succeed), but I find the compromise of using only one or two Google services acceptable.

At this point losing my Google account due to a mistake in their machine learning would largely be inconsequential and my focus has shifted to leaving Amazon which I use for most of my shopping and cloud services.

The reason that I moved to mostly FOSS applications is that it seems to be the only software ecosystem where everything works seamlessly together and I don’t have to cede control to any single company. Alternatively I could have simply split my service usage up evenly across Google, Microsoft, Amazon, and Apple but I don’t feel that they would have worked as nicely together.

Overall I’m very happy with the open source ecosystem. I use Ubuntu with KDE on all of my computers and Android (no GApps) on my mobile phone. I’ve ordered the PinePhone “Brave Heart” and hope to one day be able to use it or one of its successors as a daily driver with Ubuntu Touch or Plasma Mobile.

I don’t want to give the impression that I exclusively use open source software either; I do use a number of proprietary apps, including Sublime Text, Typora, and Cloudron.


There are plenty of opportunities for friction in the user experience when logging in, particularly while entering a two factor authentication code. As developers we should be building applications that support the need for account security but don’t detract from the user experience. Sometimes it can feel as though these requirements are in a battle against each other.

In this post we will look at the humble input element and the HTML attributes that will help speed up our users’ two factor authentication experience.

The default experience

When you implement two factor authentication for a web application, perhaps with the Authy two factor authentication API, you will need a form for your user to input the one time password you are going to send them. You might create something similar to the following HTML:

<form action="/sessions/check-2fa" method="POST">
  <label for="token">Please enter the code you were sent:</label>
  <input type="text" name="token" id="token" />
  <button type="submit">Check token</button>
</form>

This is a good experience already. The input has a name and a unique ID, and the label uses the correct for attribute to explicitly associate with it, which is important for accessibility. You could also wrap the label around the input for the same effect. With a bit of CSS, this might look something like this:

A web page shown in iOS Safari with a two factor authentication prompt. The standard alphabetical keyboard is open.

However we can progressively enhance this experience with just a few more attributes.

Getting the right keyboard

On mobile devices, or any device with an on-screen keyboard, the first thing to notice is that we are presenting the full alphabetic keyboard. One time passwords are made of numeric characters, so it would be much better to present the user with a number pad.

You might think that switching the type of the input from “text” to “number” is the solution here:

    <input type="number" name="token" id="token" />

You would be wrong though. This does trigger a different keyboard on iOS, but it still includes a number of useless keys.

A web page shown in iOS Safari with a two factor authentication prompt. This time the keyboard includes numbers and symbols.

Changing the type of field changes the way the browser interprets that field. It could cause errors too; if the two factor authentication code starts with a zero a number field may drop that leading zero.
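To see why, here is a toy illustration (my own, not from the article, and not the browser's actual logic) of what happens once a code is treated as a number instead of a string:

```javascript
// Toy illustration: treating a one time code as a number rather than
// a string silently drops any leading zero.
const code = "042";

const asNumber = Number(code);       // the numeric value 42
const backToString = String(asNumber);

console.log(backToString);           // "42": the leading zero is gone
console.log(code === backToString);  // false
```

The round trip through a numeric representation is lossy, which is why one time codes should stay strings.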

inputmode
The inputmode attribute changes the keyboard the browser should display without changing the meaning of the data the field collects. We want our input to receive text input, but from the numeric keyboard. So instead, add inputmode="numeric":

    <input type="text" name="token" id="token" inputmode="numeric" />

inputmode has a number of other values, including “tel” for telephone numbers, “email”, “decimal”, “url”, “search” and “none” in case you want to render your own keyboard. This article on CSS Tricks has all the details you need for the different inputmodes.

Browser support for inputmode is good for mobile operating systems these days, but a couple of years ago it was in the wilderness. For older browsers there is another trick to trigger the numeric keyboard and include a bit of extra validation for free.

pattern
The pattern attribute allows you to validate the contents of an input using a regular expression. Using the pattern [0-9]* tells the browser that we only accept numbers in the field and also triggers the number pad in browsers without inputmode support.
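As a quick sanity check, here is a small sketch (my own, not from the article) applying the same anchored match that browsers use for the pattern attribute:

```javascript
// The pattern attribute is matched against the whole value, as if it
// were wrapped in ^(?: ... )$. This mirrors that for pattern [0-9]*.
const tokenPattern = /^(?:[0-9]*)$/;

const isValidToken = (value) => tokenPattern.test(value);

console.log(isValidToken("123456")); // true: digits only
console.log(isValidToken("12a456")); // false: contains a letter
```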

Our HTML looks like this now:

    <input type="text" name="token" id="token" inputmode="numeric" pattern="[0-9]*" />
And the keyboard is a much simpler numerical input:

A web page shown in iOS Safari with a two factor authentication prompt. This time a simple keyboard of just numbers appears.

One thing that would make this form even easier to fill would be autocompletion of the one time password.

HTML autocomplete

According to MDN “autocomplete lets web developers specify what if any permission the user agent has to provide automated assistance in filling out form field values, as well as guidance to the browser as to the type of information expected in the field.”

In iOS and Safari on macOS we can take advantage of this to have the browser suggest two factor authentication codes that are sent to the device over SMS. Adding the autocomplete attribute with the value “one-time-code” will trigger this behaviour.

    <input type="text" name="token" id="token" inputmode="numeric" pattern="[0-9]*" autocomplete="one-time-code" />
With this, our input is complete and the experience for the user now looks like this:

A web page shown in iOS Safari with a two factor authentication prompt. There is also a message notification with a two factor authentication code and the keyboard is auto suggesting the code to be filled in.

Other useful autocomplete values

There are many autocomplete values available, covering everything from names and addresses to credit cards and other account details. For sign up and login there are a few autocomplete values that stand out as useful hints: username, email, new-password, current-password.

Browsers and password managers have very good heuristics for finding login forms on web pages, but using the username and current-password values makes it very obvious. You definitely want to consider these attributes if you are building a login form with the username and password on different pages.

In a sign up form, make sure to use the “new-password” value as it triggers password suggestions in some browsers.

A sign in form in Firefox. The password field has suggested a strong password.

Autocompleting one time passwords in other browsers

This autocomplete behaviour only currently exists in Safari on iOS and macOS, but the Chrome team is investigating similar ideas to streamline this process. Currently there is an experiment with an imperative SMS Receiver API modelled on Android’s SMS receiver API. This would allow developers to extract the one time password from the SMS and, because it’s in JavaScript, instantly submit the form, saving the user more time.

At the time of writing, this API is part of an origin trial, which allows you to test it out and give feedback to the Chrome team. If this interests you, sign up and give the API a whirl.

Better experiences through HTML

In this post we have seen that with just a sprinkling of HTML attributes we can improve the login experience for our users, particularly on mobile devices.

The input element is one of the most interesting HTML elements we have access to. Depending on its attributes, it can be a text field, a range slider, a file selector, a button, and many more.

If you are looking for an even better two factor authentication experience, then take a look at Authy push authentication which cuts out the copying of codes entirely.

Do you have any other tips on improving the login experience for your users? I’d love to hear them in the comments below or hit me up on Twitter at @philnash.


By Don Norman

In his annual Design in Tech report in 2019, my good friend and design visionary John Maeda declared that “design is not that important.” He is wrong, but also correct. Let me explain.

In his assessment, Maeda is not damning design—he is damning the wrong kind of designer. For technology companies, the wrong kind of designer can be pernicious, but the right kind of design (and designer) is essential.

Let me start by rephrasing his declaration to cover every discipline inside a company—marketing, sales, technology, manufacturing, service, etc.—where any one discipline is “X.” What Maeda was saying was that “‘X’ is not that important.” He is correct. Every discipline is important to the company (or else they wouldn’t be there), but no single aspect of a company is the most important. The best products are made by a collaborative team.

Maeda’s description of designers forcing their will upon the company is an example of designers who put themselves above all others. This is bad, he said (although not in those words). Of course it is bad, but why should it only apply to designers?

Well, here is one reason: designers often lack the skills of teamwork, of mutual discussion, and of compromise. Oh sure, designers work in teams and often compromise, but only when the team is made up of designers. But designers need to learn to compromise with programmers, engineers, sales, marketing, service, manufacturing, and of course, management. Many do not wish to do this. In this sense, they are similar to the other disciplines, but worse.

What is wrong with designers? Much of this disinclination to collaborate is rooted in their education—especially if they were trained in schools of art and design. Design is not art. The value systems are different. The skills required are different. A designer needs to understand the world, business models (margins, basic finance, and accounting), marketing, sales, manufacturing, and service. And of course, designing technology that people can use, understand, and take delight in. If the designer is developing a service or a business model, then the designer must consider all the myriad kinds of people, organizations, and frameworks that need to come together to produce smooth, cohesive, delightful results. No discipline, not design nor any of the ones that are “X,” can do this alone: They must form collaborative teams with the other disciplines.

The goals of the organization should be foremost in the mind of the employees, whether designer, engineer, marketing person, or executive. Each discipline usually focuses on one dimension of the complex mix of issues that are important for the company. What do the customers need? This is one of the questions for designers. What do customers want, and how much will they pay? These are questions for marketing. Will the product work properly, be reliable, and be delivered at a reasonable cost? These are important issues for engineers. Will the price be right and will the end result deliver value for everyone? This is something everyone must focus on. Will it be easy to understand and to use—especially when something goes wrong? Here is where good design can reduce service costs, to say nothing of customer frustration. And will the end result enhance people’s trust in the company?

I am pleased to say that I know many designers who do not fit Maeda’s categorization, including Maeda himself. But these designers are rare: They were able to escape their training and learn to think broadly. In many cases, they started off with degrees outside of design—something that we might require of all designers. Yes, many designers do play important roles in companies, but their life might have been easier had they been trained better. And many designers never escape from their lack of general, broad knowledge. That is why we must change design education. We are in the 21st century, and although the craft skills of many designers still produce wonderful results, the world needs much more than that. We need designers who can tackle major social issues. Designers whose creative thinking can move us forward. Designers who recognize that no single discipline can solve the major problems facing us; instead it will require that designers work with scientists, engineers, businesspeople, ethicists, planners, developers, and politicians.

We need to teach designers the importance of teamwork with the other disciplines. You can’t design a great product without a collaborative team of every discipline, where everyone is willing to make compromises if they will benefit the customers and the company’s profitability. I worked at Apple in the “between Jobs” era, and even though we had the best product in the world of computers, we were going bankrupt. I learned many lessons from that experience. For example, it doesn’t matter how good your product is if people won’t buy it. And you can say that about everything. Why was Apple failing? Ah, that’s a wonderful topic for late-night drinking parties. As for me, I have my answers, but the proper treatment would require a book, and although I do write books, I probably will not write this one. I prefer books that show optimism, that talk about what we need to do for the future. I do not dwell on the past.

So why do I say that Maeda is correct? He is correct when he talks about many of today’s designers. Why? Because of the way they are educated. To correct this deficiency, we have to change the way we educate designers.

Today, most design training focuses upon the design skills that produce wonderful craft. For some kinds of designers, this is necessary, but it is not sufficient. Design training seldom includes business principles, the role of evidence, ethics, and the havoc that our design creates all around the world.

We destroy the environment by mining rare materials we use to make our gadgets. We destroy the environment in manufacturing and shipping, and we destroy the environment in making products that only last a few years, with materials that cannot be readily separated and reused. The results are catastrophic for all of humankind. These are all important issues for designers, because it is the stuff that they design that leads to many of these problems. But we cannot put all the blame on designers: They are the soldiers, not the generals. Designers currently play a small role in this because they seldom are in a position to decide what products should be produced. But with better training, we might see more designers in the c-suites of companies, and chief design officers could bring these issues to light.

How do we change education in design? I’m working on this. For one thing, we have to separate designers from art schools and departments. And we have to ensure that designers have a broad education about history, ethics, and civilization. They need better education on science and technology. And they need to recognize the importance of system thinking. We cannot just concentrate on the small—we also have to think big.

This requires a major shift in design education: a new set of curricula, a new depth of understanding. But watch this space. I hope that in the next few years we will see movement in this direction, for although we need a revolution, there are many willing revolutionaries who can gather together to make powerful changes for the good. It will take time, because the proper revolution will change the curricula of design schools all over the world. But other disciplines have made this change: medicine, law, and business all are examples of disciplines that in the past century or so made major changes in their approach to education. They are all excellent models.

There is no single, simple answer to how we will make the change. In the three disciplines I listed above, it took diligent work by many people, followed by several decades before most educational institutions changed. Now it is design’s turn.

I hope to prove John Maeda wrong, and I hope one of the people who will help me do so is John Maeda.


This is a concept I first came across a few years back when Lea Verou wrote an article on it. Multi-range sliders have sadly been removed from the spec since, but something else that has happened in the meanwhile is that CSS got better — and so have I, so I recently decided to make my own 2019 version.

In this two-part article, we’ll go through the how, step-by-step, first building an example with two thumbs, then identify the issues with it. We’ll solve those issues, first for the two-thumb case then, in part two, come up with a better solution for the multi-thumb case.

Note how the thumbs can pass each other and we can have any possible order, with the fills in between the thumbs adapting accordingly. Surprisingly, the entire thing is going to require extremely little JavaScript.

Article Series:

  1. Multi-Thumb Sliders: Particular Two-Thumb Case (This Post)
  2. Multi-Thumb Sliders: General Case

Basic structure

We need two range inputs inside a wrapper. They both have the same minimum and maximum value (this is very important because nothing is going to work properly otherwise), which we set as custom properties on the wrapper (--min and --max). We also set their values as custom properties (--a and --b).

- let min = -50, max = 50
- let a = -30, b = 20;

.wrap(style=`--a: ${a}; --b: ${b}; --min: ${min}; --max: ${max}`)
  input#a(type='range' min=min value=a max=max)
  input#b(type='range' min=min value=b max=max)

This generates the following markup:
<div class="wrap" style="--a: -30; --b: 20; --min: -50; --max: 50">
  <input id="a" type="range" min="-50" value="-30" max="50">
  <input id="b" type="range" min="-50" value="20" max="50">
</div>
Accessibility considerations

We have two range inputs and they should probably each have a label, but we want our multi-thumb slider to have a single label. How do we solve this issue? We can make the wrapper a fieldset, use its legend to describe the entire multi-thumb slider, and have a label that’s only visible to screen readers for each of our range inputs. (Thanks to Zoltan for this great suggestion.)

But what if we want to have a flex or grid layout on our wrapper? That’s something we probably want, as the only other option is absolute positioning, and that comes with its own set of issues. Then we run into a Chromium issue where a fieldset cannot be a flex or grid container.

To go around this, we use the following ARIA equivalent (which I picked up from this post by Steve Faulkner):

- let min = -50, max = 50
- let a = -30, b = 20;

.wrap(role='group' aria-labelledby='multi-lbl' style=`--a: ${a}; --b: ${b}; --min: ${min}; --max: ${max}`)
  #multi-lbl Multi thumb slider:
  label.sr-only(for='a') Value A:
  input#a(type='range' min=min value=a max=max)
  label.sr-only(for='b') Value B:
  input#b(type='range' min=min value=b max=max)

The generated markup is now:

<div class="wrap" role="group" aria-labelledby="multi-lbl" style="--a: -30; --b: 20; --min: -50; --max: 50">
  <div id="multi-lbl">Multi thumb slider:</div>
  <label class="sr-only" for="a">Value A:</label>
  <input id="a" type="range" min="-50" value="-30" max="50">
  <label class="sr-only" for="b">Value B:</label>
  <input id="b" type="range" min="-50" value="20" max="50">
</div>

If we set an aria-label or an aria-labelledby attribute on an element, we also need to give it a role.

Basic styling

We make the wrapper a middle-aligned grid with two rows and one column. The bottom grid cell gets the dimensions we want for the slider, while the top one gets the same width as the slider, but can adjust its height according to the group label’s content.

$w: 20em;
$h: 1em;

.wrap {
  display: grid;
  grid-template-rows: max-content $h;
  margin: 1em auto;
  width: $w;
}

To visually hide the label elements, we absolutely position them and clip them to nothing:

.wrap {
  // same as before
  overflow: hidden; // in case
}

Some people might shriek about clip-path support, since using it cuts out pre-Chromium Edge and Internet Explorer, but that doesn’t matter in this particular case! We’ll get to the why behind that in a bit.

We place the sliders, one on top of the other, in the bottom grid cell:

input[type='range'] {
  grid-column: 1;
  grid-row: 2;
}

See the Pen by thebabydino (@thebabydino) on CodePen.

We can already notice a problem however: not only does the top slider track show up above the thumb of the bottom one, but the top slider makes it impossible for us to even click and interact with the bottom one using a mouse or touch.

In order to fix this, we remove any track backgrounds and borders and highlight the track area by setting a background on the wrapper instead. We also set pointer-events: none on the actual elements and then revert to auto on their thumbs.

@mixin track() {
  background: none; /* get rid of Firefox track background */
  height: 100%;
  width: 100%;
}

@mixin thumb() {
  background: currentcolor;
  border: none; /* get rid of Firefox thumb border */
  border-radius: 0; /* get rid of Firefox corner rounding */
  pointer-events: auto; /* catch clicks */
  width: $h; height: $h;
}

.wrap {
  /* same as before */
  background: /* emulate track with wrapper background */
    linear-gradient(0deg, #ccc $h, transparent 0);
}

input[type='range'] {
  &::-webkit-slider-thumb, & { -webkit-appearance: none; }
  /* same as before */
  background: none; /* get rid of white Chrome background */
  color: #000;
  font: inherit; /* fix too small font-size in both Chrome & Firefox */
  margin: 0;
  pointer-events: none; /* let clicks pass through */
  &::-webkit-slider-runnable-track { @include track; }
  &::-moz-range-track { @include track; }
  &::-webkit-slider-thumb { @include thumb; }
  &::-moz-range-thumb { @include thumb; }
}
Note that we’ve set a few more styles on the input itself as well as on the track and thumb in order to make the look consistent across the browsers that support letting clicks pass through the actual input elements and their tracks, while allowing them on the thumbs. This excludes pre-Chromium Edge and IE, which is why we haven’t included the -ms- prefix — there’s no point styling something that wouldn’t be functional in these browsers anyway. This is also why we can use clip-path to hide the elements.

If you’d like to know more about default browser styles in order to understand what’s necessary to override here, you can check out this article where I take an in-depth look at range inputs (and where I also detail the reasoning behind using mixins here).

See the Pen by thebabydino (@thebabydino) on CodePen.

Alright, we now have something that looks functional. But in order to really make it functional, we need to move on to the JavaScript!

The JavaScript
The JavaScript is pretty straightforward. We need to update the custom properties we’ve set on the wrapper. (For an actual use case, they’d be set higher up in the DOM so that they’re also inherited by the other elements whose styles depend on them.)

addEventListener('input', e => {
  let _t = e.target;
  _t.parentNode.style.setProperty(`--${_t.id}`,  _t.value)
}, false);

See the Pen by thebabydino (@thebabydino) on CodePen.

However, unless we bring up DevTools to see that the values of those two custom properties really change in the style attribute of the wrapper .wrap, it’s not really obvious that this does anything. So let’s do something about that!

Showing values

Something we can do to make it obvious that dragging the thumbs actually changes something is to display the current values. In order to do this, we use an output element for each input:

- let min = -50, max = 50
- let a = -30, b = 20;

.wrap(role='group' aria-labelledby='multi-lbl' style=`--a: ${a}; --b: ${b}; --min: ${min}; --max: ${max}`)
  #multi-lbl Multi thumb slider:
  label.sr-only(for='a') Value A:
  input#a(type='range' min=min value=a max=max)
  output(for='a' style='--c: var(--a)')
  label.sr-only(for='b') Value B:
  input#b(type='range' min=min value=b max=max)
  output(for='b' style='--c: var(--b)')

The resulting HTML looks as follows:

<div class="wrap" role="group" aria-labelledby="multi-lbl" style="--a: -30; --b: 20; --min: -50; --max: 50">
  <div id="multi-lbl">Multi thumb slider:</div>
  <label class="sr-only" for="a">Value A:</label>
  <input id="a" type="range" min="-50" value="-30" max="50">
  <output for="a" style="--c: var(--a)"></output>
  <label class="sr-only" for="b">Value B:</label>
  <input id="b" type="range" min="-50" value="20" max="50">
  <output for="b" style="--c: var(--b)"></output>
</div>

We display the values in an ::after pseudo-element using a little counter trick:

output {
  &::after {
    counter-reset: c var(--c);
    content: counter(c);
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

It’s now obvious these values change as we drag the sliders, but the result is ugly and it has messed up the wrapper background alignment, so let’s add a few tweaks! We could absolutely position the outputs but, for now, we simply squeeze them in a row between the group label and the sliders:

.wrap {
  // same as before
  grid-template: repeat(2, max-content) #{$h}/ 1fr 1fr;
}

[id='multi-lbl'] { grid-column: 1/ span 2 }

input[type='range'] {
  // same as before
  grid-column: 1/ span 2;
  grid-row: 3;
}

output {
  grid-row: 2;
  &:last-child { text-align: right; }
  &::after {
    content: '--' attr(for) ': ' counter(c) ';';
    counter-reset: c var(--c);
  }
}

Much better!

See the Pen by thebabydino (@thebabydino) on CodePen.

Setting separate :focus styles even gives us something that doesn’t look half bad, plus allows us to see which value we’re currently modifying.

input[type='range'] {
  /* same as before */
  z-index: 1;

  &:focus {
    z-index: 2;
    outline: dotted 1px currentcolor;
    &, & + output { color: darkorange }
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

All we need now is to create the fill between the thumbs.

The tricky part

We can recreate the fill with an ::after pseudo-element on the wrapper, which we place on the bottom grid row where we’ve also placed the range inputs. This pseudo-element comes, as the name suggests, after the inputs, but it will still show up underneath them because we’ve set positive z-index values on them. Note that setting the z-index works on the inputs (without explicitly setting their position to something different from static) because they’re grid children.

The width of this pseudo-element should be proportional to the difference between the higher input value and the lower input value. The big problem here is that they pass each other and we have no way of knowing which has the higher value.

First approach

My first idea on how to solve this was by using width and min-width together. In order to better understand how this works, consider that we have two percentage values, --a and --b, and we want to make an element’s width be the absolute value of the difference between them.

Either one of the two values can be the bigger one, so we pick an example where --b is bigger and an example where --a is bigger:

We set width to the second value (--b) minus the first (--a) and min-width to the first value (--a) minus the second one (--b).

div {
  background: #f90;
  height: 4em;
  min-width: calc(var(--a) - var(--b));
  width: calc(var(--b) - var(--a));
}

If the second value (--b) is bigger, then the width is positive (which makes it valid) and the min-width negative (which makes it invalid). That means the computed value is the one set via the width property. This is the case in the first example, where --b is 70% and --a is 50%. That means the width computes to 70% - 50% = 20%, while the min-width computes to 50% - 70% = -20%.

If the first value is bigger, then the width is negative (which makes it invalid) and the min-width is positive (which makes it valid), meaning the computed value is that set via the min-width property. This is the case in the second example, where --a is 80% and --b is 30%, meaning the width computes to 30% - 80% = -50%, while the min-width computes to 80% - 30% = 50%.
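The net effect is that the computed width is always the absolute difference between the two values. Here is a little model of that resolution (my own sketch, not browser code):

```javascript
// Model of the width/min-width trick: a negative candidate length is
// invalid (treated as 0 here), and min-width clamps width from below,
// so the winner is always |b - a|.
function fillWidth(a, b) {
  const width = Math.max(b - a, 0);     // valid only when b >= a
  const minWidth = Math.max(a - b, 0);  // valid only when a >= b
  return Math.max(width, minWidth);
}

console.log(fillWidth(50, 70)); // 20: width wins
console.log(fillWidth(80, 30)); // 50: min-width wins
```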

See the Pen by thebabydino (@thebabydino) on CodePen.

Applying this solution for our two thumb slider, we have:

.wrap {
  /* same as before */
  --dif: calc(var(--max) - var(--min));
  &::after {
    content: '';
    background: #95a;
    grid-column: 1/ span 2;
    grid-row: 3;
    min-width: calc((var(--a) - var(--b))/var(--dif)*100%);
    width: calc((var(--b) - var(--a))/var(--dif)*100%);
  }
}

In order to represent the width and min-width values as percentages, we need to divide the difference between our two values by the difference (--dif) between the maximum and the minimum of the range inputs and then multiply the result we get by 100%.
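To double-check the arithmetic with the demo's values (min: -50, max: 50, a: -30, b: 20), here is a standalone sketch of the calc() expressions above (the helper name is my own):

```javascript
// Mirrors the width / min-width calc() expressions:
// (value2 - value1) / (max - min) * 100.
function fillPercent(value1, value2, min, max) {
  return (value2 - value1) / (max - min) * 100;
}

console.log(fillPercent(-30, 20, -50, 50)); // width: 50 (percent)
console.log(fillPercent(20, -30, -50, 50)); // min-width: -50, invalid
```

The negative min-width result is exactly what makes that property invalid, so the browser falls back to the width value.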

See the Pen by thebabydino (@thebabydino) on CodePen.

So far, so good… so what?

The ::after always has the right computed width, but we also need to offset it from the track minimum by the smaller value and we can’t use the same trick for its margin-left property.

My first instinct here was to use left, but actual offsets don’t work on their own. We’d have to also explicitly set position: relative on our ::after pseudo-element in order to make it work. I felt kind of meh about doing that, so I opted for margin-left instead.

The question is what approach can we take for this second property. The one we’ve used for the width doesn’t work because there is no such thing as min-margin-left.

A min() function is now in the CSS spec, but at the time when I coded these multi-thumb sliders, it was only implemented by Safari (it has since landed in Chrome as well). Safari-only support was not going to cut it for me since I don’t own any Apple device or know anyone in real life who does… so I couldn’t play with this function! And not being able to come up with a solution I could actually test meant having to change the approach.

Second approach

This involves using both of our wrapper’s (.wrap) pseudo-elements: one pseudo-element’s margin-left and width being set as if the second value is bigger, and the other’s set as if the first value is bigger.

With this technique, if the second value is bigger, the width we’re setting on ::before is positive and the one we’re setting on ::after is negative (which means it’s invalid and the default of 0 is applied, hiding this pseudo-element). Meanwhile, if the first value is bigger, then the width we’re setting on ::before is negative (so it’s this pseudo-element that has a computed width of 0 and is not being shown in this situation) and the one we’re setting on ::after is positive.

Similarly, we use the first value (--a) to set the margin-left property on the ::before since we assume the second value --b is bigger for this pseudo-element. That means --a is the value of the left end and --b the value of the right end.

For ::after, we use the second value (--b) to set the margin-left property, since we assume the first value --a is bigger for this pseudo-element. That means --b is the value of the left end and --a the value of the right end.

Let’s see how we put it into code for the same two examples we previously had, where one has --b bigger and another where --a is bigger:

div {
  &::before, &::after {
    content: '';
    height: 5em;
  }
  &::before {
    margin-left: var(--a);
    width: calc(var(--b) - var(--a));
  }
  &::after {
    margin-left: var(--b);
    width: calc(var(--a) - var(--b));
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

Applying this technique for our two thumb slider, we have:

.wrap {
  /* same as before */
  --dif: calc(var(--max) - var(--min));
  &::before, &::after {
    grid-column: 1/ span 2;
    grid-row: 3;
    height: 100%;
    background: #95a;
    content: '';
  }
  &::before {
    margin-left: calc((var(--a) - var(--min))/var(--dif)*100%);
    width: calc((var(--b) - var(--a))/var(--dif)*100%);
  }
  &::after {
    margin-left: calc((var(--b) - var(--min))/var(--dif)*100%);
    width: calc((var(--a) - var(--b))/var(--dif)*100%);
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

We now have a nice functional slider with two thumbs. But this solution is far from perfect.


The first issue is that we didn’t get those margin-left and width values quite right. It’s just not noticeable in this demo due to the thumb styling (such as its shape, its dimensions relative to the track, and it being fully opaque).

But let’s say our thumb is round and maybe even smaller than the track height:

See the Pen by thebabydino (@thebabydino) on CodePen.

We can now see what the problem is: the endlines of the fill don’t coincide with the vertical midlines of the thumbs.

This is because of the way moving the thumb end-to-end works. In Chrome, the thumb’s border-box moves within the limits of the track’s content-box, while in Firefox, it moves within the limits of the slider’s content-box. This can be seen in the recordings below, where the padding is transparent, while the content-box and the border are semi-transparent. We’ve used orange for the actual slider, red for the track and purple for the thumb.

Animated gif. Chrome only moves the thumb within the left and right limits of the track's content-box.
Recording of the thumb motion in Chrome from one end of the slider to the other.

Note that the track’s width in Chrome is always determined by that of the parent slider – any width value we may set on the track itself gets ignored. This is not the case in Firefox, where the track can also be wider or narrower than its parent slider. As we can see below, this makes it even more clear that the thumb’s range of motion depends solely on the slider width in this browser.

Animated gif. Firefox moves the thumb within the left and right limits of the actual range input's content-box.
Recording of the thumb motion in Firefox from one end of the slider to the other. The three cases (the track’s border-box perfectly fits the slider’s content-box horizontally, it’s longer and it’s shorter) are displayed from top to bottom.

In our particular case (and, to be fair, in a lot of other cases), we can get away with not having any margin, border or padding on the track. That would mean its content-box coincides with that of the actual range input, so there are no inconsistencies between browsers.

But what we need to keep in mind is that the vertical midlines of the thumbs (which we need to coincide with the fill endpoints) move between half a thumb width (or a thumb radius if we have a circular thumb) away from the start of the track and half a thumb width away from the end of the track. That’s an interval equal to the track width minus the thumb width (or the thumb diameter in the case of a circular thumb).
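This mapping from a slider value to the horizontal position of a thumb’s midline can be sketched as a tiny helper (the function and option names here are mine, for illustration only):

```javascript
// Position (in px, from the track's start) of the thumb's vertical midline
// for a given slider value. The midline travels the "useful width":
// track width minus thumb diameter, offset by a thumb radius at each end.
function midlineX(value, { min, max, w, d }) {
  const r = d / 2;   // thumb radius
  const uw = w - d;  // useful width
  return r + ((value - min) / (max - min)) * uw;
}
```

With a 200px track and a 20px thumb, the midline goes from 10px at the minimum value to 190px at the maximum, never 0 to 200, which is exactly why values computed relative to 100% of the track width overshoot.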

This can be seen in the interactive demo below where the thumb can be dragged to better see the interval its vertical midline (which we need to coincide with the fill’s endline) moves within.

See the Pen by thebabydino (@thebabydino) on CodePen.

The demo is best viewed in Chrome and Firefox.

The fill width and margin-left values are not relative to 100% (or the track width), but to the track width minus the thumb width (which is also the diameter in the particular case of a circular thumb). Also, the margin-left values don’t start from 0, but from half a thumb width (which is a thumb radius in our particular case).

$d: .5*$h; // thumb diameter
$r: .5*$d; // thumb radius
$uw: $w - $d; // useful width

.wrap {
  /* same as before */
  --dif: calc(var(--max) - var(--min));
  &::before {
    margin-left: calc(#{$r} + (var(--a) - var(--min))/var(--dif)*#{$uw});
    width: calc((var(--b) - var(--a))/var(--dif)*#{$uw});
  }
  &::after {
    margin-left: calc(#{$r} + (var(--b) - var(--min))/var(--dif)*#{$uw});
    width: calc((var(--a) - var(--b))/var(--dif)*#{$uw});
  }
}

Now the fill starts and ends exactly where it should, along the midlines of the two thumbs:

See the Pen by thebabydino (@thebabydino) on CodePen.

This one issue has been taken care of, but we still have a way bigger one. Let’s say we want to have more thumbs, say four:

Animated gif. Shows a slider with four thumbs which can pass each other and be in any order, while the fills are always between the two thumbs with the two smallest values and between the two thumbs with the two biggest values, regardless of their order in the DOM.
An example with four thumbs.

We now have four thumbs that can all pass each other, and we have no way of knowing their current order. Moreover, we only have two pseudo-elements, so we cannot apply the same technique. Can we still find a CSS-only solution?

Well, the answer is yes! But it means scrapping this solution and going for something different and way more clever — in part two of this article!

Article Series:

  1. Multi-Thumb Sliders: Particular Two-Thumb Case (This Post)
  2. Multi-Thumb Sliders: General Case (Coming Tomorrow!)

Du Hoang

In Virtual Reality, user interfaces do not exist in a void but within a 3D environment. When designing user interfaces for VR, UI designers have to stretch their skillset to include environment design, because now the UI is an extension of the environment, or vice versa.

It can be a daunting task for traditional UX/UI designers to create a 3D scene because it involves completely different tools and domain knowledge than what we were trained for. But fear not, UI designers: there are ways to make an environment for your Virtual Reality UI without the need for 3D models or fancy shader graphs. And that is by creating a 360 image, or skybox.

You might have seen 360 photos on Facebook or elsewhere. Viewing a 360 photo is like being transported to that location. And a 360 photo is, just like any other photo, a 2-dimensional image. If you know how to edit images (via Photoshop or other means), then you know how to create 360 photos/skyboxes.

Source: Flickr

You might also have noticed that 360 photos, a.k.a. equirectangular images, are severely distorted. And that both the front and the back of a scene are displayed side-by-side. How does it all work?

An equirectangular image is a type of projection of a 2D image onto a 3D sphere. Imagine printing out a map of the Earth, and then wrapping it around a globe.
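The “map wrapped around a globe” idea boils down to a simple linear mapping from spherical coordinates to image coordinates. A minimal sketch (function and variable names are mine):

```javascript
// Convert a direction given as longitude/latitude (in radians) to
// normalized (u, v) coordinates in an equirectangular image.
// Longitude spans [-PI, PI] across the full width;
// latitude spans [-PI/2, PI/2] across the full height.
function equirectUV(lon, lat) {
  const u = (lon + Math.PI) / (2 * Math.PI);
  const v = (Math.PI / 2 - lat) / Math.PI; // v = 0 at the top (north pole)
  return { u, v };
}
```

The severe distortion near the top and bottom of a 360 photo comes from this mapping: every horizontal row of pixels covers a full circle of longitude, even the tiny circles near the poles.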

In a 3D scene, an equirectangular image can be used as the background image, or skybox, that surrounds the scene. There are other types of skybox projections with their own pros and cons, but for this tutorial we will focus on creating an equirectangular image.

The simplest way to make an equirectangular image is to take a 360 photo with a camera. If you don’t own a 360 camera then most mobile phones can also do the job. Simply go to the location of your choice, snap a 360 photo, and you are done!

But what if you want to create a futuristic scene, or something else that doesn’t exist altogether?

Well, since equirectangular images are just 2D images, you can try to Photoshop/illustrate one, distortions and all. Here is a guide on how to do just that by a talented individual: Drawing a Spherical Panorama.

I personally do not recommend manually drawing equirectangular images. While it is a fun exercise, it takes an expert illustrator to do a convincing job, and no matter how great of an illustrator you are, you will never be able to reproduce the required distortions perfectly.

The easier way to go is to draw/Photoshop images that make up the six directions, put them into a 3D program, then take a 360 screenshot of that scene.

To create a skybox using images alone, you will need Photoshop or equivalent, and a 3D program. In our case we will use Unity.

For this example, I want to create a cartoony environment with clouds and stars against a bright sky. The ground will be a sphere with a cute tree growing out of it.

To start, we make square illustrations of the different objects in the scene with Photoshop, and save them as transparent PNGs.

2D illustrations of the objects in our 3D scene

Next, we create a new 3D project in Unity and import our PNGs. For each illustration we want to create a new Material, set the Shader type to be Unlit/Transparent and assign the PNG as texture.

In our scene, we’ll make a Quad (a square geometry). Go to Create > 3D Object > Quad, and assign the Material to it, by dragging and dropping the Material on top of the Quad. If done correctly, we should see the illustration displayed on the Quad with proper transparency.

Selecting the MainCamera, we need to reposition it at the point of origin (x: 0, y: 0, z: 0). Then we make more Quads of the various objects, place them around the MainCamera, facing it by rotating the Quad to be perpendicular to the camera, and compose our environment in that manner.

Once you have constructed an environment to your liking, you can take a 360 screenshot of it using this Unity plugin: Sky Capture or similar free assets. The result is a perfectly distorted equirectangular image that can be used as a backdrop for your VR applications and prototypes.

Resulting equirectangular image of our skybox

While our example is a cartoony skybox, you can also create photorealistic skyboxes using the same technique. The limit is your imagination!


It was a late evening.

My colleague has just checked in the code that they’ve been writing all week. We were working on a graphics editor canvas, and they implemented the ability to resize shapes like rectangles and ovals by dragging small handles at their edges.

The code worked.

But it was repetitive. Each shape (such as a rectangle or an oval) had a different set of handles, and dragging each handle in different directions affected the shape’s position and size in a different way. If the user held Shift, we’d also need to preserve proportions while resizing. There was a bunch of math.

The code looked something like this:

let Rectangle = {
  resizeTopLeft(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeTopRight(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeBottomLeft(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeBottomRight(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
};

let Oval = {
  resizeLeft(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeRight(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeTop(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeBottom(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
};

let Header = {
  resizeLeft(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeRight(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
};

let TextBlock = {
  resizeTopLeft(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeTopRight(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeBottomLeft(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
  resizeBottomRight(position, size, preserveAspect, dx, dy) { /* 10 repetitive lines of math */ },
};

That repetitive math was really bothering me.

It wasn’t clean.

Most of the repetition was between similar directions. For example, Oval.resizeLeft() had similarities with Header.resizeLeft(). This was because they both dealt with dragging the handle on the left side.

The other similarity was between the methods for the same shape. For example, Oval.resizeLeft() had similarities with the other Oval methods. This was because they all dealt with ovals. There was also some duplication between Rectangle, Header, and TextBlock because text blocks were rectangles.

I had an idea.

We could remove all duplication by grouping the code like this instead:

let Directions = {
  top(...) { /* 5 unique lines of math */ },
  left(...) { /* 5 unique lines of math */ },
  bottom(...) { /* 5 unique lines of math */ },
  right(...) { /* 5 unique lines of math */ },
};

let Shapes = {
  Oval(...) { /* 5 unique lines of math */ },
  Rectangle(...) { /* 5 unique lines of math */ },
};
and then composing their behaviors:

let {top, bottom, left, right} = Directions;

function createHandle(directions) {
  // 20 lines of code
}

let fourCorners = [
  createHandle([top, left]),
  createHandle([top, right]),
  createHandle([bottom, left]),
  createHandle([bottom, right]),
];
let fourSides = [
  createHandle([top]),
  createHandle([left]),
  createHandle([bottom]),
  createHandle([right]),
];
let twoSides = [
  createHandle([left]),
  createHandle([right]),
];

function createBox(shape, handles) {
  // 20 lines of code
}

let Rectangle = createBox(Shapes.Rectangle, fourCorners);
let Oval = createBox(Shapes.Oval, fourSides);
let Header = createBox(Shapes.Rectangle, twoSides);
let TextBox = createBox(Shapes.Rectangle, fourCorners);

The code is half the total size, and the duplication is gone completely! So clean. If we want to change the behavior for a particular direction or a shape, we could do it in a single place instead of updating methods all over the place.
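As a rough illustration of how such composition could work (the direction math here is invented; the real bodies are elided in the snippets above):

```javascript
// Each direction knows how dragging a handle affects the box along one axis.
// The specific math is a plausible sketch, not the original implementation.
let Directions = {
  left:   (box, dx, dy) => ({ ...box, x: box.x + dx, w: box.w - dx }),
  right:  (box, dx, dy) => ({ ...box, w: box.w + dx }),
  top:    (box, dx, dy) => ({ ...box, y: box.y + dy, h: box.h - dy }),
  bottom: (box, dx, dy) => ({ ...box, h: box.h + dy }),
};

// A handle composes one or two directions into a single resize function.
function createHandle(directions) {
  return (box, dx, dy) => directions.reduce((b, dir) => dir(b, dx, dy), box);
}

let { bottom, right } = Directions;
let bottomRight = createHandle([bottom, right]);

// Dragging the bottom-right handle grows the box without moving its origin.
let box = bottomRight({ x: 0, y: 0, w: 100, h: 50 }, 10, 5);
// box is { x: 0, y: 0, w: 110, h: 55 }
```

Each direction is written once and reused by every handle that involves it, which is exactly where the deduplication comes from.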

It was already late at night (I got carried away). I checked in my refactoring to master and went to bed, proud of how I untangled my colleague’s messy code.

The Next Morning

… did not go as expected.

My boss invited me for a one-on-one chat where they politely asked me to revert my change. I was aghast. The old code was a mess, and mine was clean!

I begrudgingly complied, but it took me years to see they were right.

It’s a Phase

Obsessing with “clean code” and removing duplication is a phase many of us go through. When we don’t feel confident in our code, it is tempting to attach our sense of self-worth and professional pride to something that can be measured. A set of strict lint rules, a naming schema, a file structure, a lack of duplication.

You can’t automate removing duplication, but it does get easier with practice. You can usually tell whether there’s less or more of it after every change. As a result, removing duplication feels like improving some objective metric about the code. Worse, it messes with people’s sense of identity: “I’m the kind of person who writes clean code”. It’s as powerful as any sort of self-deception.

Once we learn how to create abstractions, it is tempting to get high on that ability, and pull abstractions out of thin air whenever we see repetitive code. After a few years of coding, we see repetition everywhere — and abstracting is our new superpower. If someone tells us that abstraction is a virtue, we’ll eat it. And we’ll start judging other people for not worshipping “cleanliness”.

I see now that my “refactoring” was a disaster in two ways:

  • Firstly, I didn’t talk to the person who wrote it. I rewrote the code and checked it in without their input. Even if it was an improvement (which I don’t believe anymore), this is a terrible way to go about it. A healthy engineering team is constantly building trust. Rewriting your teammate’s code without a discussion is a huge blow to your ability to effectively collaborate on a codebase together.

  • Secondly, nothing is free. My code traded the ability to change requirements for reduced duplication, and it was not a good trade. For example, we later needed many special cases and behaviors for different handles on different shapes. My abstraction would have to become several times more convoluted to afford that, whereas with the original “messy” version such changes stayed easy as cake.

Am I saying that you should write “dirty” code? No. I suggest thinking deeply about what you mean when you say “clean” or “dirty”. Do you get a feeling of revolt? Righteousness? Beauty? Elegance? How sure are you that you can name the concrete engineering outcomes corresponding to those qualities? How exactly do they affect the way the code is written and modified?

I sure didn’t think deeply about any of those things. I thought a lot about how the code looked — but not about how it evolved with a team of squishy humans.

Coding is a journey. Think how far you came from your first line of code to where you are now. I reckon it was a joy to see for the first time how extracting a function or refactoring a class can make convoluted code simple. If you find pride in your craft, it is tempting to pursue cleanliness in code. Do it for a while.

But don’t stop there. Don’t be a clean code zealot. Clean code is not a goal. It’s an attempt to make some sense out of the immense complexity of systems we’re dealing with. It’s a defense mechanism when you’re not yet sure how a change would affect the codebase but you need guidance in a sea of unknowns.

Let clean code guide you. Then let it go.


Posted Jan 11, 2020

If you know anything about web accessibility, then you probably already know that all form inputs need to have a <label> (or, failing that, an aria-supplied label).

What you may not know is what to do when you have something like this:

What do you have for sale? Radio button one says 'spam', radio button two says 'more spam'.

The radio buttons have… two labels. How do you have two labels?

2 labels 1 input

Now you might be forgiven for thinking that the way to solve this problem is as simple as it sounds:
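The code sample was lost from this copy; the “simple” approach in question is presumably nesting one label inside another, roughly:

```html
<!-- Doesn't work: a label may not contain other label elements -->
<label>What do you have for sale?
  <label><input type="radio" name="sale" value="spam"> spam</label>
  <label><input type="radio" name="sale" value="more-spam"> more spam</label>
</label>
```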

Unfortunately this doesn’t work. According to the spec, a <label> may contain:

Phrasing content, but with no descendant labelable elements unless it is the element’s labeled control, and no descendant label elements.

You also can’t do this:
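The second ruled-out attempt is presumably a single label pointing at both inputs via the for attribute, roughly:

```html
<!-- Doesn't work: for only accepts a single ID -->
<label for="spam more-spam">What do you have for sale?</label>
<label><input type="radio" id="spam" name="sale"> spam</label>
<label><input type="radio" id="more-spam" name="sale"> more spam</label>
```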

Unfortunately, the for attribute only takes a single ID.

Great. So now what?

<fieldset> and <legend> to the rescue

Enter the dynamic duo: <fieldset> and <legend>.

You might already be familiar with the <fieldset> element as a way to group form controls and labels—that’s the way it’s described on the MDN page—but unfortunately that description is rather disappointingly vague and doesn’t really explain what <fieldset> is meant for (more on that later). After all, a <div> groups things together as well—so what’s the big deal aside from the special default styling?

Well, when paired with its sidekick, the <legend> element, it helps screen reader users by providing that second label that we need. Consider the following code:

What do you have for sale?
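The markup itself was stripped from this copy; the fieldset/legend version presumably looks something like:

```html
<fieldset>
  <legend>What do you have for sale?</legend>
  <label><input type="radio" name="sale" value="spam"> spam</label>
  <label><input type="radio" name="sale" value="more-spam"> more spam</label>
</fieldset>
```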

When a screen reader encounters this code, it will announce the legend to the user along with the label.

Note 1: As per the spec, the <legend> element must be the first child of the <fieldset> element.

Note 2: Screen readers handle this information in slightly different ways, though. JAWS will announce the legend for every input in the fieldset, whereas NVDA only announces it upon entering the fieldset. This latter point is something to keep in mind as you design your form, since NVDA users may not realize when they’ve exited the fieldset.

But wait, there’s more!

I mentioned earlier that fieldset elements are meant for something more than just grouping related form controls. Well, strictly speaking that’s true, but we should be more specific about what we mean by “related”. After all, you could argue that most form inputs are related—that’s why they’re part of the same form. So, more accurately, we should say that fieldsets are meant for grouping related form controls that would be unclear or confusing if not grouped.

So what do I mean by that? Well consider a checkout form that asks for both a billing address and a shipping address:

Two sets of address inputs, one for billing address, the other for shipping address.

As a sighted user, I can easily distinguish which “street” input goes with which address, but for a screen reader user this could get confusing in a hurry. So, once again, here come our superhero elements to help us out:

Billing Address
Shipping Address
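A sketch of that grouping (field names are illustrative):

```html
<fieldset>
  <legend>Billing Address</legend>
  <label>Street <input type="text" name="billing-street"></label>
  <label>City <input type="text" name="billing-city"></label>
</fieldset>

<fieldset>
  <legend>Shipping Address</legend>
  <label>Street <input type="text" name="shipping-street"></label>
  <label>City <input type="text" name="shipping-city"></label>
</fieldset>
```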

As with before, this may not be perfect for NVDA users due to the way it chooses to handle the legend, but it’s a heck of a lot better than nothing.

What if I don’t want to use <fieldset> and <legend>?

Actually, there is a reason you may want to set aside the first rule of ARIA and not use these elements. Remember how we mentioned earlier that <fieldset> and <legend> have default styling? Yeah… about that… it turns out they have very special default styling. This can make them rather difficult to work with, as you usually have to resort to floating and some funky margins to get the result you want.

So if we want to avoid that nonsense then we can achieve a similar result by doing this:
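The example was stripped from this copy; one common substitute, and presumably what it showed, is an ARIA group, roughly:

```html
<div role="group" aria-labelledby="sale-question">
  <p id="sale-question">What do you have for sale?</p>
  <label><input type="radio" name="sale" value="spam"> spam</label>
  <label><input type="radio" name="sale" value="more-spam"> more spam</label>
</div>
```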

What if I don’t like groups, either?

You know, because you like making life difficult. [sigh]

If you’re unable to wrap your fields in a grouping like this, there are some other ways around the problem. Just note that the drawback to these approaches is that things are going to get a little verbose for your screen reader users, since the full text will be announced for every input (as opposed to the <fieldset> or ARIA group options, where the verbosity is at the discretion of the screen reader software).

For this route, your options are…

Use hidden text
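For example (the class name is illustrative), a visually hidden span inside each label:

```html
<label for="billing-street">
  <span class="visually-hidden">Billing Address</span>
  Street
</label>
<input type="text" id="billing-street">
```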

Some aria-labelledby magic on the inputs themselves

Here we can take advantage of the fact that, unlike the for attribute, aria-labelledby can take a string-delimited list of IDs, instead of just a single one.

Billing Address
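A sketch of that technique (the IDs are illustrative):

```html
<h2 id="billing">Billing Address</h2>

<label id="billing-street-label" for="billing-street">Street</label>
<input type="text" id="billing-street"
       aria-labelledby="billing billing-street-label">
```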

A good, ol’-fashioned aria-label

Not sure that aria-label qualifies as old-fashioned, but it makes for a good heading. Just be sure that your aria-label includes the same text as the visible label, ideally at the start. This is to ensure that people who use voice control software can focus the input by name.

Wrapping Up

So there you have it. A whole mess of options for ensuring that your form fields are properly identified for everyone.


This collection of social media marketing and new hire announcements is a compilation of the past week’s briefs from our daily Marketing Land newsletter. Click here to subscribe and get more news like this delivered to your inbox every morning.

Facebook Shorts: New Page admin tools, update on political ad policies and the Teen Vogue gaffe

A Page Management History feature for admins. Social media consultant Matt Navarra spotted a new Facebook Pages feature called Page Management History that shows all actions taken on a Page, when they were made and who made them. The feature is only visible to people who help manage your Page. Admins can click on the “Settings” tab at the top of the Page and then click “Page Management History” link in the left column. The history goes back to Nov. 1, 2019, and can be downloaded to your business history in Business Manager. 

The latest iteration of Facebook’s political ad policies. Facebook has once again updated its stance on political ads. A new control will allow users to see fewer political and social issue ads in their Facebook and Instagram feeds. From their “Ad Preferences,” users can opt to see fewer ads on topics or interests. Facebook also updated search filters in Ad Library, making it possible to search for ads via exact phrases and adding several new filters to better analyze results (audience size, dates and regions reached). 

Teen Vogue and Facebook’s sponsored content fiasco. In case you missed it, Teen Vogue published a story titled “How Facebook Is Helping Ensure the Integrity of the 2020 Election” without a byline — or a sponsored content label. Criticism flooded in, and Teen Vogue added a “sponsored editorial content” label and byline from a Teen Vogue contributor who said she knew nothing about it. Facebook denied – then acknowledged – it was part of a sponsored package. Teen Vogue eventually removed the piece and issued a statement: “We made a series of errors labeling this piece, and we apologize for any confusion this may have caused. We don’t take our audience’s trust for granted, and ultimately decided that the piece should be taken down entirely to avoid further confusion.” 

Twitter is getting rid of its Audience Insights tab, Facebook reminds API advertisers of Special Ad Category

Twitter says goodbye to Audience Insights. Twitter has confirmed that it will be removing the Audience Insights tab from Twitter Analytics at the end of January. Audience Insights initially launched back in 2015 to give users more in-depth data about their followers, including demographic profiles, purchase behavior insights, mobile device usage stats and more. Now, when users go to view Insights in Twitter Analytics, they will see a notification that the feature is being removed. For now, Twitter has yet to provide further information about a possible tool alternative, though it’s worth noting that Twitter added a new ‘Conversation Insights’ feature to Media Studio in November last year.

Facebook’s gentle ad policies reminder. Last year, Facebook introduced a new policy for U.S.-based advertisers running ads in its “Special Ad Category,” which includes housing, employment or credit opportunities. By February 11, 2020, Facebook will require all U.S. advertisers to identify any active campaigns that belong to a Special Ad Category that were created before Dec. 4, 2019. Advertisers will also need to update targeting settings for such campaigns or the ads will no longer be allowed to run. By March 31, all U.S. businesses creating new ads in the housing, employment, or credit categories must specify the Special Ad Category and update targeting settings to comply with restrictions for Special Ad Category campaigns. All businesses creating ads that do not offer housing, employment, or credit opportunities must indicate “None” in the Special Ad Category field for all campaigns, or the ads will no longer be allowed to run.

Facebook takes action against manipulated media, YouTube issues a policy reminder

Facebook policies on manipulated media. Amid growing concerns around deepfake technology, Facebook is formally taking steps to address and combat manipulated media with a new policy framework. Facebook’s approach has several components, including investigating AI-generated content for deceptive behaviors and teaming up with government and industry partners to expose people behind malicious efforts. Going forward, Facebook will remove misleading manipulated media if it meets the following criteria: Media has been edited or synthesized (beyond adjustments for clarity or quality) in ways that aren’t apparent and could mislead someone into thinking that a subject said words that they did not actually say; or, that the media is driven by AI or machine learning technology that “merges, replaces or superimposes content onto a video” in an attempt to make the final video appear original. 

YouTube’s data collection policy on kids’ content. On Monday, YouTube reminded creators that its new restrictions on data collection on kids’ videos is now coming into effect. The policy was initially announced in September, outlining measures that include removing data targeting from videos identified as being directed at children. In addition to adding a new audience setting in YouTube Studio to help creators indicate whether or not their content is made for kids, YouTube said it will “treat data from anyone watching children’s content on YouTube as coming from a child, regardless of the age of the user.” Essentially, YouTube will limit data collection on children’s content to the bare minimum needed to support “the operation of the service.” YouTube said it will also stop serving personalized ads on videos made for kids and will remove some of the features on kids’ content – such as comments and notifications.

Facebook adds new features to Instant Articles, Twitter rolls out a new research data hub

Facebook Instant Articles. Facebook has rolled out updates to Instant Articles for publishers, including a new recirculation and navigation surface, smarter CTA and ad placements, and support for Facebook Stories. Instant Articles first opened to publishers in 2016 to enable content to load quickly and be monetized within Facebook. The company said Monday that Instant Articles have 3x faster load times and 30% more time spent than mobile web articles.

Twitter launches a research hub. Twitter has launched a new hub for academic researchers in an effort to provide more access to information and support around its APIs. The new page, dubbed “Twitter data for academic researchers,” is part of Twitter’s response based on feedback from the research community. The hub includes links to apply for a developer account to access Twitter’s different APIs with additional tools for researchers that cover data integration, analysis and hosting.

Snap buys the AI company it partnered with on Cameos, Facebook launches deepfake challenge

Snap acquires AI factory. Snapchat’s parent company, Snap, has quietly scooped up AI Factory – the computer vision startup that Snapchat collaborated with on its recently launched ‘Cameos’ video mode, according to a new report from TechCrunch. Snap confirmed the news but has not provided further comment on the financial terms, though Snap is believed to have closed the deal for around $166 million. While the details are still fuzzy, it’s likely that AI Factory will have a hand in expanding Snap’s AI investment with more interactive features and creative tools for Snapchat users. Stay tuned, marketers. 

Facebook’s deepfake detection challenge. In mid-December, Facebook launched a Deepfake Detection Challenge (DFDC) – an open initiative aimed at accelerating the development of new technologies for detecting deepfakes and manipulated media. In partnership with leaders in the industry and in academia, Facebook launched the challenge at the Conference on Neural Information Processing Systems (NeurIPS), providing entrants with a unique data set of 100,000-plus videos specially created to aid research on deepfakes. The goal of the challenge is to spur researchers around the world to build innovative new technologies that can help detect deepfakes and manipulated media, and results will be scored for effectiveness. Facebook is offering $500,000 as a first-place prize. The challenge is ongoing through March 2020. 

YouTube creators see drop in subscriber count. YouTube has confirmed that beginning around December 25 – 26, some YouTube Studio creators experienced a significant drop in subscribers and view counts, with one creator claiming 50% fewer views than average during the December timeframe. The video platform has since resolved the issue, citing a YouTube Analytics bug. Now, the subscriber count reflected in the YouTube Studio (on snapshot cards, reports, etc) has been corrected. According to YouTube, only the YouTube Studio dashboard was impacted by the issue, while channel pages remained unaffected.

About The Author

Taylor Peterson is Third Door Media’s Deputy Editor, managing industry-leading coverage that informs and inspires marketers. Based in New York, Taylor brings marketing expertise grounded in creative production and agency advertising for global brands. Taylor’s editorial focus blends digital marketing and creative strategy with topics like campaign management, emerging formats, and display advertising.


Last year saw continued growth and innovation in social media opportunities for brands with new features, content formats and dynamic creative capabilities designed to bring brands closer to their customers. For social marketers, a successful strategy involves tapping into audiences in compelling ways that can grab attention in passing moments.

Facebook and Snapchat launched new immersive formats, including augmented reality. Pinterest and Instagram ramped up shoppable post capabilities. Machine learning powers an ever-growing slate of platform functions. Here’s the rundown of top features and formats social marketers should explore in 2020.


Groups. Over the last few years, Facebook has been pushing hard to beef up its community-driven Groups feature, opening it to brands and publishers in 2017 along with a suite of settings and tools specifically for Group creators. In July 2018, Facebook rolled out Watch Party for all Groups, a video feature that allows multiple users to watch and comment on the same video in a Facebook Group simultaneously. The following month, Facebook opened up its ads pixel to a limited number of Groups, and said it was planning a broader roll-out in the coming weeks.

It didn’t come as a surprise when, in early 2019, Facebook announced that it would make “Groups” a central focus of its platform. After overhauling the Groups tab with a redesign and a new slate of features, Facebook Groups saw tremendous growth in 2019, with users able to discover more relevant content, more easily. The Groups tab now shows a personalized feed of activity across all of your groups, and the discovery tool touts improved recommendations to let users quickly find groups of interest.

Facebook Groups offer an opportunity for brands willing to put the time and effort into building out a community via the platform. Not only do Facebook Groups provide direct engagement with followers, but they give brands using them more exposure on the app with posts and discussions from Groups visible in News Feed.

Messenger ads. Facebook originally launched Messenger ads in 2016, but the ad product has since undergone a series of upgrades and improvements that will make it a valuable channel in 2020. Messenger ads work like ads across other Facebook platforms to automatically deliver ads to the placement most likely to drive campaign results at the lowest cost. The best part? Advertisers can use the same creative for Messenger that’s already being used for Facebook and Instagram. Audiences will see these ads in the Chats tab in their Messenger app. When they tap on an ad, they’ll be sent to a detailed view within Messenger with a call-to-action that will take them to the destination you chose during ads creation—whether that’s your site, app, or a conversation with your business.

Click to Messenger ads. In October, Facebook released a new feature that allows advertisers to use Stories ads to start conversations in Messenger. Users can swipe up on Stories ads that have a “Send Message” call to action to start a conversation with the business in Messenger without leaving the app they’re in. And for businesses with multiple Facebook apps connected to the Messenger platform, the company has made it easier to select which app they want to use for their Click to Messenger ads. As social platforms and usage patterns trend toward a more personalized messaging experience, brands focused on one-on-one communication with consumers will have an advantage over the competition and a better chance at attracting a loyal following.

Instant Experience ads. Formerly known as Canvas ads, these full-screen takeover ads load instantly and are mobile-optimized, designed to capture the full attention of the audience. Within Instant Experience, users can watch engaging videos and photos, swipe through carousels, tilt to pan, and explore lifestyle images with tagged products all in a single ad experience. Facebook last year also added new metrics for Instant Experiences in Ads Manager: Instant Experiences Clicks to Open, Instant Experiences Clicks to Start and Instant Experiences Outbound Clicks. The new metrics give advertisers more insight into the key “drop-off” points within an ad and are available to campaigns implementing Instant Experiences and Lead Ads.

Dynamic ad formats. Dynamic Ads, launched in 2019, are Facebook’s machine-learning ad unit that delivers a personalized version of the ad to everyone who sees it, based on which ad types they are most likely to respond to. The format is available for any campaign that uses objectives for catalog sales, traffic, and conversions. According to a Facebook test, Dynamic Ads delivered an average of 34% improvement in incremental ROAS, 10% improvement in lift, and 6% lower cost per incremental purchase compared to carousel-only ads.

Creator Studio. Facebook originally launched the Creator Studio in 2018 as a centralized hub for publishers and creators managing video content. In 2019, the tool underwent a series of updates and improvements to help support brand and partnered monetization efforts. Enhancements to the Creator Studio included a dedicated Monetization Overview section, new audience and retention insights, and the ability to manage Instagram posts and IGTV from within the Creator Studio. The tool itself gives marketers a one-stop-shop to post, schedule posts, manage, monetize, and measure content across Facebook Pages and Instagram accounts. Page admins can schedule video content on Instagram up to six months in advance via the Creator Studio, and Facebook reports it is working on new drafting features for videos published to the Instagram Feed and IGTV. It’s likely that the Creator Studio will continue rolling out updates to support individuals and brands managing video content across Facebook properties.


Story ads. It’s been more than two years since Instagram first rolled out Story Ads, and the ad format has been a big win for the company, with Facebook CEO Mark Zuckerberg claiming that more than three million advertisers were running Story Ads across Instagram, Facebook and Messenger. And, according to a report from Kenshoo, Instagram Story Ads accounted for nearly 20% of ad spend on the platform during the second quarter of 2019. It’s no surprise Story Ads are performing well — Instagram reports 500 million accounts are using Stories on its platform. One-third of the most viewed Stories come from businesses, while 1 in 5 Stories gets a direct message from viewers. In October 2019, Instagram confirmed it was experimenting with increasing the ad load in Stories, not surprising considering the activity Stories are generating.

Shoppable posts. Instagram first debuted its Checkout feature in March 2019, giving online retailers a more seamless way to connect with customers. The feature allows users to make purchases from in-feed or Story content without leaving the app. It works with Shopping on Instagram, enabling businesses to tag up to five products per image (or twenty products per carousel). The tags contain product details and pricing, giving shoppers a smoother path to purchase, and brands the ability to creatively showcase shoppable products. To create shoppable posts, businesses will need to create a Facebook Shop account to link to Instagram.

IGTV. In 2018, Instagram launched IGTV as a hub for watching and creating long-form video content, with a direct connection to the Instagram platform. Unlike Instagram Stories, which disappear after 24 hours, IGTV content occupies a permanent home on the platform, offering the ability to record videos up to an hour long. IGTV videos are largely vertical – and watching them feels similar to watching a Stories feed, giving video creators more opportunities to capture engaged audiences.

While there is a standalone app for IGTV, the majority of users access it on Instagram through either the IGTV tab or the “Explore” tab, which more than half a billion users visit every month, according to Instagram.


Hide replies to tweets. In November, Twitter began testing the ability for users to hide tweet replies – an effort to provide users with more control over managing conversations. Users can opt to hide replies to the tweets they create. However, users can still see and engage with hidden replies by selecting the “View hidden replies” option in the tweet’s dropdown. For brands, the ability to hide tweet replies could help give them more control over the context of their engagements – such as weeding out spam replies. On the other hand, hiding all replies could have a negative impact on users who rely on replies to find out more information about the product or topic discussed.

Revival of Lists. The longstanding Twitter feature underwent a series of upgrades in 2019, including a new Lists look (courtesy of the platform’s redesign in July), the option to follow specific topics as Lists, and the addition of a shortcut tab for Lists on the mobile app. Twitter said it plans to continue improving Lists in 2020 – an indication that the feature is becoming a core component of Twitter’s social offering. For marketers, the revamped Lists feature offers the ability to build and engage with a curated feed of content, created from specific topics or interests. Brands on Twitter can use Lists to curate community-driven conversations, keep tabs on competitors and stay up-to-date on industry trends.

Video focus. In April 2018, Twitter attributed more than half of its ad revenue to video ads. In 2019, Twitter reported that video continues to be its fastest-growing ad format. Twitter rolled out a new video ad bidding option in August, giving advertisers the option to run video ads up to 15 seconds long and pay only for ads viewed for a full six seconds with at least 50% of pixels in view. Twitter calls it a “flexible option for advertisers who care about the completed view metric, but are ready to… develop short-form assets optimized for in-feed viewing.”

The platform has also made sure creators and publishers are able to maximize video efforts with new tools and offerings to help drive performance. The “Timing is Everything” tool, launched in March, shows an aggregate view of when users are watching an account’s videos. In April, Twitter announced exclusive media partnerships with a handful of news and entertainment organizations (Univision, MTV, Wall Street Journal – among others) to bring more video content to the platform and attract video advertisers. The media partners were hard at work on Twitter in 2019, creating video content for news, tech, politics, music and sports – all aimed at appealing to a wide spectrum of brands.


Augmented reality (AR). Snap has been an AR leader among social platforms, getting marketers to consider AR in their strategies. At its annual partner summit in April, Snapchat rolled out significant creative updates to support new AR experiences. The company introduced enhanced AR capabilities, including dynamic scanning, improved movement tracking, interactive templates via Lens Studio, landmark manipulation, and object scanning. By October 2019, the company reported that its daily active users (DAU) interact with AR features nearly 30 times every day, on average.

If that isn’t enough to pique your interest in the platform’s AR capabilities, Snapchat has acquired AI Factory – the computer vision startup that Snapchat collaborated with on its recently launched ‘Cameos’ video mode. It’s likely AI Factory technology will be used in developing more interactive features and creative tools for Snapchat users and marketers.

Paramount Pictures recently created an AR-driven UGC campaign with Snapchat’s Cameos feature. As AR becomes more established, first-mover brands will be a step ahead in driving deeper engagements with audiences.

Dynamic ads. As with the dynamic ad capabilities of Facebook, Google, and Pinterest, Snapchat’s Dynamic Ads option reduces the amount of time and effort it takes to create and maintain product ads. In October 2019, Snapchat began testing the dynamic format, which automatically creates and updates product ads to run on the app. Dynamic Ads come in a wide array of templates, and retail, e-commerce and DTC brands can upload their product catalogs to the platform. As product details such as price or availability are updated, Dynamic Ads adjust accordingly. Snapchat’s Dynamic Ads give the platform room to elbow its way into social commerce ad budgets that are largely allocated to Facebook and Instagram. For commerce advertisers targeting younger audiences, Snapchat’s dynamic ads are particularly appealing, as Snapchat reports reaching more than 75% of Gen Z and Millennial users aged 13 to 34.


Original content. YouTube lifted its Premium paywall on original content in May, making YouTube Original series, movies, and live events free with ads to all viewers. Soon after, the Interactive Advertising Bureau (IAB) reported that more than half of digital video ad spend in 2019 would come from video ads featuring original content. While video is, without question, a highly effective format, advertisers should also embrace the idea of creating original video content that caters to the varied interests of their brand’s audience. YouTube aside, the industry’s continued investment in original content proves that it can do what stock or mass-produced content can’t – which is deliver deeper value for engaged consumers interested in the brand’s offering.

15-second non-skippable ads. YouTube made its 15-second non-skippable video ads available to all advertisers at the beginning of 2019. Before that, the ad unit was only open to advertisers buying through the YouTube reservation process and its premium Google Preferred network. In addition to the classic TrueView ad unit, the option to buy traditional, non-skippable ads can be valuable for branding and reach and for advertisers who want to re-purpose existing creative designed for television and other non-skippable environments. With more viewers turning to the big screen TV to stream content through YouTube and other services, the non-skippable ad unit gives advertisers a simple path to reach viewers in the living room.

Intelligent ad tools. In an effort to make advertising easier and more accessible to a broader range of advertisers, YouTube last year rolled out a range of machine learning-powered tools. Google made Discovery ad inventory (the native ads that appear in Google feed environments) available to advertisers on YouTube. With the Discovery ad unit, advertisers can upload individual creative assets, which Google then uses to churn out ad combinations based on the desired outcome.

YouTube also put machine learning to the test with an intelligent video ad editing tool dubbed the Bumper Machine. The tool automatically creates six-second bumper ads from existing video ads that run 90 seconds or less. It identifies well-structured clips in longer videos and converts them into multiple six-second video ads. For advertisers with limited resources, YouTube’s continued investment in intuitive, self-serve ad tools makes it easier to create and deploy visual campaigns across the platform.


Visual search. Pinterest has been on a slow journey to bring more e-commerce capabilities to its platform, but its new deep learning-powered visual search tool, launched in June, gives brands a major opportunity to be seen by consumers on their discovery journey. Dubbed “Complete The Look,” the tool makes it possible for the platform to recommend fashion and home decor products based on the context and attributes of all objects within an image a user searches for or saves. As a result, retail brands can gain more exposure on Pinterest as users browse for looks to aesthetically complement their initial search.

Catalogs. In March, Pinterest announced a new way to showcase products with the launch of Catalogs, allowing brands to upload multiple product images, organize the products by category and turn the images into dynamic Product Pins. ‘Catalogs’ makes it possible for merchants to create a full product catalog on the site as long as they have claimed a domain on Pinterest. Catalogs can then be used to generate product Pins in bulk and organize the Pins by product groups. After the launch of Catalogs, Pinterest added dedicated sections for products from specific retailers, giving shoppers the opportunity to see more from certain brands.

Self-serve shopping ads. Pinterest made Shopping Ads available via Ads Manager in August, giving advertisers the ability to launch Shopping campaigns via the self-serve interface. With shopping ads, marketers are also able to tag products from ‘Shop the Look‘ posts to create shoppable Product Pins. The company said the ad format was inspired by its organic Shop the Look pins, which rolled out in 2018 to give businesses the ability to tag up to 25 items in a single image.


E-commerce capabilities. The short-form video app started testing shoppable video posts in 2019, making it possible for influencers on the platform to place social commerce URLs within posts. TikTok officially confirmed the test to Adweek in November, but did not disclose details on when it would receive a wider roll-out. In 2020, advertisers should be aware that e-commerce opportunities will be coming to TikTok, and should consider if the platform’s short-lived content model could be valuable to the brand’s retail consumers. TikTok entering the social commerce space proves the platform is entertaining more ways to appeal to the commercial interests of creators and potential advertisers. With more than 500 million global users, TikTok is still largely an untapped e-commerce market of Gen Z shoppers – but likely won’t be for long.

Ephemeral video content. On the note of TikTok’s content model: Creating snackable, short-form content doesn’t mean recycling campaign video assets and uploading them to the platform. Where traditional video content uses a heavy production hand, ephemeral forms of video don’t have that requirement – meaning brands can readily create low-budget, high-value content with an entertainment angle for younger audiences. While TikTok isn’t the first app to make a successful business model from short-lived video creations, the app has seen enormous growth over the last year – exceeding more than 1.5 billion downloads globally. That makes TikTok the third most downloaded non-gaming app of the year, behind WhatsApp (707 million installs) and Messenger (636 million). It ranks just above Facebook, which claims 587 million downloads, and Instagram, with 376 million. As TikTok continues to grow its presence in the U.S. market, social media marketers should be prepared for the next wave of ephemeral content.

More ways to share video. TikTok released its first software development kit (SDK) in November, allowing users to upload video content to the TikTok platform through third-party apps. The “Share to TikTok” SDK is the first kit the company introduced in its TikTok for Developers program. The SDK includes tools to help third-party apps and developers integrate with TikTok, allowing users to simply “Share to TikTok” from the editing panel of their favorite apps. TikTok named Adobe Premiere Rush as an initial integration partner, giving users the ability to edit using Adobe’s rich features and share instantly to TikTok. For marketers, the SDK brings more engagement to the video-sharing platform with a more accessible, one-click posting experience.

About The Author

Taylor Peterson is Third Door Media’s Deputy Editor, managing industry-leading coverage that informs and inspires marketers. Based in New York, Taylor brings marketing expertise grounded in creative production and agency advertising for global brands. Taylor’s editorial focus blends digital marketing and creative strategy with topics like campaign management, emerging formats, and display advertising.


The issue of customer retention comes up in nearly every conversation with colleagues in a service business or software company. As companies aspire to transition to an ongoing customer relationship and recurring revenue model (versus traditional transactional relationships with customers), retaining high-value customers is more important than ever.

After all, depending on what industry you’re in, acquiring a new customer can cost your company up to five times more than retaining an existing one. Moreover, increasing customer retention rates by just 5% can increase profits by anywhere from 25% to 95%. These factors have helped give rise to major investments in optimizing customer journeys and customer base marketing, as well as establishing new roles such as “customer success managers.” With so much at stake and so much focus on retention, why, then, does the average American business have a yearly churn rate of 15%? Why are so many companies losing customers?

Based on my experience, a major driver is that companies are often targeting the wrong customers to begin with. Year after year, we see companies set themselves up for expensive churn issues by trying to serve anyone and everyone who they can convince to choose their product or service, rather than focusing on their most valuable target customers. In my experience, this type of “fire, then aim” approach is one of the biggest drivers of churn, negatively impacting Customer Lifetime Value (CLV) and ROI for sales and marketing investments.

My churn story

My first lesson around this came earlier in my career when I was vice president and general manager of T-Mobile’s emerging devices business. At the time, T-Mobile had just launched its 4G high-speed mobile data network. Competitors had gotten a head start in this market, so we were eager to capture a share of this quickly growing space.

My team launched the company’s first 4G capable devices including connected tablets and mobile hotspots. After a few stumbles and months of hard work, we found that we had a winner on our hands. Our sales team across the country rallied around the new product line to pitch to customers, and sales started ramping up – with high double-digit growth each month.

While initial sales were strong, we soon discovered we had a serious issue. Our churn rate, or the percentage of customers that left us, was over 10% per month! At that rate, we would need to replace our entire customer base each year just to tread water. Further, our business model, when factoring in our projected customer acquisition and operating costs, required we retain customers for much longer to maintain positive CLV. We had a serious problem.
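The arithmetic behind that alarm is worth making explicit. The sketch below, using a hypothetical 100,000-customer base (a made-up figure for illustration, not T-Mobile's actual numbers), shows how a 10% monthly churn rate compounds over a year, and why keeping the base flat means re-acquiring more customers annually than you started with:

```python
def annual_retention(monthly_churn: float) -> float:
    """Fraction of the original customers still present after 12 months."""
    return (1 - monthly_churn) ** 12

def customers_lost_per_year(base: int, monthly_churn: float) -> int:
    """Cumulative customers lost over 12 months with no backfilling."""
    remaining = float(base)
    lost = 0.0
    for _ in range(12):
        churned = remaining * monthly_churn
        lost += churned
        remaining -= churned
    return round(lost)

def annual_replacements(base: int, monthly_churn: float) -> int:
    """New customers needed over 12 months to keep the base flat,
    assuming each month's churn is immediately backfilled."""
    return round(base * monthly_churn * 12)

print(f"{annual_retention(0.10):.0%} of customers remain after a year")
print(customers_lost_per_year(100_000, 0.10), "lost without backfilling")
print(annual_replacements(100_000, 0.10), "replacements needed to tread water")
```

Only about 28% of the original cohort survives 12 months of 10% monthly churn, and backfilling the losses each month requires roughly 120% of the starting base in new acquisitions per year, which is exactly the "replace our entire customer base" problem described above.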

As we got to work on closing the gap, we found several issues. First and foremost, we were targeting the wrong customers. Many of our customers, which included small business owners and entrepreneurs, had purchased our 4G mobile hotspots as a home or small office broadband replacement, rather than for their intended use as a mobile solution allowing connectivity on the go. We also had the wrong incentives in place. Sales compensation was heavily weighted toward customer acquisition, not retention.

By identifying the right customers and refocusing our efforts, we were able to turn the corner. We revamped our marketing materials, sales and support training, and even tweaked our compensation incentives to ensure that we were positioning the solution to the right customers for the right purpose. Overall, our sales slowed, but churn rate also dramatically decreased. By refocusing our brand strategy, we were more profitable and driving more value for the company – with fewer customers.

The customer retention challenge with T-Mobile is not an uncommon story. In my experience as a consultant, customer churn – driven by a lack of definition or focus on target customers – is a widespread problem. It also explains why on average companies waste more than 25% of their marketing spend on the wrong strategies and channels.

With that overview in mind, here are three key areas companies need to focus on to avoid customer churn:

1. Put your Most Valuable Customers at the center of your brand strategy

Regardless of what industry you work in, serving and over-serving your most valuable customers (MVCs) is the single most important factor in any company’s success. Your MVCs should be the center of gravity for your entire brand strategy – from the products and services you bring to market to your marketing and sales efforts to reach them.

Often when there’s a spike in customer churn, we see companies react and look for a culprit. Comments you may hear include: “The product team isn’t doing enough to improve CX.” “The marketing team isn’t doing enough to retain customers or generate quality leads for sales,” etc. While reacting or looking for a culprit to blame is human nature, this can often lead to short-sighted and siloed efforts by the product, sales and marketing teams.

To be successful in today’s market, everyone across the organization needs to align around a common definition of who the target customer is, what their needs are and what motivates them. You need a single version of the truth around which customers you’re targeting as well as which customers are bad for business. Key questions marketing leaders need to ask:

  1. Have we done the upfront segmentation work needed to truly know who our MVCs are? 
  2. Do we have the right tools in place, customer insights and data to allow the product, marketing and sales teams to identify, listen to and effectively target those customers? 
  3. What type of customers should we stay away from?  

Leading brands develop a 360-degree view of their MVCs, who they are and how, where and what they buy and why. Identifying your most valuable customers also forces you to make important choices and prioritizations across the business – from your overarching brand strategy to product development, marketing, pricing and customer support.

2. Size your product and experience to the customer

Whether your company is a service provider with a recurring revenue model, or one with a large portfolio of products, you need to know exactly what features, experiences, pricing and messaging are going to resonate with your MVCs or priority segments. Day in and day out, you need to put the voice and experience of your target customers (not all customers) at the center of your product development and CX efforts.

This type of design thinking is absolutely critical for any company to succeed long term. It’s also vital that marketing has a strategic seat at the table with the product development team to apply market insights and feedback from target customers to shape future products and CX improvements.

Resist acquiring customers using steep discounts, or getting them in the door by offering a “minimum viable product” with a reduced feature set that will likely not fit their needs. Rather, look to “right-fit” the customer with the product or service option that best meets their needs. That approach is far more likely to build satisfaction and loyalty over time.

3. Embrace experimentation

Customers today expect brands to anticipate their needs in the moment – from their personal shopping preferences to surfacing the right content and experience based on how, where and when they engage. Doing this well requires a cross-company effort to apply both better data and analytics and qualitative insights and voice of the customer feedback to anticipate what your high-value customers want, and in turn surprise and delight them, improving acquisition and retention.

Through experimentation and applying better data and analytics, brands are able to implement strategies to better retain and acquire their MVCs. Key questions leaders need to explore include:

  1. What are low-cost or low-risk ways to encourage trial of a product or service to ensure the right “fit” before a customer makes a purchase decision? 
  2. How do we design our marketing and sales process, onboarding, and overall user experience to ensure the right customer is matched with the right product or service? 
  3. How can we better tailor our brand positioning, messaging and CX to spark interest and loyalty with the right customers, encouraging word of mouth referrals?

Why this all matters

By investing in the right marketing and sales strategy up-front, identifying the right, high-value customers to target, and optimizing for those specific customer journeys, companies can drive far stronger customer growth, retention and brand equity over time. But all of this only works if you’re focused on the right customers in the first place.

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

Jeremy Korst is the President of GBH Insights, a leading marketing strategy, consumer behavior and analytics consultancy. In his role, Jeremy works closely with Fortune 500 brands and CMOs to solve marketing challenges, improve customer experience and create strategies for growth. Prior to GBH, Jeremy held CMO or senior executive roles with Avalara, Microsoft and T-Mobile, among other brands. Jeremy holds an MBA from The Wharton School of Business and lives in the Seattle area.


Group Nine announced Friday it is launching Sparkle Ads, a custom, rich media, e-commerce ad product. The format builds on the capabilities of the Sparkle platform — a mobile-first publishing platform geared toward creating commerce experiences across retailer sites. Group Nine gained the Sparkle platform with the acquisition of POPSUGAR in October.

Why we care

Sparkle Ads is the latest in a trend to ease mobile shopping and support vendor programs as marketers look for new ways to shorten the path from product discovery to purchase. With Sparkle Ads, brands can create shoppable ads that feature the products they sell on Amazon and Walmart. Media companies, social and marketplace platforms have been rapidly developing shoppable media formats and on-platform checkout to attract commerce marketing budgets.

Campbell’s and a global cleaning brand will be the first to test Sparkle Ads on POPSUGAR’s website. The company plans to make the ads available across Group Nine’s other media brands: The Dodo, Thrillist, NowThis and Seeker.

More on the news

  • The Sparkle platform allows brands to facilitate commerce across multiple retailer sites and allows retailers to create customized pages featuring multiple product SKUs and collect purchase intent data.
  • Advertisers will eventually be able to run campaigns across other publishers’ websites and the extended list of retailers Group Nine has partnerships with (Walmart, Amazon, Kohl’s, The Gap, etc.).
  • Group Nine touts itself as the leading mobile video publisher in the U.S. based on a July 2019 report from Nielsen and says it reaches 70% of 20- to 29-year-old Americans every month.

Don’t miss the two-day Digital Commerce Marketing track at SMX West in San Jose, CA, Feb. 19 and 20.

About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


The MarTech Conference is for forward-thinking, senior-level marketers who are convinced that modern marketing is agile, customer-centric, decentralized, process-driven, people-powered… and technology-enabled.

If your organization is moving from one in which marketing is assisted by martech to one where martech is absorbed in marketing, and if you agree that martech is marketing — this conference is for you.

Join us April 15-17 in San Jose to push the limits of marketing execution, process, and people. The all-new agenda is live and ready for you to explore! You’ll unlock creative, vendor-agnostic solutions and actionable tactics for overcoming everyday marketing challenges, like…

  • Structuring marketing teams and stacks to ensure organizational success
  • Benchmarking multi-channel stacks to thrive in 2020 and beyond
  • Modernizing social media martech to unlock innovation opportunities
  • Fostering unified enterprise workflow to enable more efficient marketing
  • Leveraging APIs to help meet customer expectations

…and dozens more. Each editorial track — Marketing Operations, Marketing Data, Marketing Technology, and Marketing Orchestration — tackles a pillar of your profession; each session delivers practical advice and insights you can implement immediately.

Keynotes from digital analyst Brian Solis, Salesforce Principal of Marketing Insights Mathew Sweezey, and communications expert Nancy Duarte will inspire, inform, and empower. You’ll learn how to leverage AI to improve customer experiences, motivate buyers in an age of infinite media, and use data to tell compelling, humanizing narratives.

Don’t miss this opportunity to learn first-hand from some of the world’s most successful brands, including Wells Fargo, Lyft, DocuSign, Salesforce, CaringBridge, GrubHub, Vonage, Gartner, Twitter, and many more.

Like what you see? Check out the complete agenda!

Ready to register? Choose your pass based on goals and budget:

  • All Access: Complete access to all conference sessions, keynotes, networking events, exhibiting martech companies, sponsor presentations, amenities, and more. Book today and enjoy $450 off on-site rates.
  • All Access Workshop Combo (best value!): Dive deeper and learn more with a half-day, pre-conference workshop. These expert-led workshops deliver a unique opportunity to train in an immersive, intimate environment. Book today and enjoy $900 off on-site rates! (Workshop-only passes are also available.)
  • Expo: Searching for marketing technology? Focused on growing your network? Pick up a free Expo pass to enjoy unlimited Expo Hall access, full-length Solution Track sessions, sponsor presentations in the Discover MarTech Theater, downloadable speaker presentations, refreshments, WiFi, and more.
  • Team Rates: MarTech is a fabulous option for your 2020 company outing. Attend as a group for a unique team-building experience and lock in extra savings while you’re at it!

Alpha rates expire February 15… book now to enjoy these fantastic savings. Once they’re gone, they’re gone.

See you in San Jose 🙂

Psst… Don’t miss your chance to earn worldwide recognition from the MarTech community! Submit your marketing technology stack to the 2020 Stackie Awards to join the ranks of past winners including Cisco, Marin Software, Merkle, and more.

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

Scott Brinker is the conference chair of the MarTech® Conference, a vendor-agnostic marketing technology conference and trade show series produced by MarTech Today’s parent company, Third Door Media. The MarTech event grew out of Brinker’s blog, chiefmartec.com, which has chronicled the rise of marketing technology and its changing marketing strategy, management and culture since 2008. In addition to his work on MarTech, Scott serves as the VP platform ecosystem at HubSpot. Previously, he was the co-founder and CTO of ion interactive.


Yell Business has launched a new tool that shows graphic designers how much better off they would be working in 25 different countries across the globe. The research takes into account the average salary and cost of living in countries such as China, South Africa, France and Brazil to calculate equivalent earnings worldwide.

China, Saudi Arabia and Switzerland topped the chart, with the UK coming in at number 21 out of the top 25 countries. China and Saudi Arabia both have a lower cost of living than the UK, meaning the average salaries there (£27,897 in China and £26,874 in Saudi Arabia, compared with £22,438 in the UK) stretch considerably further. Switzerland still came third despite being significantly more expensive to live in than the UK, as the average salary for a graphic designer in the country is a staggering £63,670.
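For readers curious how an "equivalent earnings" ranking like this might be computed, here is a minimal sketch of a purchasing-power adjustment. The salary figures come from the article; the cost-of-living indices are hypothetical placeholders (UK = 100), and this simple scaling is an assumed method for illustration, not necessarily Yell's actual formula:

```python
# Average graphic designer salaries cited in the article (GBP).
salaries_gbp = {"UK": 22_438, "China": 27_897, "Switzerland": 63_670}

# Hypothetical cost-of-living indices, normalized so UK = 100.
# These are illustrative placeholders, not real Numbeo figures.
cost_of_living_index = {"UK": 100, "China": 45, "Switzerland": 160}

def adjusted_earnings(country: str) -> float:
    """Salary scaled to UK purchasing power: a lower local cost of
    living inflates the effective value of the nominal salary."""
    return salaries_gbp[country] * 100 / cost_of_living_index[country]

for country in salaries_gbp:
    print(f"{country}: £{adjusted_earnings(country):,.0f} UK-equivalent")
```

Under this kind of adjustment, a modest nominal salary in a cheap country (China) can outrank a much larger one in an expensive country (Switzerland), which mirrors the ranking the research describes.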

Japan, the US, Australia and France all ranked highly in the table (which you can see below), despite having a higher cost of living than the UK. Although the chart has only been compiled using average salary data, it sheds light on the value given to various roles and how they differ between countries.

Yell CEO Claire Miles said in a statement about the project: “Our aim is to help small business owners from across the UK make the most out of their choice of trade. By looking at how equivalent earnings from around the world differ and how the UK stacks up against them, we aim to arm those who may be thinking of relocating with as much insight as possible as to the salaries and expenses they can expect across the world.”

The research, which zoomed in on 28 job roles from accounting to bricklaying, was compiled using world salary data from nine sources, including Glassdoor, Salary Expert and Erieri. This was then cross-referenced with cost of living data from Numbeo’s November 2019 figures.


In their control version (A), we can see a pricing plan layout with the core features listed once and then referenced under each plan. In the variation (B), however, the same information and labels were reorganized and repeated in three self-contained tiles.

One could argue that the A version makes comparison a little easier. This is even more evident in the mobile views where the A version had a floating plan selector that acted as a reference point. In other words, when users scroll on mobile, they would always see comparable information between all three plans (without needing to scroll up and down).


Some tools allow us to move pixels and some affect the bigger picture.

Uri Paz

In his lecture last year, “The Tools We Use: Challenging Dogma in the Design Process,” Emmet Connolly, Director of Product Design at Intercom, broadened the definition of tools.

He gave the example of a pencil: a tool that, in addition to extending our physical abilities, increases our intellectual capabilities.

To illustrate his point, he discussed solving a complicated math problem: the square root of 200 times 57. Although we wouldn’t be able to do it in our heads, with wood pulp, graphite and a piece of paper we could probably work out the answer pretty quickly.

White pencil on white paper

Photo by Yoann Siloine on Unsplash

To solve a problem or achieve a specific goal, we need to use different kinds of tools: tools that help us with the building itself and tools that allow us to think and make decisions. A KPI is one of the latter.

A Key Performance Indicator (KPI) is a measurable value that shows how effectively a company is achieving its key business objectives.

Success is based on the question “What is the goal?”

The goal is the outcome that we want to achieve.

The measure is based on the question “How many?”

By asking “How many?” we create a basis for comparison, i.e. whether or not the choices we made serve our goal.

1. A KPI is a tool to communicate

Common language within the team and between departments:

When we work with colleagues, we have to make sure that we measure success in a similar way. We want to avoid the situation where one team member says “what a failure” and another says “what a great success.” Everyone should agree on, or at least understand, the KPI: when we are successful and when we’re not.

At the macro level, KPI alignment allows employees from different teams, such as UX, Development, Support, Marketing and Finance to speak in a common language and step towards the goal.

People are building a house

Photo by Randy Fath on Unsplash

Common language with management:

The best way to convey the importance of design to management is to start talking numbers as these allow us to compare results before and after.

The language of numbers is familiar to people who are planning projects or responsible for business operations. Using these metrics will make our communication more objective.

2. A KPI is a tool to spot problems

Just as a KPI can show us improvement, it can also indicate a failure. For example, what are the critical points in a particular funnel? Indeed, one of the UX designer’s roles is to find and solve the right problems.

Working without a KPI can create wrong assumptions that may make our house of cards collapse.

UX Designers can find a source of inspiration, motivation and much more in KPIs.

3. A KPI is a tool to turn abstract into concrete

A strategy is abstract by definition, but measurable values give it a shape and allow our minds to grasp it more easily.

Each strategy has its relevant KPIs, for example: growth in revenue, the number of customers retained, time saved for users, a decrease in help desk calls, a reduction in user errors, etc.

The right KPI creates prioritization that will bring more clarity and focus.

4. A KPI is a tool to connect the business goals and the user experience

“To understand products, it is not enough to understand design or technology: it is critical to understand business.” — Donald A. Norman

A product’s success measure creates a system of interactions between the interests of the business and the interests of the user experience.

Sometimes the business goal aligns with intuitive user behavior, and sometimes with behavior that differs from what the user expects.

A good designer is measured by producing a good user experience that serves the business goal.

5. A KPI is a tool to make an impact

How can KPIs help us be influential?

  • They make the impact of design decisions more objective and prevent doubt about design’s value.
  • They generate motivation: “we’re going to move that needle.”
  • They help us take a more proactive position in helping organizations reach strategic goals.

Measuring success with a measurable value doesn’t have to be just about business.

During the past 6 months, I’ve been training to tone my body. The reason I enjoy the process and persist with the training program is that I see results and progress. Reaching my goal includes the following steps:

1. Defining Goal

The goal is the starting point for defining appropriate KPIs — but only if the goal is clear, specific, achievable, relevant and measurable.

Example: My goal is to tone my body, which means to reduce body fat and gain muscle mass.

2. Defining Indicators

For each goal, we have to define Indicators that will allow us to monitor and measure success.

Example: My indicators are: body weight, body fat, arm circumference and waist circumference.

3. Defining Targets

For each KPI, we have to set clear targets for successful performance.

Example: My target is that in four months my body weight will be 73kg, my body fat will be 13%, my arm circumference will be 35 cm and my waist circumference will be 82 cm.

4. Defining Data Collection Frequency

Some KPIs require data to be collected continuously; others hourly, daily, monthly, quarterly or annually.

Example: Each month I weigh myself and make changes in my training and nutrition program accordingly.

5. Defining Expiration/Revision Date

The KPI should include an expiration or revision date.

Example: In four months (10th of May 2020), I’ll update my targets.

6. Defining Way to Communicate the KPIs

We need to make our results obvious, therefore we have to think of a way to communicate our KPIs clearly.

Example: I made the following table:

Table containing the following indicators: body weight, body fat, arm circumference and waist circumference.

There are ups and downs, but overall, tracking the numerical value gives me the motivation to practice hard and eat right to get to the next weighing in better condition.

“Dreams are for the dreamers. Goals are for achievers.” — Arnold Schwarzenegger

A pile of weights in the gym.

Photo by Jesper Aggergaard on Unsplash

“You don’t go to a restaurant and order a meal because you want to have a shit.” — Banksy

Indicators can show us only the fact that a person went to a restaurant and had a meal; they can’t tell us about their experience. Through quantitative research we compare numbers that can be measured, but we cannot rely on numbers alone because they can’t answer everything.

For example, when we check a funnel, we mark each action in the sequence as done ✔️ when the user executes it, which is considered a success. However, that binary 0-or-1 metric can show success without necessarily indicating a good user experience.

Actually, KPIs don’t do anything to improve performance. They only tell us if we’re achieving our target.

To get specific information we have to talk with users — what is the user’s background, what they choose to say first, what they choose to emphasize or not, what is their work process…

Quantitative research gives us a basis, qualitative research gives us depth.

We need to combine quantitative and qualitative metrics to validate the data in the present and in addition, to predict the future.

“Tools help us do our work better.” — Steve Jobs

The dialogue of designers often includes talk about different kinds of tools. Some tools allow us to move pixels and some affect the bigger picture.

This tweet sharpens the question — how do we as a UX community think and talk about tools?

While Product Managers realized long ago that good product management cannot come without an understanding of user experience, UX designers need to realize that without tools like KPIs, it is difficult to produce a relevant user experience that serves business goals.

This is no longer just a theory: good design drives organizations to deliver better products and services, and it has been proven repeatedly.

As mediators between the users’ satisfaction and business goals, our toolbox should include tools like KPIs.

This tool is one of the keys for UX designers to move forward and change our reputation. By measuring success, blurred concepts become clearer, principles are reinforced and there is a distinction between desirable and undesirable.

A KPI is a tool for UX designers to communicate, research, understand, criticize and influence.


Climate change is taking its toll on the environment. Temperatures are rising, glaciers are melting, sea levels are rising, and the ranges of fauna and flora are shifting.

According to the Intergovernmental Panel on Climate Change:

“Taken as a whole, the range of published evidence indicates that the net damage costs of climate change are likely to be significant and to increase over time.”

All of these effects, and many others, amount to critical damage induced mainly by human activity in the form of CO2 emissions.

In the midst of it all, the internet is a prominent contributor. The virtual world clearly affects the real world on a massive scale, but how can a UX designer or developer be involved in the overall impact? Take an ecommerce site, for example. As soon as a visitor lands on the website to buy their desired products, their actions initiate a chain of events that contributes to higher emissions.

It’s even been reported that if the internet were a country, it would rank among those with the highest carbon footprints, alongside the USA, China, and India. While the internet connects so many people, it also keeps adding to a growing carbon footprint.

For most global climate change impacts, sustainable practices and resources have been brought in to combat and reduce the damage. Is there a way designers or web design companies worldwide can contribute?

Instead of digging into the solutions first, let’s start by identifying how UX is indirectly contributing to the environmental damage and what you should know about creating green UX projects.

How UX Contributes To Growing CO2 Emissions 

Data Centers And E-Waste

Every activity you carry out on a website or any other online platform requires storage for your projects, files, texts, documents, and other media. You may even contact a company that provides data storage of XYZ GBs. While you may be at ease having saved your information, the data in cloud storage is negatively impacting the environment. According to Amanda Sopkins from Sustainable UX,

“Notifications replace daily newspaper deliveries, emails replace receipts, and virtual clouds replace boxes of photographs. At first, this seems like a natural way to build a more sustainable world. However, rather than leveraging our power to create minimal solutions for hard problems, we gorge ourselves on this seemingly limitless space.” She later adds, “The buildup of all this material has an impact beyond the carbon that is produced by the servers that stores it. It has an impact on us: we lose track of what matters and what we should save.”

With millions of people interacting on the internet daily, it wouldn’t be wrong to say that stored data is hurtling towards ever-higher emissions, even though it’s damage we never see.

Mobiles And Energy Consumption 

Each day, a new smartphone is released. The growing number of these devices serves energy-hungry populations while remaining largely blind to the environment. The process of producing a smartphone and delivering it to the end consumer has a major impact on CO2 emissions. For instance, an iPhone requires several hours to be assembled and travels hundreds or thousands of miles, consuming fuel and energy, before reaching the user.

The case gets worse as mobile screens become larger. The bigger a screen gets, the more materials it needs and the more energy it consumes. How does this account for environmental damage? Every activity done on the phone requires energy and storage; the more data is stored in data centers, the more resources are exploited and the greater the damage.

Ways To Implement Green, Sustainable UX

While we mentioned just two examples of how UX is hurting the environment, we shouldn’t overlook the measures that can scale down the emission pressure. Here are a few ways to do so.

1. Measure Your Carbon Footprint

There’s no point in creating something if you can’t measure it, and when you can’t measure it, it becomes impossible to manage. When creating a sustainable user experience, you need to review the existing research on ongoing web emissions and energy consumption, with the aim of understanding how this information can be used to estimate and cut website pollution.

For this purpose, websitecarbon.com launched a free tool to measure the amount of CO2 a website is emitting. Upon entering the website URL, the tool rolls out the information about how much CO2 your website is emitting, and if it is following a sustainable approach. 

2. Make The Experience Accessible For All 

Among the practices for a sustainable UX, usability is the first and foremost priority, and it can be enhanced through factors like readability and navigation. If the content on the website is light, guides the user expertly, and helps them recognize value, the overall experience becomes more accessible.

The website’s cleanliness also matters. Less clutter and speedy loading mean lower carbon emission. If the content is presented with great clarity and directions, it builds a connection between sustainability and user experience. 

3. Optimize Performance 

Sometimes, websites designed with a user-focused approach have performance issues that have adverse impacts on the user experience. By optimizing the site, you not only reduce data consumption and emissions but also engage the users with satisfaction. 

So, what type of content needs optimization? Text, images, and videos make up your website’s content, and optimizing them means improving your website’s performance and data usage. By using clear, concise text and properly compressed images and videos, your website cuts down on page loading time without losing overall quality.

4. Teach Sustainability Through Practice 

Designing for a better environment pays off when users interact with the design. Users are generally unaware of what a clean, eco-friendly website looks like, even if they are otherwise concerned about the environment.

At this point, it’s your responsibility to drop hints about sustainable design while motivating users to achieve green goals. You need to address what you’re making, how people will use it, and how it reflects your strategy to create a safer environment. Your intent should be evident in your message throughout your website.


About The Author

Rachel Andrew is not only Editor in Chief of Smashing Magazine, but also a web developer, writer and speaker. She is the author of a number of books, including …

In a new series, Rachel Andrew breaks down the CSS Grid Layout specification. This time, we take a look at how to use the grid lines to place items.

In the first article in this series, I took a look at how to create a grid container and the various properties applied to the parent element that make up your grid. Once you have a grid, you have a set of grid lines. In this article, you will learn how to place items against those lines by adding properties to the direct children of the grid container.

We will cover:

  1. The placement properties grid-column-start, grid-column-end, grid-row-start, grid-row-end and their shorthands grid-column and grid-row.
  2. How to use grid-area to place by line number.
  3. How to place items according to line name.
  4. The difference between the implicit and explicit grid when placing items.
  5. Using the span keyword, with a bit of bonus subgrid.
  6. What to watch out for when mixing auto-placed and placed items.

Basic Concepts Of Line-Based Positioning

To place an item on the grid, we set the line on which it starts, then the line that we want it to end on. Therefore, with a five-column, five-row grid, if I want my item to span the second and third column tracks, and the first, second and third row tracks, I would use the following CSS. Remember that we are targeting the line, not the track itself.

.item {
  grid-column-start: 2;
  grid-column-end: 4;
  grid-row-start: 1;
  grid-row-end: 4;
}

This could also be specified as a shorthand: the value before the forward slash is the start line, and the value after is the end line.

.item {
  grid-column: 2 / 4;
  grid-row: 1 / 4;
}

On CodePen you can see the example, and change the lines that the item spans.

See the Pen Grid Lines: placement shorthands by Rachel Andrew (@rachelandrew) on CodePen.

Note that the reason our box background stretches over the entire area is because the initial values of the alignment properties align-self and justify-self are stretch.

If you only need your item to span one track, then you can omit the end line, as the default behavior is that items span one track. We saw this when we auto-placed items in the last article: each item goes into a cell, spanning one column track and one row track. So to cause an item to span from line 2 to 3 you could write:

.item {
  grid-column: 2 / 3;
}

It would also be perfectly correct to miss off the end line:

.item {
  grid-column: 2;
}

The grid-area Shorthand

You can also place an item using grid-area. We’ll encounter this property again in a future article, however, when used with line numbers it can be used to set all four lines.

.item {
  grid-area: 1 / 2 / 4 / 4;
}

The order of those line numbers is grid-row-start, grid-column-start, grid-row-end, grid-column-end. If working in a horizontal language, written left to right (like English), that’s top, left, bottom, right. You may have realized this is the opposite of how we normally specify shorthands such as margin in CSS – these run top, right, bottom, left.

The reason for this is that grid works in the same way no matter which writing mode or direction you are using, and we’ll cover this in detail below. Therefore, setting both starts then both ends makes more sense than mapping the values to the physical dimensions of the screen. I don’t tend to use this property for line-based placement, as I think the two-value shorthands of grid-column and grid-row are more readable when scanning through a stylesheet.
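To make the ordering concrete, a grid-area declaration can always be rewritten as the two-value shorthands (the selectors here are purely illustrative):

```css
/* these two rules place an item identically */
.item-a {
  grid-area: 1 / 2 / 4 / 4; /* row-start / column-start / row-end / column-end */
}

.item-b {
  grid-row: 1 / 4;    /* block dimension: start line / end line */
  grid-column: 2 / 4; /* inline dimension: start line / end line */
}
```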

Lines On The Explicit Grid

I mentioned the explicit versus the implicit grid in my last article. The explicit grid is the grid that you create with the grid-template-columns and grid-template-rows properties. By defining your column and row tracks, you also define lines between those tracks and at the start and end edges of your grid.

Those lines are numbered. The numbering starts from 1 at the start edge in both the block and inline direction. If you are in a horizontal writing mode, with sentences that begin on the left and run towards the right, this means that line 1 in the block direction is at the top of the grid, and line 1 in the inline direction is the left-hand line.

Item is shown in position with the Firefox Grid Inspector highlighting the lines
The item placed on the grid

If you are working in a horizontal RTL language – as you might be if working in Arabic – then line 1 in the block direction is still at the top, but line 1 in the inline direction is on the right.

The item is now placed from the right-hand side of the grid
The same placement with direction: rtl

If you are working in a Vertical Writing Mode, and in the image below I have set writing-mode: vertical-rl, then line 1 will be at the start of the block direction in that writing mode, in this case on the right. Line 1 in the inline direction is at the top.

The entire grid is now rotated 90 degrees
The same placement in writing-mode: vertical-rl

Therefore, grid lines are tied to the writing mode and script direction of the document or component.
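As a minimal sketch of this behavior (class names are assumed for illustration), flipping the inline direction flips which edge line 1 sits on, without changing the placement rule:

```css
.grid {
  display: grid;
  direction: rtl; /* the inline direction now runs right to left */
  grid-template-columns: repeat(5, 1fr);
}

.item {
  /* line 1 is now the right-hand edge, so the item
     starts from the right and spans two column tracks */
  grid-column: 1 / 3;
}
```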

The end line of your explicit grid is number -1 and lines count back in from that point, making line -2 the second from the last line. This means that if you want to span an item across all tracks of the explicit grid you can do so with:

.item {
  grid-column: 1 / -1;
}

Lines On The Implicit Grid

If you have created implicit grid tracks then they also count up from 1. In the example below, I have created an explicit grid for columns, however, row tracks have been created in the implicit grid, where I am using grid-auto-rows to size these to 5em.

The item with a class of placed has been placed to span from row line 1 to row line -1. If we were working with an explicit grid for our two rows, the item would span both rows. Because the row tracks have been created in the implicit grid, however, line -1 resolves to line 2, and not line 3.
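The setup described can be sketched like this (the class names are assumptions for illustration):

```css
.grid {
  display: grid;
  /* three explicit column tracks */
  grid-template-columns: 1fr 1fr 1fr;
  /* row tracks are created implicitly and sized to 5em */
  grid-auto-rows: 5em;
}

.placed {
  /* negative numbers count from the end of the *explicit* grid,
     so here -1 resolves to row line 2, not line 3 */
  grid-row: 1 / -1;
}
```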

See the Pen Grid Lines: explicit vs. implicit grid by Rachel Andrew (@rachelandrew) on CodePen.

There is currently no way to target the last line of the implicit grid, without knowing how many lines you have.

Placing Items Against Named Lines

In the last article I explained that in addition to line numbers, you can optionally name lines on your grid. You name the lines by adding a name or names inside square brackets between your tracks sizes.

.grid {
  display: grid;
  grid-template-columns: [full-start] 1fr [main-start] 2fr 2fr [main-end full-end];
}

Once you have some named lines, you can swap out the line number for a name when placing your items.

.item {
  grid-column: main-start / main-end;
}

See the Pen Grid Lines: naming lines by Rachel Andrew (@rachelandrew) on CodePen.

If your line has several names, you can pick whichever one you like when placing your item, all of the names will resolve to that same line.

Note: There are some interesting things that happen when you name lines. Take a look at my article “Naming Things In CSS Grid Layout” for more.

What Happens If There Are Multiple Lines With The Same Name?

You get some interesting behavior if you have multiple lines that have the same name. This is a situation that could happen if you name lines within repeat() notation. In the example below I have an 8 column grid, created by repeating 4 times a pattern of 1fr 2fr. I have named the line before the smaller track sm and the larger track lg. This means that I have 4 lines with each name.

In this situation, we can then use the name as an index. So to place an item starting at the second line named sm and stretching to the third line named lg I use grid-column: sm 2 / lg 3. If you use the name without a number that will always resolve to the first line with that name.
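A minimal sketch of that grid and placement (the selectors are assumptions):

```css
.grid {
  display: grid;
  /* 8 columns: the 1fr 2fr pattern repeated 4 times,
     naming the line before each smaller track "sm"
     and the line before each larger track "lg" */
  grid-template-columns: repeat(4, [sm] 1fr [lg] 2fr);
}

.item {
  /* from the 2nd line named sm to the 3rd line named lg */
  grid-column: sm 2 / lg 3;
}
```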

See the Pen Grid Lines: naming lines by Rachel Andrew (@rachelandrew) on CodePen.

Using The span Keyword

There are situations where you know that you want an item to span a certain number of tracks, however, you don’t know exactly where it will sit on the grid. An example would be where you are placing items using auto-placement, but want them to span multiple tracks rather than the default 1. In this case, you can use the span keyword. In the example below, my item starts on line auto, this is the line where auto-placement would put it, and it then spans 3 tracks.

.item {
  grid-column: auto / span 3;
}

See the Pen Grid Lines: span keyword by Rachel Andrew (@rachelandrew) on CodePen.

This technique will become very useful once we have wide support for the subgrid value of grid-template-columns and grid-template-rows. For example, in a card layout where each card has a header and a main content area that you want to align across cards, you can cause each card to span two rows while still allowing the usual auto-placement behavior. The individual cards will use subgrid for their rows (i.e. each gets two rows). You can see this in the below example if you use Firefox, and read my article CSS Grid Level 2: Here Comes Subgrid to learn more about subgrid.
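A sketch of that card pattern, assuming a browser that supports subgrid (only Firefox at the time of writing); the class names are illustrative:

```css
.cards {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
}

.card {
  /* each auto-placed card spans two row tracks of the parent */
  grid-row: span 2;
  display: grid;
  /* the card's own rows come from the parent grid,
     so headers and content areas align across cards */
  grid-template-rows: subgrid;
}
```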

See the Pen Grid Lines: span keyword and subgrid by Rachel Andrew (@rachelandrew) on CodePen.


A grid of cards with the Firefox Grid Inspector showing they each sit over two rows of the grid
The example in Firefox using the Grid Inspector

Layering Items With Line-Based Placement

Grid will auto-place items into empty cells on the grid, it won’t stack items into the same cell. However, by using line-based placement you can put items into the same grid cell. In this next example, I have an image that spans two-row tracks, and a caption which is placed in the second track and given a semi-transparent background.
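The described layering might look something like this (the selectors are assumptions for illustration):

```css
.card {
  display: grid;
  grid-template-rows: 1fr auto;
}

.card img {
  grid-column: 1;
  grid-row: 1 / 3; /* the image spans both row tracks */
}

.card figcaption {
  grid-column: 1;
  grid-row: 2; /* the caption shares the second track with the image */
  background-color: rgba(0, 0, 0, 0.5);
}
```

Because the caption comes later in the source, it paints on top of the image without any z-index needed.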

See the Pen Grid Lines: card with layered elements by Rachel Andrew (@rachelandrew) on CodePen.

Items will stack up in the order that they appear in the document source. So in the above example, the caption comes after the image and therefore displays on top of the image. If the caption had come first then it would end up displaying behind the image and we wouldn’t be able to see it. You can control this stacking by using the z-index property. If it was important for the caption to be first in the source, then you can use z-index, with a higher value for the caption than the image. This would force the caption to display on top of the image so that it can be read.

Mixing Line-Based And Auto-Placement

You need to take a little extra care if you are mixing placed items with auto-placed ones. When items are fully auto-placed in grid, they will place themselves sequentially onto the grid, each finding the next available empty space to put themselves into.

See the Pen Grid Lines: auto-placement by Rachel Andrew (@rachelandrew) on CodePen.

The default behavior is always to progress forwards, and to leave a gap if an item does not fit on the grid. You can control this behavior by using the property grid-auto-flow with a value of dense. In this case, if there is an item that fits a gap already left in the grid, it will be placed out of source order in order to fill the gap. In the example below using dense packing, item 3 is now placed before item 2.
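Enabling dense packing is a single declaration on the grid container (a sketch; the class name is assumed):

```css
.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  /* backfill earlier gaps with later items that fit,
     at the cost of visual order diverging from source order */
  grid-auto-flow: dense;
}
```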

See the Pen Grid Lines: auto-placement and dense packing by Rachel Andrew (@rachelandrew) on CodePen.

Note that this behavior can cause problems for users who are tabbing through the document as the visual layout will be out of sync with the source order that they are following.

Auto-placement works slightly differently if you have already placed some items. The placed items will be positioned first, and auto-placement will then look for the first available gap to start placing items. If you have left some whitespace at the top of your layout by way of an empty grid row, then introduce some items which are auto-placed, they will end up in that track.

To demonstrate, in this final example I have placed items 1 and 2 with the line-based positioning properties, leaving the first row empty. Later items have moved up to fill the gaps.
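The placement described might be sketched as follows (the selectors are assumptions):

```css
/* items 1 and 2 are placed explicitly, starting on row line 2,
   which leaves the first row track empty */
.item1 {
  grid-column: 1 / 3;
  grid-row: 2;
}

.item2 {
  grid-column: 3;
  grid-row: 2 / 4;
}

/* the remaining, auto-placed items start from the first
   available gap, so they fill the empty first row */
```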

See the Pen Grid Lines: auto-placement mixed with placed items by Rachel Andrew (@rachelandrew) on CodePen.

This behavior is worth understanding, as it can mean that items end up in strange places if you introduce some new elements to your layout which haven’t been given a placement on the grid.

Wrapping Up

That is pretty much all you need to know about grid lines. Remember that you always have numbered lines, no matter how else you are using grid you can always place an item from one line number to another. The other methods we will look at in future articles are alternate ways to specify your layout, but are based on the grid created by numbered lines.



Facebook quietly announced a new feature coming later this month that will allow users to control whether or not an advertiser can show ads to them using a Custom Audience list.

“People have always been able to hide all ads from a specific advertiser in their Ad Preferences or directly in an ad. But now they will be able to stop seeing ads based on an advertiser’s Custom Audience from a list,” Facebook wrote on its corporate blog.

Why we care

Custom Audiences are often comprised of a company’s active and best customers. Not being able to show ads to a critical mass of those users could potentially hurt campaign performance. However, this kind of control could be a net positive for advertisers as well as users. When users have the ability to opt-out, marketers have greater incentive to think carefully about the relationships they have with the customers on their lists and the messaging they’re serving them.

Facebook has made a number of changes to how Custom Audience lists are managed in the past year. This latest update is a more proactive feature — moving beyond simply giving users information on why they’re being targeted and letting them control if they see the ad or not.

The announcement was included in news related to allowing users to hide political ads, but the soon-to-be released feature applies to all campaigns using Custom Audiences.


About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


Attracting new customers is a top priority for retailers, and customer acquisition costs have risen with the increase in budgets and demand. Oracle Retail has launched Consumer Insights to help retail marketers use enriched customer data attributes alongside third-party consumer data from Oracle Data Cloud to find prospective lookalike customers.

Oracle Data Cloud provides data sets composed of profile-based, transaction-level data along with other demographic attributes. Consumer Insights overlays that third-party data on retailers’ own first-party customer data to discover attributes of their customers and create new segments. Those segments can be used to create similar audiences for prospecting.

Why we care

There is a growing demand from marketers for more predictive analytics and scalable methods to collect and analyze customer data and generate new target segments. Maintaining — and acting on — customer data attributes such as interests and purchase propensity could help marketers with their customer acquisition efforts.

“The value of data can’t be found in zeros and ones, but in human connections to the interests, experiences, and behavior of current and potential customers,” said Cecilia Mao, vice president of product of Oracle Data Cloud. “When you know that your customers are also more likely to buy at the grocery store, brand affinity and hobbies, you can build more accurate models to find your best potential customers, then use multiple channels to reach them at scale.”

More on the news

  • A recent survey found that 77 percent of retailers intend to increase their budget for customer acquisition in 2020.
  • Consumer Insights executes a “what if” analysis to identify new prospects in a continuously iterative process to get the best matches.

About The Author

Jennifer Videtta Cannon serves as Third Door Media’s Senior Editor, covering topics from email marketing and analytics to CRM and project management. With over a decade of organizational digital marketing experience, she has overseen digital marketing operations for NHL franchises and held roles at tech companies including Salesforce, advising enterprise marketers on maximizing their martech capabilities. Jennifer formerly organized the Inbound Marketing Summit and holds a certificate in Digital Marketing Analytics from MIT Sloan School of Management.


It was a “record holiday season for Amazon Devices and Alexa,” Amazon’s post-holiday press release proclaimed. The company said that it sold “millions more Amazon Devices compared to last year,” with Echo Dot leading the way.

Google didn’t put out a comparable release about the sale of Home and Nest products, but overall increased sales appear to be validated in extrapolated ownership data from NPR and Edison Research in their latest Smart Audio Report.

60 million people, 157 million speakers. The report is based on a telephone survey of just over 1,000 U.S. adults. It reports that 24% of people over 18 own at least one smart speaker. That translates into about 60 million people. Because people typically own more than one smart speaker, the report estimates there are now 157 million of these devices in U.S. households. That’s up from roughly 119 million a year ago.

In 2017 the average number of devices per household was 1.7. That has grown to 2.6 in December 2019. (By comparison, there are roughly 10 in my home: a mix of Google and Amazon speakers and displays, and one Apple HomePod.)

Voice penetration continues to grow. The report also says that “54% of U.S. adults 18+ have ever used voice commands,” and that 24% of that group does so daily. These statements are context-free, so it’s unclear if the report is referencing smart speakers. However, given these numbers, we must infer that smartphones are in the mix, because 54% of U.S. adults don’t own smart speakers.

Missing from the report released this week are use cases, detailed usage data and attitudes toward future purchases. Last year, for example, the same report found intent to purchase additional devices flat-to-declining. However, Amazon’s press release and these figures suggest robust demand over the holiday.

According to the most recent third-party estimates, Amazon continues to dominate, controlling roughly 70% to 75% of the smart speaker/display market. However, we should get updated figures in the next month or so.

Why we care. While we’ve arguably reached critical mass with smart speakers and displays, it’s not hard to imagine that this same report next year will tout 200 million devices in U.S. homes. Despite their impressive mainstream adoption, smart speakers are barely utilized as a marketing or commerce channel by retailers and brands, or by Amazon or Google themselves.

It’s important to note that smart speaker market share is not the same as virtual assistant share. Voice assistants or virtual assistants, whichever you prefer, have reached critical mass in usage because of their deployment on smartphones. And voice is increasingly used as a way to get directions, search for information, send texts and so on. Consumers have embraced the “voice habit,” and Google has responded (see BERT). But it’s still not entirely clear how marketers will capitalize on the behavior.

We’ll have a tactical discussion about how brands can take advantage and optimize for voice search at SMX West, February 19-20 in San Jose.

About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.


User experience is how users perceive and react when using a product. A user goes through a sequence of moments to get a job done with the product. Once done, the user forms a memory of the experience. According to Nobel Prize-winning psychologist Daniel Kahneman, that memory is characterised by the experience’s peak and its end. In a paper he co-authored in 1993 titled “When More Pain Is Preferred to Less: Adding a Better End”, he explained:

The peak-end rule is a psychological heuristic in which people judge an experience largely based on how they felt at its peak (i.e. its most intense point) and at its end, rather than based on the total sum or average of every moment of the experience.

The peak-end rule is a good mental scaffolding for product designers. It brings to the fore two crucial aspects of an experiential flow – the peak and the end. With this awareness, we can try to identify moments of (potential) unpleasantness, then figure out ways to make the unpleasantness less intense or, better still, eliminate it altogether. This matters because unpleasantness can adversely taint our evaluation of an entire experience, thanks to our negativity bias – an unpleasant experience is psychologically more impactful than a pleasant one.

In addition to the most intense moment, the end is equally important. An experience must end well, on a high note, to create a memorable positive experience. I find it rather perplexing to see the lackadaisical effort by some Malaysian e-wallets to evoke any kind of celebratory emotion when users get cash-backs. Instead, users are greeted with static, visually stale FYI screens.

But Grab seems to grasp the importance of the emotional aspects. While waiting for my food to arrive, the Grab app shows me some lightweight animations and status updates (see below). More than just visual candy, the up-to-date status is important to alleviate anxieties and make the unpleasant waiting time less, well, unpleasant. These tiny touches matter, as they add up to form the user experience.

After all the waiting, the GrabFood service ends with a friendly smile from the delivery rider. That is, for sure, a better end.




It is hard to imagine any other field where lifelong learning is more important than in software engineering. Another unique characteristic is the degree to which learning material is available for free on the internet. On top of that, we create various resources ourselves by documenting issues, submitting bug reports, writing notes, creating documentation, and many others. The sum of all these resources can be called a knowledge base.
You could argue that every developer has a system to manage their personal knowledge base, whether they know it or not. In this post, I explain my knowledge management practices.

A software engineer’s personal knowledge base is likely to overlap with the knowledge of their employers and project partners. Be sure to carefully study your data protection obligations. A good basis is to keep your personal knowledge base strictly technical and to never include data that is related to customers, or people in general.

I chose this topic mainly because of three reasons:

  1. Fun:

    I enjoy thinking about my workflows and trying to improve them – maybe a little too much; some friends have started to roll their eyes when I try to recommend a new tool to them.
  2. Improving my system and workflows:

    Writing a post about my Linux setup has been very rewarding for me. While documenting my configuration, especially my command-line workflows, I identified some shortcomings that I have since eliminated. After writing this article, I can already say that my knowledge management workflows have improved similarly.
  3. Relevance:

    Knowledge management in all its forms is integral to software engineering.

Knowledge management tasks are done frequently. It pays off to do them efficiently. (Source: xkcd.com)

Much of this post comes down to describing my usage of software tools. The stars will be

Copying all of my practices will likely not serve you well. I invite you to try out things at your own pace and revisit this post whenever you are looking for new ideas.

Even though everybody loves books, or at least says so, myself included, the reality is that we rely on digital resources most of the time.
The challenge is to organize this knowledge efficiently without creating too much overhead.

There are some naive approaches that I do not deem sufficient. Until a few years ago, I relied mainly on browser bookmarks, but for some reason, browsers do not provide proper organizing features.
Another way is to just copy the URLs into notes, project issues, documentation and all other kinds of manually created content. This is viable and we all do it regularly, but it has nothing to do with what I think is bookmark management.

My current setup is a little more complex, but in the end, it does not require more work than keeping bookmarks solely in the browser.
I organize my bookmarks in multiple layers, depending on how often I want to access them and how long they should persist.

Maybe you are thinking of caching now, but fortunately, bookmarking layers are not caching levels. This would be hard to manage because as we all know, there are two hard problems in computer science: cache invalidation, naming things, and off-by-1 errors.
In contrast to cache entries, a bookmark is not supposed to move between layers. Instead, it gets inserted at the right place and stays there.

The following figure illustrates my layers:

Bookmarking layers


1. Chrome Bookmarks

Chrome bookmarks have some nice benefits. They are synchronized between devices, new ones can be added easily, and organizing them in folders brings at least some structure. Another advantage for me is the Albert Chromium extension that lets me access and open my Chrome bookmarks from Albert:

Searching through Chrome bookmarks directly from Albert. Pressing enter opens the selected entry in Chrome.


Because of these advantages, Chrome bookmarks are ideal for things that I need often and want to have accessible somewhat independent of context. They are, however, not well suited for keeping extensive bookmark libraries. There is no tagging, it is not possible to add comments, and browsers do not even save a timestamp for when a page was bookmarked. It is further not possible to filter and search Chrome bookmarks using complex queries. In my experience, browser bookmark libraries are hard to manage if they grow beyond a certain size.

Some examples of well-suited Chrome bookmarks:

  • AWS EC2 dashboard
  • Google cloud platform console
  • Slack workspaces
  • GitLab boards
  • Content sites, such as Hacker News, InfoQ, and DZone

Examples of things that should not be Chrome bookmarks:

  • Blog posts to be read later
  • Links to Open Source Projects
  • Stack Overflow questions

2. Workona

Workona is a relatively new addition to my toolbox. I have mixed feelings about it, mainly because it can feel a little slow at times.
There are some things it does very well though.

Workona lets you create browser workspaces that come with their own list of bookmarks. This works nicely when you have multiple projects that you work on for longer periods. It will also remember which tabs you had open in a specific workspace and synchronize this information across devices. It is, therefore, a nice place for things that I need often dependent on context.

Screenshot of a Workona Workspace (Source)


Workona further allows you to add apps and will then collect links and even features for those apps to be quickly accessible. For example, I use it to access my different GitLab and GitHub projects, Slack Workspaces, and sometimes even StackOverflow questions. These entries are automatically created by Workona and therefore I do not see them as bookmarks. It is also possible to access features of these apps directly from Workona, mainly to create new resources, such as GitLab/GitHub projects, Google Docs, DropBox Files and so on. It feels a little bit like Station built directly into the browser.

The selling point for me is that Workona lets me do all of the described things via shortcuts and it is even possible to customize these shortcuts at chrome://extensions/shortcuts.

3. Notion

Finally, we arrive at the layer that is suited for maintaining a large library of bookmarks. For this, and many other use cases, I use Notion. It has become the core of my personal knowledge base.

Notion’s free tier comes with a hard limit on the number of blocks you can create. This means that you will have to switch to the paid version if you use it seriously for a while. It is free for students and teachers.

All links to Notion in this post are affiliate links. If you sign up for an account through these links, you will get $10 of free Notion credit and I will get $5. This works even if you create a free account.

The killer feature for me is the database. It is amazingly flexible and is single-handedly able to replace multiple other tools for me. For my bookmark collection, I use a single large database. Adding new items can be done via the Notion Web Clipper. I am a little annoyed that the web clipper doesn’t let me add properties and tags directly, but other than that, it works well. Previously, I used Trello, which was also quite good at keeping bookmarks. However, to limit the number of different tools I use, I replaced it with Notion.

My bookmarks database filtered to show only entries related to this post.


The goal with this database is that I can find something years from now with only a vague memory that it should be there. This is possible because Notion automatically stores metadata, such as a creation timestamp. Even more important, I can add tags and arbitrary properties.

Things get really exciting once you use relations between different Notion databases. As an example, I have a database of my blog posts that has a relation to my bookmarks. I can now filter my bookmarks by blog posts and quickly see which sources influenced a particular post.

I cannot cover all Notion database features in this post. I think this is something you have to see for yourself. The Notion docs page about the topic might give you some further impressions though.

The downside of this rich feature palette is that it requires some discipline. I tag and annotate new entries added via the web clipper about twice a month. Often, I just delete new items because they no longer seem important. If I cannot think of appropriate tags or relations for an item, this often means that it is not very relevant to me.

Another important lesson I had to learn the hard way:

Do not use too many databases in parallel. Notion provides excellent methods for filtering and searching tables. You can even define different views onto the same database that show only a specific part of the data. It is therefore not necessary to separate all kinds of things into different databases. For example, at first I had three different databases for Python resources, Django resources, and Wagtail resources. This was a bad solution. Now, these all live in the same database with appropriate tags.

I hope that someday I will be able to search through my Notion databases from Albert. Maybe I will build an extension myself once the Notion API is finally released.

To illustrate what I have described in probably too many words, you can have a look at a public fraction of my bookmarks database containing all the resources that were helpful for writing this post. The post-column in this public database appears empty because the related table is not public.

4. Native Bookmark Sources

The last layer does not require any work at all. It only means to be aware of native bookmark sources when trying to find something, and also when thinking about adding new bookmarks. For example, it is quite easy to search through questions you have answered on Stack Overflow. It is also not a problem to go back to your Hacker News posts or search through projects you starred on GitHub. Keeping those things in your own bookmark sources adds redundancy and noise.

These are some of my most commonly used native bookmark sources:


I am aware that this sounds immensely complicated. However, I stick to my statement that it does not actually require more work than keeping all bookmarks in the browser. You still have to add every bookmark only once. If you internalize your system, you will not have problems searching for bookmarks as it will be quite obvious where a specific resource could be. Admittedly, it does require some maintenance to keep a large bookmark collection in Notion, but it pays off soon and the pay-off accumulates over time.

Having a proper system for external resources is great, but it does not help much if everything you write yourself is a mess. Therefore, it is essential to organize self-written resources.

Blog Posts

My most successful blog post is about how I run and host this website for free. However, the post does not cover the most work-intensive part about running this site: Writing and maintaining blog posts. For this, I heavily rely on Notion. I have a database with all my past blog posts and ideas for future posts. It is simple to add tags like done, doing, idea, and to view the database as a kanban board with lanes based on those tags.

Drafting Posts

All posts that are published on this blog have been drafted in Notion. When they are more or less finished, I use the markdown export and copy the content into a new text file in my Hugo project. There are some minor issues with this. Usually, I have to adjust code formatting and links, but overall it works very well and has some nice benefits:

  • Edit on all devices

    Using Notion, I can easily edit my drafts on all devices, even on my phone when I am on the move.
  • Use existing notion content

    I can quickly add code snippets, links, and other stuff that I keep in Notion. This can be done via database relations or just by dragging the respective blocks into the draft page.
  • Lightweight version control

    As developers, we love Git. However, for adding a couple of lines to a blog post, it is overkill. Notion has versioning features built in and keeps track of all changes automatically. Naturally, it does not offer sophisticated merging options, but it’s easy to steer clear of conflicts when you are writing a personal blog.

Once a post is released, I move the post from doing to released.

Keeping Track of Update Ideas

You may have noticed that some of my blog posts are quite long. Keeping them up-to-date is a little challenging. Fortunately, I have Notion to assist me with this. I have a database where I add proposals for updating my existing posts. Using a relation, I link these ideas to the respective posts in my posts database:

It is hard to know when you are finished setting up the perfect Linux workstation.


Tracking Sources

To keep track of third-party resources that were helpful in creating a post, I add them to my bookmarks database and link them to my posts database.

My bookmarks database filtered to show only entries related to this post.


You can also have a look at the full list of sources for this post via a public fraction of my bookmarks database.

Taking Notes

I suppose every software engineer takes notes to some extent, even if they do not have a system and just casually write stuff into text files.
There is also a large amount of free and paid software for taking and organizing notes. Some people even like to use physical paper with the BuJo system.

No matter which method you prefer, you have to think about when, why, and how to take notes.

When to take Notes

I thought quite a bit about this question, and previously I did not adhere to any predefined rules regarding this matter. To gain a better understanding of my habits and to decide what worked best in the past, I looked through my notes from the last years that were spread over multiple applications and lots of files. I tried to find common patterns:

  • Capturing information from audio sources

    I am a visual learner and prefer written content over audio. However, there are many types of audio sources that everybody encounters:

    • Meetings
    • Presentations
    • Meetups
    • Conferences
    • Informal Discussions
    • Videos
    • Podcasts
  • Preserving information that is often needed but doesn’t fit in any proper documentation project

    In my case, these are things like the following:

    • Command-lines with multiple parameters, complex options, and paths. I keep them in my notes so that I can quickly copy-paste them in the potentially distant future.
    • Instructions for tasks that will likely have to be repeated in the future, for example
      • setting up a development environment for a specific tech stack
      • deploying a project on a specific hosting service.
  • Questions

    Sometimes I think of a question that I would like to ask someone specific, who is not available at the moment. The question and its eventual answer are a nice use case for a note.

Thanks to a more sophisticated command-line setup with Zsh and various plugins for auto-completion and history search, preserving command-lines has become much less important for me. If you are still using Bash with its default Ctrl+R history search, I think you would profit from my post on setting up a Linux workstation.
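For illustration, here is a minimal sketch of what such a Zsh setup can look like. The plugin paths are hypothetical (they depend on how and where you install the plugins), and the arrow-key codes can differ between terminals:

```shell
# Hypothetical ~/.zshrc fragment (plugin paths depend on your install method)

# Keep a large, de-duplicated history that is shared across sessions
HISTSIZE=100000
SAVEHIST=100000
setopt share_history hist_ignore_all_dups

# zsh-autosuggestions: suggests the rest of a command from history as you type
source ~/.zsh/zsh-autosuggestions/zsh-autosuggestions.zsh

# zsh-history-substring-search: type a fragment, then cycle through matches
source ~/.zsh/zsh-history-substring-search/zsh-history-substring-search.zsh
bindkey '^[[A' history-substring-search-up    # Up arrow
bindkey '^[[B' history-substring-search-down  # Down arrow
```

With a setup like this, typing `docker` and pressing Up walks through only the history entries containing `docker`, which replaces many of the command-line notes I used to keep.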

On the other hand, it is important to make clear also what should not be a note in this system:

  • Project-specific knowledge

    Everything that is directly related to a specific software project, should not be a note in your personal knowledge base. Instead, it should go into whichever system is used to keep track of issues, merge requests, documentation, and so on. This post includes a separate section on this below.
  • Information that is needed for a very short time

    Sometimes I need to take notes during a conversation that are only needed immediately afterward. For this, I just open VSCode and type ahead. Once the information is no longer needed, the file can be deleted completely.

In general, don’t overdo and don’t underdo it. If you take notes all the time, you create overhead and redundancy. If you never take notes, well, you will not have notes.

Why Take Notes

The obvious answer is to remember things. This does not mean that note-taking should replace your memory. On the contrary, taking notes provides structure and context and therefore helps your brain build up a map of your knowledge. I believe that taking notes increases the amount of data that you can recall from memory.

Additionally, some things are just very hard to remember, such as complex tech stacks presented at meetups, command-lines with several opaque options, numbers, and so on. Writing these things down can extend your knowledge base significantly. Notes are also very helpful when writing documentation in the future, or for writing blog posts like this one 😉

How to Take Notes

It can be tempting to just type ahead during a talk, noting down whatever seems interesting at the moment. However, you should meet basic formal requirements and write down some context. Otherwise, you will have trouble extracting useful information from your notes in the future. If you take notes regularly, it is also important to organize them: assigning tags and properties, having a creation date, being able to filter and search through them, and so on. Software can help with these things.

Until about a year ago, I kept notes mainly on my local machines, syncing them with Dropbox. Applications that focus almost exclusively on note-taking, such as Evernote, never had that much appeal to me, though I did use Google Keep/Notes. It is not so bad: it supports labeling, for example, and works with little friction on mobile. However, there is not even basic formatting support, let alone markdown formatting or code syntax highlighting.

Then, I discovered Notion.
At first, it wasn’t really about note-taking for me. I liked it mainly because of its database concept and for organizing third-party resources, as described in the bookmarking section. However, by now, I use it heavily for almost all of my notes and other types of self-written content. I think Notion’s mix of markdown syntax support, slash commands, and WYSIWYG makes for a great writing experience.

I strongly recommend organizing all your notes in a single database with appropriate tags, such as meeting, presentation, and question. Similar to the bookmarks database described above, Notion will automatically provide a column with the creation time of the note. If you have frequent meetings with the same people, you can think about adding a Participants column.

A notes database is the perfect use case for another great Notion feature: templates. This feature allows you to create templates for your different types of notes and then, for example, add a new meeting note with one click. Depending on the template, the new note comes with a pre-defined layout and might include fields for the meeting’s participants and agenda. This is what opens when I click to add new meeting notes in my database:

My basic meeting notes template.


In my experience, it is best to keep project information as closely together as possible. For me, this means that issue tracking, merge requests (i.e. pull requests), documentation and everything else that comes with a software project should live alongside the source code.

This practice guarantees that anyone who works on the code now, or in the future, has access to all the information. My solution for this is to use GitLab for everything related to a specific project. This includes pretty much everything except some quick notes that I sometimes write in Notion because I do not want anyone else to see them.

To support this, GitLab offers a lot of project management and documentation features, such as issue management, a project wiki, a code snippet space for projects and accounts, and more.
A project wiki is a great place for all documentation that is not specific to an issue. This can include the following:

  • Instructions for setting up the development environment
  • Instructions for manual deployment
  • Architecture descriptions and diagrams. I am a big fan of mermaid, which allows creating UML diagrams with Markdown and is supported by GitLab.
  • Guidelines regarding naming conventions, design paradigms, etc.

If you have read my post on running and hosting this website, you know that I use GitLab also for this personal blog project. Nevertheless, I use Notion for much of the project documentation and management tasks, because it comes with some unique benefits for this use case, as described above.

Of course, there are alternatives. GitHub has caught up and offers a very similar feature set. I have only limited experience with Atlassian products, but I suppose you can achieve the same thing by using a mix of BitBucket, Confluence, and Jira.

On a side note, writing in Confluence feels a lot like writing in Notion. Both recognize markdown syntax and offer slash commands to quickly insert complex content types.

This one is relatively straightforward. I think the most important thing is to use some form of cloud storage. Personally, I use Dropbox.

Next, you should think about a proper folder structure. Finally, you should be able to search for your files quickly.

I use Linux and rely on Albert for searching through files. If that doesn’t work, I use the locate command, and my last resort are command-line tools such as find and grep. They are still king for complex search requirements, such as regex matching and searching file contents.
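To make the find and grep use concrete, here is a small sketch of the kind of searches I mean; the paths and patterns are placeholders, not from my actual setup:

```shell
# Find Markdown files by name, modified within the last 7 days
find ~/notes -name '*.md' -mtime -7

# Search file contents recursively with a regex, printing line numbers
grep -rn --include='*.md' -E 'docker (run|compose)' ~/notes

# Combine the two: content-search only the recently modified files
find ~/notes -name '*.md' -mtime -7 -exec grep -l 'deployment' {} +
```

This is exactly the territory where graphical search tools tend to give up: combining name filters, modification time, and content regexes in one pipeline.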

If you can tick the three boxes

  1. availability on all devices
  2. structure
  3. searchable

you should be all set.

Notion databases support file columns. This means you can upload a different file to each cell. I have not yet found a use case related directly to software engineering for this. However, it is great for managing invoices.

In this section, I want to present some special kinds of knowledge management use cases that are unique to software engineering. Most of them I have picked up properly only after starting to use Notion. By no means do I want to say that you should copy all of them, but you are very welcome to try out whatever looks interesting.

Code Snippets

Currently, I have two ways of collecting code snippets. I hope I can eventually integrate them somehow.

VSCode User Snippets

The first one is VSCode User Snippets. VSCode does many things right, and snippet management is one of them. It is very easy to define new snippets and use them by typing a predefined string. Simply press Ctrl+Shift+P and choose Preferences: Configure User Snippets. Then, you can split your snippets into multiple files or just put all of them together in the global-snippets.code-snippets file. These are some of my snippets for writing blog posts with markdown and Hugo:

"figure": {
    "scope": "markdown",
    "prefix": "fig",
    "body": [
        "{{< figure src=\"\" class=\"center-figure\" caption=\"\" >}}"
    ],
    "description": "Insert hugo figure shortcode"
},
"callout": {
    "scope": "markdown",
    "prefix": "callout",
    "body": [
        "{{% callout \"warning info tip note\" %}}",
        "{{% /callout %}}"
    ],
    "description": "Insert hugo callout shortcode"
},
"code": {
    "scope": "markdown",
    "prefix": "code",
    "body": [
        "{{< highlight bash \"linenos=table\" >}}",
        "{{< / highlight >}}"
    ],
    "description": "Insert hugo code shortcode"
}

The snippets can then be used by typing the prefix and pressing Ctrl+Space:

Notion Snippet database

The second use case concerns small pieces of code that I have written myself and that I would like to remember. This is not necessarily for productivity reasons, but rather for nostalgia. It is ideal for small algorithms, for example when you manage to substantially increase the performance of something by applying some clever technique.

I keep those snippets in a Notion database, which works great because I can tag them and add arbitrary properties. A nice benefit of this habit is that it’s not a problem if you don’t do it for a while. Your database will not look messy all of a sudden.

An exemplary entry of my code snippets database in Notion.


Shortcut Database

I believe that typing proficiency is important for a software engineer. It is easier to stay in flow if you can type fast and navigate around your IDE and tools with shortcuts. As my set of tools grew, I realized that I had a hard time keeping track of which shortcuts I am using, which of them work in which application, and whether I can change them so that they work across as many tools as possible.

Therefore, I started to keep track of some shortcuts I would like to use everywhere. These are things like creating new tabs, creating new windows, opening the respective tool’s search feature and more.

This database helps me identify overlaps and motivates me to think about which task could fit a specific key combination that I use often. For example, I use Ctrl+Shift+T in IDEs to search for type definitions and Ctrl+Shift+R to search for resources in general. Since I have had these key combinations memorized for years and can use them very quickly, I want to make use of them wherever possible. Therefore, I try to set up my applications so that some kind of search opens whenever I use one of these combinations. My Notion database makes it easy to see which applications offer search features and which of them I can modify.

For example, my browser now forwards Ctrl+Shift+T / R to Workona and allows me to switch/search workspaces or use the general search, respectively.

Screenshot of my Notion shortcut database


Notion’s filtering and search capabilities are great for identifying problems with your setup. For me, a problem is when two applications use different shortcuts for a very similar operation. Unfortunately, Notion is a negative example of an application that doesn’t let you customize its shortcuts, which reduces the usefulness of my database. If someone from the Notion team reads this at some point, I kindly ask you to lobby on my behalf to prioritize personalized keyboard shortcuts.

I am excited about this use case and will talk about it in much more detail in a future blog post.

I also have a Notion database to keep track of small web tools that I use. This serves little purpose for tools used frequently, as you will remember them anyway. However, for minor things that are only needed once in a while, such as color palette generators, detecting handwriting as LaTeX symbols, and checking SSL certificates, this is quite handy. Usually, you can find these tools via Google, but sometimes there are many tools for a task and it is nice to keep track of the ones that work best.

Screenshot of my software tools database in Notion.


Again, the important thing here is to not overdo it. There are extensive publicly available lists of such tools. Your list should not be a copy of those but rather a small collection of the things you like and need from time to time.

Collecting Ideas for new Projects

Like many software engineers in these golden times, I sometimes think about starting new software projects, maybe even bootstrapping a small business.

You probably guessed it already, I keep ideas for possible projects in a Notion database. The main benefit is, once again, that it allows me to easily link it with other databases, such as my bookmarks.

Similar posts often conclude with warnings that you should be very careful not to spend too much time on organizing and maintaining your knowledge and workflows. In principle, I agree with this sentiment. After all, you want to increase your productivity.

I do believe, however, that it is worth it to spend some time thinking about your knowledge management system. The most important part about conceptualizing a system is to decide exactly which types of information you want to maintain in your knowledge base. If you get this right you will benefit for the rest of your career. Even if the tools might change in the future, the system will stay.

Please don’t hesitate to share your own experiences in the comments below. I am sure that my workflows are not perfect, but they might get closer with your tips.

This post is released together with a new post in my Reading List series, which lists related resources. If you want to dive even deeper into knowledge management concepts, you might want to have a look.


Is your fancy new domain hurting your performance? Benchmarking the top-level domain names

Posted by: 

Dejan Grofelnik Pelzel


December 20th, 2019

As a CDN we’re constantly trying to squeeze every last millisecond of load time. We recently had a user come to us with a list of performance tests on their site, reporting DNS responses often taking over 150 ms.

After a couple of checks, it turned out that the problem actually lay with the .xyz top-level domain itself, which took over 100 ms just to resolve the nameserver data for their domain.

While this came as a bit of a surprise, similar issues in the past were actually the reason why we switched from using a .zone domain to .net for our shared domain. With so many new top-level domains recently popping up, we decided to check how well they actually performed and how much of an actual impact they can have.

But wait, how can the domain itself increase the load time?

To explain, we first need to look deeper into how DNS functions.

When resolving a domain such as test.xyz, the DNS resolver first needs to contact the top-level DNS servers for .xyz to get the list of nameservers responsible for the domain. While this might already be cached by a resolver such as your ISP’s, that is not always the case, depending on the popularity of your domain.

Only once the resolver has this information can it actually query the received nameservers to get the final address for test.xyz.

This means that if the DNS for the top-level domain is slow, it can actually delay the DNS resolution for the domain itself and, in a very unlikely worst-case scenario, even cause an outage.

Benchmarking Time

That being said, benchmarking the performance is fairly easy. Since we run a global network, we were able to monitor the performance worldwide from over 50 locations and networks.

For each top-level domain, our system picked a random nameserver from the set published for that domain and queried a random domain name that we picked for it. We then grouped the results by region and logged the data every 10 seconds.

The Results

We tested 42 of the most popular top-level domains and aggregated the results into a global median and an 85th percentile aggregation (meaning 15% of the responses were slower than this time). Real-world results might vary slightly, since all of these tests were performed from our network only, but they should offer a good general overview.
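
For reference, the aggregation described here can be sketched like this (nearest-rank percentiles over a hypothetical list of query times in milliseconds):

```javascript
// A sketch of the median / 85th-percentile aggregation described above,
// using the nearest-rank method on invented query times (ms).
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(idx, 0)];
}

const times = [12, 15, 18, 20, 25, 40, 55, 90, 120, 150];
console.log(percentile(times, 50)); // the global median
console.log(percentile(times, 85)); // 15% of responses were slower than this
```

The 85th percentile is a useful complement to the median because a TLD with a few very slow nameservers can look fine on average while still hurting a meaningful share of requests.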

In the end, some of the results were quite surprising.

The biggest shockers were the .info and .org domains, which showed really poor performance, especially in the 85th percentile range, despite being among the oldest and most well-established top-level domains, with millions of registered domains each. After some further investigation, it appears that 4 out of 6 of their nameservers are performing extremely poorly, which explains the poor results.

The .net and .com domains were very slightly slower than we expected in Europe and North America, but otherwise offer great and stable performance across all regions, as we can see in the global median.

Another interesting result was the performance of the .co, .biz and .in domains, which ended up way ahead of the rest. The .net and .com networks are much larger, but these might be a very interesting choice if you’re looking for the absolute maximum performance.

Unfortunately, our suspicions about some of the cool and shiny new domains that have begun popping up recently were confirmed.

While we were delighted to see some of the new top-level domains performing really well, such as .live, .email and .news, which are all run by the same company, we also saw a very large performance drop in many other cases. Some of the domains performed multiple times slower than average and showed average query times well over 100 ms. Many of them also had a big performance drop-off in regions outside of Europe and North America, making the issue even worse there.

We tested 42 of the most popular top-level domains out of the more than 600 available, and there is little reason to assume that many of the rest perform much better.

The Conclusion

So, does this mean you should immediately cancel all of your domains and go for a .co or .biz domain to increase performance?

Most definitely not. In many cases, DNS responses are heavily cached, and especially for very popular websites the resolvers might not need to hit the top-level nameservers much at all. Choosing a domain that fits your brand is in most cases much more important than saving those extra 50 milliseconds on the first page load.

However, if you are trying to squeeze absolutely every last bit of performance and ensure great reliability in a system where every last millisecond matters, then you might want to think twice before choosing your domain. The differences aren’t huge, but if you’re aiming to hit that one second loading time, then things do add up, in some cases even as much as 200 ms.

So is your fancy new domain hurting your performance? It actually might be, but probably not enough to worry about that too much.

The article was updated on the 20th of December with more domains and a median and 85th percentile aggregation instead of averaged results, for a more accurate representation of the performance.


A few months ago, I found myself oddly transfixed by an advertisement on the train for a mattress company called Allswell. At first, I couldn’t tell what made the ads stand out from the dozens plastered across the trains and platforms at any given time. After several train rides spent staring up at the unnaturally bright smiles of models happily perched on mattresses, I realized: it wasn’t the ads themselves that stood out to me, but rather, their typography.

The Allswell logo uses Caslon Graphique, a striking, elegant font that lends an air of luxury and sophistication to a relatively young brand. Caslon Graphique belongs to a category of fonts known as Didones. Didones are serif fonts, meaning that, unlike the font that you’re currently reading, the strokes of letters have little feet at the ends. A Didone is characterized by long, narrow serifs, as well as a strong contrast between thick and thin strokes (see the difference in the curve and the crossbar of the ‘e’ in Allswell). Together, these elements give a Didone font a certain, unmistakably refined quality that is often absent from sans serifs or even more traditional serifs such as Times New Roman.

Didones first came into fashion around the turn of the nineteenth century, when printers and type designers began experimenting with alterations to the more traditional serif fonts that had defined newsprint and advertisements throughout the 1700s. Firmin Didot of France and Giambattista Bodoni of Italy were the founding fathers of the style, which was then called “Modern.” “Didone,” coined long after their deaths, is a portmanteau of “Didot” and “Bodoni,” which are also the names of two long-enduring Didones still used today.

Over the past year or so, Didones have covertly crept to the foreground of visual branding design for new companies and startups. There’s Winc, the millennial-targeted wine club; Dame, the sex toy company; Everspring, Target’s ecofriendly line of cleaning products; Welly, which makes first aid products; the list goes on, all with sleek Didone logos at the center of their brands. It’s a trend, but it’s also much more than that: the sudden surge of Didones represents a rejection of the typography and aesthetics that have come to define the 2010s, and an attempt to carve out a new aesthetic space, just in time for the beginning of a new decade.

In order to comprehend the significance of the budding Didone empire, one must understand what has happened to typography in the past ten years or so. At the beginning of the 2010s, geometric sans serifs — those without serifs or contrast in stroke width, and whose letters are built around simple shapes like circles and squares — experienced a dramatic uptick in popularity within web and digital design. As the geometric sans serifs rose to prominence, elements such as drop shadows, gradients, background textures, and bevels began to fall away, leaving behind the flat, minimalist digital aesthetic you might see on Facebook, Airbnb, or Postmates. Many designers cited a desire for increased legibility on low-resolution screens as a reason for the shift; the desire for increased page-load speeds likely also played a role.

By 2015, when Google and Facebook pivoted to geometric sans serif logos within a few months of each other, geometric sans serif typography and minimalist aesthetics had reached a saturation point, both online and off. Among Peak Minimalism’s alleged merits was its implication of transparency: the spareness of brands stripped of clutter and ornament felt trustworthy, as if excesses in style were a middleman between consumer and company that had been cut out. In the latter half of the 2010s, however, oversaturation led geometric sans serifs to grow somewhat stale. The same attributes that once signalled approachability and friendliness began to read as sterile and impersonal as they grew more and more ubiquitous, particularly among large corporations and tech companies.

Didones represent a complete about-face from the design ethos of Peak Minimalism. On a technical level, Didones and geometric sans serifs are more or less total opposites: serif versus sans serif, intense stroke contrast versus none at all, tall ascenders (letters like “h” and “t”) versus short ones. But there’s also a more extensive rejection of the 2010s aesthetic at play. Against the no-frills, cheerfully pared-down look of Google et al, the use of Didones in the context of marketing feels downright luxurious, whether that sense of luxury is applied to a mattress or a vibrator or even a first aid kit. It’s worth noting that in addition to the clear generational dynamics at play — the majority of these brands appear to market primarily towards young people — the new Didones seem to appear most often alongside brands that market towards women, be it Dame, Modcloth (a women’s clothing retailer), Flesh (a shade-inclusive makeup brand), or Kirsten Gillibrand’s brief presidential bid.

Earlier this year, Eliza Brooks suggested in an article for Vox that the return of serifs more broadly represents a retreat into the past, specifically, to the groovy aesthetic of the 1970s. This influence is more obvious in typography such as that of Buffy, a comforter startup which uses Cooper Black for its logo and branding, or Chobani, which opted for a chunky new text in 2015. Perhaps the change isn’t so much a retreat to the aesthetic of a particular historical period, but rather, the inevitable swing of a pendulum of which one side has consistently been some form of minimalism, be it the Swiss modernism of the 1950s or the flat design of the 2010s.

The internet has changed the conditions of graphic design in ways that are less subject to passing whims than typography. The need for cohesion across print and digital platforms — for your company’s ads in the subway to look like its ads on Instagram — has led to a minimalist zeitgeist that isn’t going away any time soon. Within the broader minimalist framework, however, ornate flourishes such as that of the Didones sate their viewers’ need for a reprieve from the visual austerity of the past decade, and the political austerity for which it has served as the default style. Sitting on the train, I found myself captivated by an advertisement for mattresses I can’t afford, of all things, simply because its typography injected a moment of beauty into a day spent being bombarded by advertisements that, with rare exceptions, look more or less the same.

That Didones represent a break from design homogeneity right now doesn’t mean that we might not face a new, Didone-centered homogeneity in ten years or so. It’s too early to tell what the visual language of the 2020s will look like more broadly, whether it will resemble the 2010s but with different typography, or if the Didones will helm an entirely new style. In any event, it seems that our relationship to typography and design is at the precipice of a transformation, and I, for one, am excited to see what it brings.

Rachel Hawley is a freelance writer and graphic designer based in Chicago.


Google has announced which search engines will appear on its default search provider screen for Android users in the EU. Bing will appear as an option in the United Kingdom only, while DuckDuckGo and Info.com will appear alongside Google in all 31 EU territories, The Verge first reported Thursday.

Why is Google doing this? The Android search choice screen is part of Google’s efforts to comply with the European Commission’s July 2018 antitrust ruling.

An example of the search choice screen. Source: Google.

How it works. Beginning on March 1, the search choice screen will be shown to users in the EU who are setting up their Android devices for the first time. Google, DuckDuckGo, Info.com and one other search engine (which varies depending on the region) will appear as options.

When the user selects their desired provider, the associated application will be installed and it will become the default search engine in Chrome and on the home screen search box.

Why these particular search engines? The search providers were selected using an auction method in which Google accepted bids on how much each provider was willing to pay per user who selects their search engine as the default.

The auctions were conducted on a per-country basis, using a “fourth-price” system, where the three winners pay Google the amount of the fourth-highest bid. The search providers selected in the first auction cycle will appear on the search choice screen for four months (until June 30), with future auctions occurring quarterly.
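
With hypothetical bid amounts (the per-user prices here are invented purely for illustration), the fourth-price rule described above works out like this:

```javascript
// Sketch of a fourth-price auction: the top three bidders win slots,
// and each winner pays the amount of the fourth-highest bid.
function fourthPriceAuction(bids) {
  const ranked = [...bids].sort((a, b) => b.amount - a.amount);
  const clearingPrice = ranked.length > 3 ? ranked[3].amount : 0;
  return ranked.slice(0, 3).map(b => ({ bidder: b.bidder, paysPerUser: clearingPrice }));
}

const winners = fourthPriceAuction([
  { bidder: "Engine A", amount: 1.00 },
  { bidder: "Engine B", amount: 0.80 },
  { bidder: "Engine C", amount: 0.60 },
  { bidder: "Engine D", amount: 0.40 },
]);
// Engines A, B and C win a slot; each pays 0.40 per user (D's bid).
```

As with second-price auctions, the idea is that winners pay a price set by a losing bid rather than their own, which weakens the incentive to underbid.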

Auctions aren’t fair for all. “We believe this auction is at odds with the spirit of the July 2018 EU Commission ruling,” said Christian Kroll, CEO of Berlin-based search engine Ecosia, in a statement sent to Search Engine Land.

Ecosia uses its profits for reforestation efforts. “Planting trees in biodiversity hotspots is our priority; this means that bidding processes like this cut out purpose-driven search engines like Ecosia,” said Kroll. Ecosia declared that it would boycott the auction shortly after the methodology was first announced in August 2019.

Where is Bing? Microsoft’s search engine is noticeably absent as a search choice except in the United Kingdom, where search ad revenue is likely to be higher than in nations with a lower GDP, The Verge noted.

Why we care. Despite offering users in the EU three other search providers to choose from, the choice screen is unlikely to make a big dent in Google’s share of the market, as search is heavily integrated with its other Android apps and Google is what the majority of users are accustomed to.

Since an auction method was used, the choices merely reflect which search engines were willing to pay the most per user and are not indicative of search market share in those respective countries.

About The Author

George Nguyen is an Associate Editor at Third Door Media. His background is in content marketing, journalism, and storytelling.


When writing code for the web, eventually you’ll need to do some process that might take a few moments to complete. JavaScript can’t really multitask, so we’ll need a way to handle those long-running processes.

Async/Await is a way to handle this type of time-based sequencing. It’s especially great for when you need to make some sort of network request and then work with the resulting data. Let’s dig in!

Promise? Promise.

Async/Await is built on top of Promises. Promises in JavaScript are objects that can have multiple states (kind of like the real-life ones ☺️). Promises do this because sometimes what we ask for isn’t available immediately, and we’ll need to be able to detect what state it is in.

Consider someone asks you to promise to do something for them, like help them move. There is the initial state, where they have asked. But you haven’t fulfilled your promise to them until you show up and help them move. If you cancel your plans, you rejected the promise.

Similarly, the three possible states for a promise in JavaScript are:

  • pending: when you first call a promise and it’s unknown what it will return.
  • fulfilled: meaning that the operation completed successfully
  • rejected: the operation failed

Here’s an example of a promise in these states:

Here is the fulfilled state. We store a promise called getSomeTacos, passing in the resolve and reject parameters. We tell the promise it is resolved, and that allows us to then console log two more times.

const getSomeTacos = new Promise((resolve, reject) => {
  console.log("Initial state: Excuse me can I have some tacos");
  resolve();
})
  .then(() => {
    console.log("Order some tacos");
  })
  .then(() => {
    console.log("Here are your tacos");
  })
  .catch(err => {
    console.error("Nope! No tacos for you.");
  });

> Initial state: Excuse me can I have some tacos
> Order some tacos
> Here are your tacos

See the Pen “Promise States” by Sarah Drasner (@sdras) on CodePen.

If we choose the rejected state, we’ll do the same function but reject it this time. Now what will be printed to the console is the Initial State and the catch error:

const getSomeTacos = new Promise((resolve, reject) => {
  console.log("Initial state: Excuse me can I have some tacos");
  reject();
})
  .then(() => {
    console.log("Order some tacos");
  })
  .then(() => {
    console.log("Here are your tacos");
  })
  .catch(err => {
    console.error("Nope! No tacos for you.");
  });

> Initial state: Excuse me can I have some tacos
> Nope! No tacos for you.

And when we select the pending state, we’ll simply console.log what we stored, getSomeTacos. This will print out a pending state because that’s the state the promise is in when we logged it!

> Initial state: Excuse me can I have some 🌮s
> Promise {<pending>}
> Order some 🌮s
> Here are your 🌮s

What then?

But here’s a part that was confusing to me at first. To get a value out of a promise, you have to use .then() or something that returns the resolution of your promise. This makes sense if you think about it: you need to capture what it will eventually be, rather than what it initially is, since it starts out in that pending state. That’s why we saw it print out Promise {<pending>} when we logged the promise above. Nothing had resolved yet at that point in the execution.
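
A minimal sketch of the difference, using a promise that resolves immediately:

```javascript
// The promise object and its eventual value are different things:
// logging the promise itself shows a Promise object, while .then()
// receives the value once the promise has resolved.
const tacoPromise = Promise.resolve("tacos");

console.log(tacoPromise);                      // a Promise object, not the string
tacoPromise.then(value => console.log(value)); // "tacos", the resolved value
```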

Async/Await is really syntactic sugar on top of those promises you just saw. Here’s a small example of how I might use it along with a promise to schedule multiple executions.

async function tacos() {
  return await Promise.resolve("Now and then I get to eat delicious tacos!")
}


Or a more in-depth example:

// this is the function we want to schedule. it's a promise.
const addOne = (x) => {
  return new Promise(resolve => {
    setTimeout(() => {
      console.log(`I added one! Now it's ${x + 1}.`)
      resolve()
    }, 2000);
  });
}

// we will immediately log the first one,
// then the addOne promise will run, taking 2 seconds
// then the final console.log will fire
async function addAsync() {
  console.log('I have 10')
  await addOne(10)
  console.log(`Now I'm done!`)
}

addAsync()

> I have 10
> I added one! Now it's 11.
> Now I'm done!

See the Pen “Async Example 1” by Sarah Drasner (@sdras) on CodePen.

One thing (a)waits for another

One common use of Async/Await is to use it to chain multiple asynchronous calls. Here, we’ll fetch some JSON that we’ll use to pass into our next fetch call to figure out what type of thing we want to fetch from the second API. In our case, we want to access some programming jokes, but we first need to find out from a different API what type of quote we want.

The first JSON file looks like this (we want the type of quote to be random):

{
  "type": "random"
}

The second API will return something that looks like this, given that random query parameter we just got:

{
  "en": "For all its power, the computer is a harsh taskmaster. Its programs must be correct, and what we wish to say must be said accurately in every detail.",
  "author": "Alan Perlis"
}

We call the async function then let it wait to go retrieve the first .json file before it fetches data from the API. Once that happens, we can do something with that response, like add it to our page.

async function getQuote() {
  // get the type of quote from one fetch call, everything else waits for this to finish
  let quoteTypeResponse = await fetch(`https://s3-us-west-2.amazonaws.com/s.cdpn.io/28963/quotes.json`)
  let quoteType = await quoteTypeResponse.json()

  // use what we got from the first call in the second call to an API, everything else waits for this to finish
  let quoteResponse = await fetch("https://programming-quotes-api.herokuapp.com/quotes/" + quoteType.type)
  let quote = await quoteResponse.json()

  // finish up
}
We can even simplify this using template literals and arrow functions:

async function getQuote() {
  // get the type of quote from one fetch call, everything else waits for this to finish
  let quoteType = await fetch(`quotes.json`).then(res => res.json())

  // use what we got from the first call in the second call to an API, everything else waits for this to finish
  let quote = await fetch(`programming-quotes.com/${quoteType.type}`).then(res => res.json())

  // finish up
}


Here is an animated explanation of this process.

See the Pen “Animated Description of Async Await” by Sarah Drasner (@sdras) on CodePen.

Try, Catch, Finally

Eventually we’ll want to add error states to this process. We have handy try, catch, and finally blocks for this.

try {
  // I’ll try to execute some code for you
} catch(error) {
  // I’ll handle any errors in that process
} finally {
  // I’ll fire either way
}

Let’s restructure the code above to use this syntax and catch any errors.

async function getQuote() {
  try {
    // get the type of quote from one fetch call, everything else waits for this to finish
    let quoteType = await fetch(`quotes.json`).then(res => res.json())

    // use what we got from the first call in the second call to an API, everything else waits for this to finish
    let quote = await fetch(`programming-quotes.com/${quoteType.type}`).then(res => res.json())

    // finish up
  } catch(error) {
    console.warn(`We have an error here: ${error}`)
  }
}


We didn’t use finally here because we don’t always need it. It is a block that will always fire whether it is successful or fails. Consider using finally any time you’re duplicating things in both try and catch. I usually use this for some cleanup. I wrote an article about this, if you’re curious to know more.
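
As a sketch of that cleanup use (the helper name and fallback behavior here are hypothetical, not from the examples above):

```javascript
// finally runs whether the awaited task succeeds or throws, so cleanup
// is written once instead of duplicated in try and catch.
async function loadWithCleanup(task) {
  let loading = true; // e.g. show a spinner here
  try {
    return await task();
  } catch (error) {
    return `fallback: ${error.message}`;
  } finally {
    loading = false; // cleanup happens in both cases
  }
}
```

Calling it with a task that resolves returns the task’s value; with a task that throws, the catch supplies a fallback, and in both cases the finally cleanup runs exactly once.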

You might eventually want more sophisticated error handling, such as a way to cancel an async function. There is, unfortunately, no way to do this natively, but thankfully, Kyle Simpson created a library called CAF that can help.

Further Reading

It’s common for explanations of Async/Await to begin with callbacks, then promises, and use those explanations to frame Async/Await. Since Async/Await is well-supported these days, we didn’t walk through all of these steps. It’s still pretty good background, especially if you need to maintain older codebases. Here are some of my favorite resources out there:


Ad measurement platform DoubleVerify has launched Authentic Performance, a new predictive ad solution for reporting expected ad performance based on real-time ad exposure and engagement data.

The solution analyzes more than 50 data points, including viewable time, share of screen, video presentation and audibility as well as engagement data at the device or ad level, such as screen orientation, video playback and audio control interactions.

Why we care

The aim is to give advertisers more data, faster, about the expected impact of their advertising spend by analyzing a breadth of metrics in flight. Advertisers can use the insights to identify how ads are working or underperforming, down to the impression level, and make adjustments accordingly.

“Current tools are either fast but simplistic — with metrics like viewability and click-through rate; or, they’re sophisticated but slow — with data reported post-campaign,” said DoubleVerify CEO Wayne Gattinella. “Authentic Performance is designed to provide predictive data that are correlated with digital advertising effectiveness — in real-time.”

More on the news

  • CPG giant Mondelez is among the customers testing the new solution, DoubleVerify said.
  • Authentic Performance builds on DoubleVerify’s existing ad measurement solution Authentic Impression, an MRC-accredited metric for viewability and brand safety.
  • DoubleVerify integrates with a number of programmatic ad platforms, including DoubleClick, AppNexus, The Trade Desk, Amobee, Tremor Video and more.

About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


Marketers face countless complexities and nuances when it comes to tasks such as attribution reporting, A/B testing and account-based marketing. Whether cobbling together data from different platforms to create a report or spending hours analyzing performance, these challenges can add up to frustrating experiences and under-utilized martech systems.

Marketing platform provider HubSpot has added several key features to its Marketing Hub Enterprise offering in hopes of providing a more user-friendly experience for enterprise users. The new features include revenue attribution reporting, intelligent A/B testing, partitioning and account-based marketing tools, among others.

Why we care

Revenue attribution has plagued marketers for years, especially as new digital channels have come into play. HubSpot’s interface and design are configured to help marketers identify which touchpoints and channels drive the most value from their efforts.
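HubSpot doesn’t detail its attribution models here, but the basic idea behind multi-touch attribution can be sketched with the simple “linear” model, which splits each conversion’s revenue evenly across the touchpoints that preceded it. The function and journey data below are hypothetical illustrations, not HubSpot’s implementation:

```python
def linear_attribution(journeys):
    """Credit each channel under a linear multi-touch model: every
    touchpoint in a converting journey gets an equal share of the revenue."""
    credit = {}
    for touchpoints, revenue in journeys:
        share = revenue / len(touchpoints)
        for channel in touchpoints:
            credit[channel] = credit.get(channel, 0.0) + share
    return credit

# Two hypothetical customer journeys and their order values.
journeys = [
    (["email", "search", "social"], 90.0),
    (["email", "search"], 60.0),
]
credit = linear_attribution(journeys)  # email and search each earn 60.0
```

Other common schemes (first-touch, last-touch, time-decay) differ only in how that per-journey share is weighted.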

Source: HubSpot

Traditional A/B testing requires a heavy lift from the end user and can pose risks. It is often a tedious process that ultimately relies on the marketer to determine the outcome, and it wastes conversion opportunities on customers who see the losing variation during the test.

“Adaptive testing solves both problems,” said HubSpot CMO Kipp Bodnar. “It uses reinforcement learning, a type of machine learning, to continuously optimize the traffic distribution of a test based on conversion rate. And, in the end, it picks a winner for you. No need to manually check-in. No need for a statistical significance calculator. As an added bonus, because it shows the best-performing variation to the user more often over time, it improves the customer experience, too.”
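HubSpot doesn’t disclose its exact algorithm, but one standard reinforcement-learning technique that matches Bodnar’s description — continuously shifting traffic toward the better-converting variation — is Thompson sampling. A minimal sketch with hypothetical variation stats:

```python
import random

def choose_variation(stats):
    """Pick the next variation to show via Thompson sampling.

    stats maps variation name -> (conversions, views). Drawing from a
    Beta posterior routes more traffic to better-converting variations
    while still occasionally exploring the others.
    """
    best, best_draw = None, -1.0
    for name, (conversions, views) in stats.items():
        # Beta(successes + 1, failures + 1) is the posterior with a flat prior.
        draw = random.betavariate(conversions + 1, views - conversions + 1)
        if draw > best_draw:
            best, best_draw = name, draw
    return best

# Hypothetical running totals: B converts at ~5%, A at ~1%,
# so B should win the vast majority of the next 2,000 picks.
stats = {"A": (10, 1000), "B": (50, 1000)}
picks = [choose_variation(stats) for _ in range(2000)]
```

Because the Beta draw is random, the weaker variation still receives occasional traffic, which is what lets the test keep learning without a manual statistical-significance check.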

More on the news

  • “There are no data requirements to use Adaptive Page Testing,” said Bodnar. “Pages with higher website traffic will have better results, as you might expect with a data-driven ML model.”
  • ABM and asset partitioning features are currently in beta.
  • HubSpot also increased limits on workflows, lists and more to expand capacity for its users.

About The Author

Jennifer Videtta Cannon serves as Third Door Media’s Senior Editor, covering topics from email marketing and analytics to CRM and project management. With over a decade of organizational digital marketing experience, she has overseen digital marketing operations for NHL franchises and held roles at tech companies including Salesforce, advising enterprise marketers on maximizing their martech capabilities. Jennifer formerly organized the Inbound Marketing Summit and holds a certificate in Digital Marketing Analytics from MIT Sloan School of Management.


Spotify has rolled out a new suite of marketing tools for podcast advertisers, making it possible to view impressions, frequency, reach and audience demographic information for podcast ads, the company announced Wednesday.

The new metrics tools are supported by Spotify’s new podcast ad technology, Streaming Ad Insertion (SAI), which utilizes data from the platform’s logged-in audience in tandem with its streaming audio service.

Source: Spotify.

New insights. Spotify is offering podcast publishers and advertisers data on real-time ad impressions, reach (the number of unique listeners who heard an ad), frequency (the number of times a listener heard an ad) and anonymized audience information such as age, gender, device type and listening preferences.
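Reach and frequency are simple aggregates of the impression log: reach counts unique listeners, and frequency is impressions divided by reach. A hypothetical sketch (the log format and function are illustrative, not Spotify’s API):

```python
from collections import Counter

def ad_metrics(impressions):
    """Compute podcast-ad metrics from (listener_id, ad_id) impression
    events: total impressions, reach (unique listeners) and average
    frequency (impressions per unique listener)."""
    per_listener = Counter(listener for listener, _ad in impressions)
    total = sum(per_listener.values())
    reach = len(per_listener)
    return {
        "impressions": total,
        "reach": reach,
        "frequency": total / reach if reach else 0.0,
    }

# Hypothetical log: u1 heard the ad twice; u2 and u3 heard it once each.
log = [("u1", "ad42"), ("u2", "ad42"), ("u1", "ad42"), ("u3", "ad42")]
metrics = ad_metrics(log)  # 4 impressions, reach 3, frequency 4/3
```

Logged-in streaming is what makes the listener IDs available at all; RSS downloads only reveal that a file was fetched.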

The anonymized demographic data first arrived for podcast publishers in August 2019.

Why the data is available now. Spotify requires users to log in, providing the company with basic demographic data on the listener. The shift in audience preferences from downloading episodes via RSS feeds to streaming them has enabled companies to extract more information on listening habits.

Why we care. The podcast sector has grown by leaps and bounds, but for all its popularity, the lack of targeting and reporting data has kept many brands from investing in podcast ads. 

Now that advertisers and publishers can get more precise information on whether their ads are being heard, who is listening and how many people are listening, the medium is more transparent and more likely to attract new advertisers, which may help sustain the sector’s momentum.

About The Author

George Nguyen is an Associate Editor at Third Door Media. His background is in content marketing, journalism, and storytelling.


Mailchimp has released its first annual report since 2016, highlighting a number of key milestones for the email platform turned all-in-one marketing platform. The company claims a 60.51% share of the email industry, supporting over 12 million customers. Its closest competitor comes in at only 9.52%.

Why we care

Mailchimp — along with a number of other email providers built to serve small businesses — added more marketing features to its platform last year, but the report indicates that email continues to be the platform’s strongest feature. Email isn’t dead; it’s evolving.

The volume of emails sent from Mailchimp in 2019 — over 340 billion — draws attention to the strength of the channel and the email market as a whole. The company’s reporting on its Thanksgiving weekend performance offers a glimpse at just how powerful the channel is. Nearly 9 billion emails were sent from the platform, generating $4.5 billion in orders.

More on the news

  • Mailchimp customers generated more than $321,538,039 in revenue using Mailchimp product recommendations.
  • The company has also launched its own content channel, Mailchimp Presents, which has generated over 1.5 million views and downloads.



It’s an understatement to say that gaming has become a juggernaut in the realm of entertainment. So much so that in 2018, global gaming revenue was $134 billion, dwarfing the combined worldwide revenue from movie box office ($41 billion), video streaming ($36.6 billion) and music ($19 billion). 

So why has gaming become the number one source of entertainment? Because it’s more than just entertainment. Gaming taps into deep-seated human motivations.

“We’ve been playing games since humanity had civilization – there is something primal about our desire and our ability to play games. It’s so deep-seated that it can bypass latter-day cultural norms and biases.”

– From Reality is Broken by Jane McGonigal

As Jane McGonigal points out in her book, games draw us into their world, challenge us to be creative and adapt to rapidly changing situations, and reward us with a sense of accomplishment, personal mastery and autonomy.

The question is: does gamification have a place in the world of business?

The answer is yes, especially in situations where an organization is trying to make a cultural change stick.

Marketers are hard-wired to achieve and be part of something greater than themselves. Therefore, gamifying employee experiences that educate, motivate and shape behaviors is a natural extension of our increasingly gamified culture.

But to do it well, organizations must consider:

  • Is the experience designed to drive action among employees? 
  • Is it linked to a greater organizational purpose?
  • Do the rewards create an enhanced sense of accomplishment?
  • Is it truly fun?

Taken in this light, gamification can truly deliver on its promise to engage employees and serve as a catalyst for personal growth and cultural change. Game on.

Soapbox is a special feature for marketers in our community to share their observations and opinions about our industry. You can submit your own here.

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

As Chief Innovation Officer (CInO) of Gongos, Inc., Greg is charged with accelerating the future of everything – from trends and foresights to product innovation and development, to the company’s growth and performance. Greg thrives on exploring societal and technological shifts that point to disruptive ways to create value for consumers and resilience for organizations. Greg leads the company’s Innovation Think Tank – a cross-generational team that fosters a culture of innovation and guides long-term strategy in shaping the decision intelligence space. A former research practitioner with over 20 years of experience under his belt, Greg is a visionary at heart. He believes our industry is in the midst of a revolution, and plans to help pave the way. He holds an M.A. in Humanistic and Clinical Psychology from the Michigan School of Professional Psychology, and a B.S. in Industrial Administration, Marketing and Finance Concentrations from Kettering University.


Business rich communication services (RCS) messaging has been waiting for its turn in the spotlight for well over a decade now. 

The long-awaited replacement to short message service (SMS) was first conceived back in 2008 but the progress of the new messaging platform has been stuttering at best. Many would call it an utter shambles.

For consumers at least, RCS is now available to all Android users in the U.S., as Google rolled out the new messaging platform in December.

But where does this leave businesses and organizations that want to use RCS to market and communicate with their customers?

What is business RCS messaging?

RCS is the evolution of the clunky and outdated text message. SMS is over 25 years old and, despite being an incredibly powerful and responsive channel, its replacement is well overdue.

With RCS you can send images and video and share files. The new protocol supports delivery and read receipts. Essentially, it does everything you’d want in a messaging app.

It doesn’t have any killer features to trump other apps but has the distinct advantage that it works in the same way that SMS does. As long as you have a mobile number, you’ll be able to send an RCS message.

At least, that’s the theory.

Business RCS faces multiple challenges

This leads us to the long list of problems that RCS faces before it can become a viable alternative to SMS.

Marketers are longing to get their hands on it. It could transform the way they communicate with their customers. But until all the issues are solved, RCS could be a spectacular failure.

On evaluating its chances of success, Tim Green from Mobile Ecosystem Forum said:

“RCS is – for now at least – classic ‘vapourware.’ It’s something that might take off… at some point in the future… we’re not sure when.”

– Tim Green

It’s highly unlikely that Apple will ever adopt RCS business messaging

It’s becoming increasingly clear that Apple will not adopt business RCS in the foreseeable future. RCS competes with Apple’s own iMessage, and there is no reason Apple would want its iPhone users moving away from it.

In a recent interview, Nick Lane, from the UK-based research company Mobilesquared, said:

“In terms of supporting RCS business messaging, that won’t happen. Why would they want to do that?”

– Nick Lane

As an alternative to RCS, marketers will need to deliver a text message to iPhone users containing a link to a mobile-optimized landing page that replicates the content of the RCS message.

It’s going to be highly complex and expensive to implement. For most, the benefits will not outweigh these additional hurdles.

Business RCS pricing is still unresolved

Industry insiders have been demanding some sort of resolution on pricing for the past three or four years, and yet we’re still waiting for clarity on what sending a business RCS message will cost.

Nick Lane at Mobilesquared referred to the lack of progress as “groundhog day.”

A few pricing models have been suggested, but they have all been complex or unrealistic.

RCS is different from SMS in that it encourages ongoing communication, rather than a simple one-way push of information. An RCS message flow might include multiple messages and interactions.

So unlike SMS, you need a pricing model that allows for multiple RCS messages without the costs spiraling out of control.

One suggestion is a “cost per session”: once an RCS conversation has begun, you simply pay a flat fee for a set period, probably 24 hours.

Another more outlandish idea is to set the price of RCS based on the profit that results from the RCS exchange. Thankfully this idea seems to have quietly disappeared. It would have been unworkable and impossible to regulate.

RCS tokens

Mobilesquared appears to be taking the lead on pricing with its token-based proposal.

The suggestion is that you would purchase a block of RCS tokens from your RCS provider. Different types of RCS messages would use different numbers of tokens.

If you were sending a simple image, you might use one token, but if the message had a scrollable element or button options, you would use two.
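To make the token idea concrete, here is a hypothetical sketch of how such billing might work. The per-feature token costs and the rule of billing each message at its richest element are assumptions for illustration; the actual proposal has not been finalized:

```python
# Hypothetical token costs per message feature; the proposal only says a
# simple image might cost one token and richer elements two.
TOKEN_COST = {"text": 1, "image": 1, "carousel": 2, "buttons": 2}

def campaign_tokens(messages):
    """Estimate how many prepaid RCS tokens a campaign would consume,
    billing each message at the cost of its most expensive feature
    (the billing rule itself is an assumption for illustration)."""
    return sum(max(TOKEN_COST[feature] for feature in features)
               for features in messages)

# A three-message flow: plain text, an image, then a button-driven reply.
flow = [["text"], ["image"], ["text", "buttons"]]
tokens = campaign_tokens(flow)  # 1 + 1 + 2 = 4 tokens
```

Even this toy version shows the difficulty: the price depends on classifying every possible message layout into a small, fixed set of tiers.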

How this would work in practice is unclear, as there is an almost infinite number of ways an RCS message could be presented.

The pricing of business RCS is still in utter chaos, and without a universally agreed approach, RCS simply can’t be used.

RCS messaging is not encrypted and can lead to hacking

Surprisingly, RCS, unlike most other modern messaging apps, is not end-to-end encrypted. It uses the same rules and protocols as SMS.

Network providers can keep records of all RCS messages in a fully readable format and these could be accessible to anyone with legitimate legal access.

More recently, there have been troubling security concerns: German security company SRLabs demonstrated that RCS could allow hackers to access SMS and voice data. Sloppy implementation of RCS by both Google and carriers could allow messages and calls to be intercepted or altered at will.

SRLabs founder Karsten Nohl argues:

“RCS gives us the capability to read your text messages and listen to your calls. You’re going to be more vulnerable to hackers because your network decided to activate RCS. If you put out a new technology for a billion people, you should define the whole security concept.”

– Karsten Nohl

10 years on and we’re still in case study mode

As the years drift past, we still haven’t seen any examples where companies are using RCS as an integrated part of their customer communications.

There have been some very eye-catching case studies, notably Subway, which achieved a 140% uplift in sales with its RCS campaign in early 2019.

The latest RCS campaign, from Papa John’s in the UK, achieved a 23% uplift compared to SMS.

But these campaigns are all one-off marketing tactics. You send an RCS message, measure the sales and compare the results to your usual response rates. Then you dash off a press release trumpeting your success, which is gleefully published in the world’s marketing press.

But this is all a bit thin. We don’t see any companies that have taken RCS to the next stage of its development.

Until we see examples where a company has integrated RCS into all its other communications and tackled the complexity of providing a solution for iPhone users, RCS will remain on the fringes: promising potential, but never quite delivering.

Unsustainable response rates

The fast food RCS case studies have produced impressive response rates. But it’s unlikely that these sky-high results will be sustainable.

As users become accustomed to the new messaging format, response rates are likely to settle at levels similar to SMS.

Unless you believe that RCS messaging can increase the amount of pizza sold globally, we’re bound to witness a readjustment from the high levels of engagement that the early case studies delivered.

A long and uncertain road ahead for RCS

With the catalog of issues facing RCS business messaging, it’s hard to predict when and even if it will become a credible option for marketers.

Just one of these problems would be challenging enough, but with so many forces working against it, RCS may never emerge as a fully formed communication platform.

Optimism, goodwill and great case studies will only get the new channel so far. If RCS is to become a success, we need to see wide adoption in the next 12 months, or it may sink without a trace.

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

Henry is a two-time entrepreneur, founder and director of The SMS Works, a low cost and robust SMS API for developers. He’s been involved in SMS and mobile marketing since 1999 and helps companies of all sizes develop integrated mobile strategies. Henry also writes on a range of topics from mobile marketing to entrepreneurship.


Customers — especially end users — tend to be straightforward with store and customer service staff, as well as with regular vendor contacts, particularly when it comes to product feedback. That feedback doesn’t always reach the product management team, but it can be critical for identifying key product gaps and understanding real use cases, customer challenges and satisfaction.

Customer experience management provider Qualtrics has launched a new collaboration tool to help capture that feedback from employees interacting directly with customers. Frontline Feedback serves as a platform for customer experience teams to collect, analyze and make informed decisions related to their customers based on feedback from employees who are engaged with the account on a regular basis.

Why we care

“Call center teams or retail store staff, for example, are often in the best position to capture a broad range of customer experiences and interactions that help improve an organization’s product and brand perception,” Qualtrics said. These employees are typically underutilized despite their direct connections to customers and users.

Frontline Feedback provides a centralized location for organizations to compile, analyze and act on this data. By collaborating on accounts through a single platform, cross-functional teams can help close these feedback loops, and product owners can identify trending issues.

“An outside-in approach to optimizing your CX program engages employees to help organizations close customer experience gaps, improve product experiences, and deliver on their brand promise,” said Rupali Jain, Qualtrics director of product management for CustomerXM.

More on the news

  • Frontline Feedback is currently available to organizations using Qualtrics CustomerXM.
  • Key features include advanced text analytics that deliver insights and recommended actions for the appropriate teams.
  • Qualtrics, which was acquired by SAP in late 2018, published research [download required] indicating that individuals working closely with customers understand them best.


The latest version of SAFe from Scaled Agile Inc.

Scaled Agile Inc. has released SAFe 5.0 for Lean Enterprises, an updated version of its framework for building a “lean-agile” mindset across business functions, including development, compliance, marketing, IT operations and others.

Why we care

The latest release aims to help companies implement agile processes across the entire enterprise, putting the customer at the center — already a key theme for marketing organizations focused on customer experience.

“SAFe 5.0 helps organizations address technical and agility deficiencies, as well as strategic and tactical incompatibilities between the business and IT, to master the core competencies necessary to achieve true business agility,” said Dean Leffingwell, SAI co-founder and chief methodologist.

SAFe 5.0 includes two new competencies — Organizational Agility and Continuous Learning Culture — with an aim of helping organizations integrate across departmental functions and provide a roadmap for ongoing improvement.

More on the news

  • The framework also includes guidance to help determine a business’s current state of agility.
  • The initial version of SAFe launched in 2011 with a small group of early adopters who implemented the agile framework.
  • Scaled Agile claims 70% of Fortune 100 companies in the U.S. have SAFe-trained professionals on staff.




Why don’t we have any user interfaces in vertical format? Thoughts on applying vertical type to digital products.

Jihoon Suh

I’ve always found side projects to be fun, because they enable me to think outside the box about an existing solution, step into a field I’m unfamiliar with, and, most importantly, delve into design with no external constraints. After all, side projects are self-propelled (or jointly propelled, depending on the axis of collaboration), with few tangible outcomes expected. So it’s very important to have a motive for why you are doing a side project. It could be as simple as wanting to learn a new tool, to improve on existing tools, to dive into a problem you are passionate about, or to spice up your portfolio.

Since starting at Facebook, I have found it very difficult to allocate a significant amount of time to side projects. I often found myself too lazy to open my laptop after coming home from the office. Yet I felt I needed dedicated time to keep learning about different aspects of design.

This coincided with my growing interest in typography.

My interest in typography grew for two reasons. One was the periodic updates on Baram, a crowd-sourced type project by Yongjae Lee. As Yongjae shared his progress developing Baram, a vertical-native Korean font, I began to realize the level of complexity and constraints that type designers work through. Unlike Latin characters, Korean letters morph their size and proportions when combined into syllable blocks. Because of this, a typical Korean font requires 3,350 characters to be drawn: 2,350 Korean syllable variations plus 1,000 alphanumeric and special characters. Vertical text is an antiquated text directionality in East Asia, and I found the development of a typeface designed for vertical typography very intriguing.

Baram Font Family by Yongjae Lee

As a complete noob in graphic design, I bought books on Korean typography to study, then used my spare time on mini projects. I made my Korean resume in vertical format and designed small posters using the font.

Read more about them here

Korean typography books on punctuation, mixed characters, printing techniques, and type design

The other was my daily work in digital product design. Typography is one of the most important aspects of digital product design, because text is the main element of a digital product. It is found everywhere, from buttons, labels and navigation to content. The information architecture and layout of a digital product can depend on the density of its text. Designers often struggle to balance the aesthetics and functionality (legibility) of type in its medium.

“Why don’t we have any user interfaces in vertical format?”

Then I asked myself: why don’t we have any user interfaces in vertical format? Indeed, this was a very stupid question. Vertical text is supported by only certain East Asian scripts, and its popularity and presence are diminishing due to a lack of compatibility with alphanumeric characters in digital media. From a functional standpoint, text and its directionality should deliver optimum readability. We no longer read or write vertically on a daily basis, so vertical text has significantly lower legibility and readability. The directionality of text is an acquired preference, as different cultures and regions adopt different directionalities (e.g., Arabic is written right to left, RTL).

Textual Directionality difference between horizontal and vertical
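The directionality difference can be made concrete in code. This hypothetical helper re-renders horizontal lines of text in vertical-rl order, where each line becomes a top-to-bottom column and the first line sits rightmost, following the convention of traditional East Asian vertical writing:

```python
def to_vertical_rl(lines):
    """Render horizontal lines as vertical-rl text: each input line
    becomes a top-to-bottom column, and columns run right to left
    (the first line ends up as the rightmost column)."""
    height = max(len(line) for line in lines)
    padded = [line.ljust(height) for line in lines]
    rows = []
    for i in range(height):
        # Walk the lines in reverse so line 1 lands in the rightmost column.
        rows.append(" ".join(line[i] for line in reversed(padded)))
    return "\n".join(rows)

# "AB" and "CD" become two columns read top to bottom, right to left:
# C A
# D B
vertical = to_vertical_rl(["AB", "CD"])
```

On the web, the same transformation is expressed declaratively with the CSS `writing-mode: vertical-rl` property rather than by rearranging characters.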

Tracing the history behind the wide adoption of horizontal text, I noticed that Western influence in globalization has been a critical factor. Western innovations in communication tools such as printing, the typewriter, the internet, computers and mobile phones have been horizontal-native. This naturally led to the global adoption of horizontal text.

Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of the particular language which has become the medium of expression for their society. — Edward Sapir, American linguist

Interestingly, the directionality of language is fundamental to how people consume and distribute textual content. It shapes many aspects of communication, including but not limited to iconography, the form factor of the medium, and the layout of information.

Yet I felt an urge to try out whether the current digital medium could showcase vertical text. This seemed like unfamiliar territory, full of uncertainties. I called my ex-intern buddy Yanlin to get feedback on the idea. Over a video call, we talked about whether we could shape this stupid question into a side project. I was quite excited to pitch it. However, the scope was hard to define at such an early stage, and we agreed that the project could branch into many tangible outputs. By framing the work as an exploration, we settled on taking a very scrappy approach to deliver a website with a single thought-provoking message: can vertical type work in a user interface? The question lay in understanding when and where vertical type can be appropriate. This allowed us to set the tone of the project and stay away from other issues such as inherent text readability, human input device form factors, and the lack of character compatibility.

Okay, so… what is Vertically Works?

Vertically Works is a design exploration of East Asian typography in digital user interfaces, jointly led with yanlinma. The project is our attempt to take vertical text and see if it could work in digital products. We hope to understand whether there are cases where a vertical user interface could be appropriate in websites, applications, or even larger operating systems.

It has been a little over a year since our very first video call about this project. Part of the reason it took so long is that, as full-time designers, we found it hard to coordinate time to sync and to work on the project.

Hangout Calls with yanlinma: Seattle — San Francisco

Most of our conversations took place on bi-weekly calls, where we set priorities and planned monthly milestones for the project. The following are the key priorities that Yanlin and I divided up to deliver efficiently and track progress.

  • Project brief write-up
  • Translation (Mandarin/Japanese/Korean)
  • Vertical App mocks (several applications)
  • Logo/Branding
  • Documentation
  • Website Design
  • Signup Website Implementation
  • Final Website Implementation
  • Medium Blog write-up

We planned to create a website where we would showcase our attempts at questioning the appropriateness of vertical text in user interfaces. It would include our problem statement in different languages, along with design work and tools to help others rethink digital user interfaces in the vertical direction.

The branding of Vertically Works is very literal. As the name implies, Vertically Works means two things. First, we want to see if vertical text in a user interface works. Second, we want to gather various works (or attempts) that adopt vertical typography in vernacular digital products.

We needed a symbol-based logo for Vertically Works that would reinforce the vertical aspect and also work in non-English settings. We put together a set of vertical writing in Korean, Mandarin and Japanese, but the logo seemed quite bland as descending blocks. So we added the iconic hollow dot that replaces the period in vertical writing, embracing it as a supplementary graphical element to reinforce the vertical setting.

Design process of the logo

Logo variations

Using this logo, we also created a signup webpage to collect people’s interest in the project. The details of the project were obscured, so the signup page just communicated that the project was related to vertical typography.

The signup page. Note that we intended to launch this in spring of 2019.

We wanted to use vertically.works to showcase our journey toward answering questions about vertical user interfaces. As a first step, we adopted a vertical directionality for the vertically.works desktop layout, with a horizontal scroll from right to left.

Website blueprint

The website vertically.works is divided into four parts: Introduction, Redesigns, Tools & Documentation, and Teams. In this order, we walk visitors through our thought process behind Vertically Works. We started with a question that became the idea for the project. We then tried a couple of redesigns of existing digital products to understand vertical UI constraints and their validity, and we used those learnings to capture guides and provide tools for designing vertical UI.

Here are some of the redesigns exploring the compatibility of vertical text in mobile user interfaces for existing apps. The applications were randomly selected and are not meant to be taken literally; rather, they are meant to spark discussion around what works and what doesn’t.

After speaking with various people about this project, we quickly realized that there are many ways to extend it. This is only the beginning of Vertically Works.

We believe in collective intelligence.

With Vertically Works, we would like to encourage and empower more people to try designing and thinking about vertical user interfaces. The more we try, the more we will learn. As mentioned earlier, a dual-axis change in the directionality of textual content is a truly radical change in user experience, regardless of the medium. The scopes we want to explore further are listed below. These are big areas that would be super interesting to tackle, in random order.

  • N-shaped pattern for vertical text reading (akin to Z-shaped pattern)
  • Design tools to build vertical UI
  • Acquired learnability of vertical type
  • Feasibility of vertical UI in spatial environment
  • Human input devices for vertical typing
  • Hardware form factors for a vertical-native operating system
  • Definition of vertical-native applications
  • Design Framework for converting from horizontal to vertical
  • Animations and transitional elements in vertical UI

As a first step toward this, we recently had a new member, Jackie Chui, join the team. Jackie has been thinking about building design tools to make vertical UI design accessible and efficient. His first project on the team was building a Figma plugin to convert a regular horizontal text box into a vertical one.

For those who found this interesting and want to be part of our team, please visit vertically.works or @vertically.works and contact us! Like I said, this is just the beginning. We are always looking for team members to collectively design and explore ideas.

Wow, there it is. If you actually managed to read all of this, thank you so much. For the next round, I’d like to expand on the aforementioned ideas and elaborate on how each one can benefit Vertically Works.


You have worked hard to create that perfect UI.

Don’t let spelling mistaks ruin it.

Try SPELLL for free

Used by 1700 designers from these top companies and more

Get SPELLL & join them

Give Figma spelling and grammar checking superpowers

With a single click, you can make your Figma file free of mistakes by

  • Fixing every instnce of a typo
  • Ignoring every instnce of a typo
  • Adding words such as your brand name (SPELLL) to your dictionary so they won’t get reported as typos

Get SPELLL & live that typo free life


Supports 43 languages from around the world

Asturian, Belarusian, Breton, Catalan, Catalan (Valencian), Chinese, Danish, Dutch, English, English (Australian), English (Canadian), English (GB), English (New Zealand), English (South African), English (US), Esperanto, French, Galician, German, German (Austria), German (Germany), German (Swiss), Greek, Italian, Japanese, Khmer, Persian, Polish, Portuguese, Portuguese (Angola preAO), Portuguese (Brazil), Portuguese (Moçambique preAO), Portuguese (Portugal), Romanian, Russian, Simple German, Slovak, Slovenian, Spanish, Swedish, Tagalog, Tamil & Ukrainian.

Get SPELLL & use it with your language


  • Unlimited usage for 7 days
  • After trial limited to Figma files with 100 text layers or less


  • 7-day money back guarantee
  • Unlimited usage
  • License for 1 user
  • Fast and premium support

One time payment

Pay once and use it forever

  • 14-day money back guarantee
  • Unlimited usage
  • License for 1 user
  • Fast and premium support



What is that? You use Sketch/XD/Studio?

I am working on bringing SPELLL to these design tools too! Join the waitlist and get early access.


(Image credit: Sony)

However you feel about it, Sony’s not-particularly-new logo for the PlayStation 5 has definitely got people talking. Revealed this week at CES 2020, the new logo (above) is remarkable for its similarity to the PS4 logo, and indeed the PS3 logo – prompting some savage but hilarious reactions from the design community. 

Obviously it’s never going to make it into anyone’s list of best logos, and somehow we doubt that Sony even cares. It’s almost as if the designer simply opened up SONY_PS4_LOGO_FINAL.AI, deleted the ‘4’, typed in a ‘5’ and saved it as SONY_PS5_LOGO_FINAL.AI before going home early. And fair play to them.

That explanation’s just not cutting it for some people, though. And while we’re with Occam and his trusty razor when it comes to divining the process behind the PS5 logo, we can take a certain delight in this particular theory of how it came about:

Posted on Reddit by user DeBeard – we’re not entirely sure if it’s their work or not – this neat little animation surmises that the ‘5’ in the PS5 logo was created by copying, pasting, rotating and flipping the ‘P’. And then drawing out an arm on the top, because that’s definitely a lot less effort than hitting Backspace and pressing ‘5’.

That said, though, we’ve all occasionally found ourselves in a position where, due to lack of the right fonts, we’ve had to cobble together the odd glyph from bits of other characters, right? Right. And just maybe Sony’s been a little careless with its in-house assets, and someone really did have to make a ‘5’ out of the ‘P’. These things happen!

Sorry, but no. Finding ourselves with just a little too much time on our hands this morning, we decided to try it for ourselves. And while it kind of fits, on closer inspection you can see that no, it doesn’t quite fit enough. Take a look:

It's good but it's not right

It’s good but it’s not right (Image credit: Sony/Jim McCauley)

The top logo is the one made by flipping the ‘P’ and drawing out an arm to turn it into a ‘5’. The middle logo is the actual logo, with slightly more pleasing curves around the midsection. And at the bottom we’ve overlaid the two; the grey areas are where things don’t quite match.

So there you have it. Didn’t happen. Sorry about that. To make up for it, here’s a lovely tweet we found where someone remade the PS5 logo in the style of previous PlayStation designs.

PS5 logo with previous designs pic.twitter.com/ZsbHgvJh2P (January 7, 2020)



We typically start the month with a roundup of new tools and resources for designers, but with the start of a new year (and new decade), we thought a roundup of things to help you get more organized would be appropriate.

Some of these tools have been around for a while with features you might not be using. Other tools are on the new side and offer great functionality. How many of these tools are part of your kit? Which ones will you resolve to use this year?

Here’s what you need to get organized this month and start 2020 off right.


Dropbox is one tool that’s hard to live without. Not only can you use it to manage and share files, you can also use it to run presentations directly with Zoom conferencing or in Slack. Free plans are enough to get started, and upgraded plans provide greater storage capacity for individuals or teams.

Working from multiple locations with desktop sync and sharing client files are features that make this tool something I use every day.

Feature you need to be using: Shared link expiration dates. When you share files via link, set an expiration date to ensure files aren’t hanging out with access indefinitely.



Slack is probably a tool that you are already using, but are you making the most of it? Channels, hashtags, and integrations are the key to ensuring that Slack works for you in the way you need it. Take the time to set these up for an efficient, and organized, workflow across multiple teams.

Feature you need to be using: Sync Slack and your Google Calendar for real-time away statuses that work for you.

Cloud Libraries

We all work from a variety of locations—home, work, on desktops and laptops—so cloud-based libraries are a must. Save common files in a location that you can access from anywhere.

Feature you need to be using: Adobe Creative Cloud comes with a place to save libraries, but you can save and connect library files from any cloud-based tool.


Trello is a free organization and collaboration tool for just about any project. Think of it as a giant project checklist that allows you (or other team members) to keep an eye on anything from a website build to planning a trip. It works across devices and isn’t hard to figure out.

Feature you need to be using: Workflow automations with due date commands and rule-based triggers to make tedious processes happen on their own.

Google Keep

Google Keep is the notetaking app you always wanted. Take notes from any device—sync across all devices—and share or keep notes to yourself. You can take notes by typing, with photos or audio (and it will transcribe messages for you). The best part is this notes app is free and pretty much makes anything else you are using obsolete.

Feature you need to be using: Location- and time-based reminders keep you on task just when you need it.


Grammarly saves time and effort by checking your messages, everything from documents to website content to emails or social media posts, as you type. Use it to avoid embarrassing mistakes in your writing.

Feature you need to be using: Emojis help you track the tone of your message so that it’s on point and audience-appropriate.


ClickUp takes all your other apps and merges them into a single location and dashboard for easy organization. You can use it to manage your own workspace (free) or collaborate with teams (paid plan). There are multiple views—I’m a big fan of the list option—and templates help jumpstart using the tool.

Feature you need to be using: Use the messages option to create tasks or comments. Boom!

Filing System

Nothing beats a solid filing system. The key benefit of a system is that you store files and folders in the same way every time, making it easier to find things later.

I keep folders first by year. Within annual folders are folders by client name. Then by project name. When projects are complete, I end up with two folders: WORKING and FINAL. Use the same format for naming files. (I use Client Name-Project-Year.)

Feature you need to be using: Date project files. Relying on “date modified” settings isn’t enough if you resave an old file by mistake.
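The naming convention described above can be sketched as a tiny helper. Everything here is illustrative, not part of any tool mentioned in this roundup: the function name and exact format are our assumptions, combining the Client-Project-Year scheme with the “date your files” tip so names sort chronologically.

```typescript
// Build a file name following the Client-Project-Year convention,
// prefixed with an ISO date so files sort chronologically and a
// resaved old file can never be mistaken for the original.
function projectFileName(
  client: string,
  project: string,
  year: number,
  date: Date
): string {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `${stamp} ${client}-${project}-${year}`;
}
```

For example, `projectFileName("Acme", "Website", 2020, new Date("2020-01-15"))` yields `"2020-01-15 Acme-Website-2020"`, which lands in the right place in any alphabetically sorted folder.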


Invoicely makes it easy to work as a freelance designer. The platform is made for sending invoices and managing clients, and it allows you to accept online payments. It’s secure and offers a free plan (as well as a paid option).

Feature you need to be using: If you are trying to get organized, time tracking tools help you know just what an individual client costs. You can enter time, expense per client, and mileage so you can get a realistic picture of revenue by project.


HelloSign is for anyone dealing with documents that need signatures. Send and sign online with a platform that’s secure and easy for users to understand. Plus, you can sign items right from common tools such as Gmail or other G-Suite apps.

Feature you need to be using: Store all your signed documents in the interface so you can find them later. (HelloSign will also automatically send reminders if someone hasn’t signed a form.)

Traditional Planner + Online Calendar

Pair a paper planner with your online calendar to keep track of tasks (paper planner as a checklist) as well as events and appointments (online calendar). Daily deadlines are best managed when you can jot them down and check them off throughout the day. Plus, that note is right in front of you to stay focused.

Feature you need to be using: Try a weekly paper planner, tear off sheets, or a dry erase board for task management that doesn’t seem overwhelming.


WeTransfer makes sending large files a lot easier. There’s nothing worse than a file getting lost in cyberspace because it’s too big for email. WeTransfer allows you to send and receive big files with just a click. (And you don’t have to have an account to download files.)

Feature you need to be using: Integrate WeTransfer with other tools such as Slack, Sketch or Chrome for direct sharing from wherever you are working.


JotForm is the ultimate tool for creating any type of online form, from simple surveys to signups to payment collection or image uploads. The service has free and paid plans, depending on usage, and everything is customizable, so forms can be branded with ease.

Feature you need to be using: PDF Templates are ready-made forms for everything from a simple invoice to contracts or photo waivers. Start with a PDF and tweak as you need. Plus, you can set it up to be filled out digitally and returned to you. This is a huge timesaver, and you can save custom forms in your account to use over and over again.

Featured image via Unsplash.


The start of the new year is a time for big-picture thinking and deciding what our priorities are for the months ahead. The design and tech worlds are on the precipice of major ideological changes and we are facing fundamental questions about our place in the world, and the way we interact with each other. If 2019 was marked by upheaval and unexpected challenges, what does 2020 have in store and how do we prepare for the decade ahead of us?

We wanted some expert perspective on the landscape, so we reached out to a diverse set of creatives to hear what’s on their mind, what they’re excited about, and their take on the issues that feel most important right now. In many of the responses there is a clear focus on getting back to basics and “cutting through the noise,” as Emerline Ji of CreativeMornings put it. There is a palpable sense of urgency, and a push to do some soul-searching about our sense of accountability in a time of unpredictability and flux. Echoing this instinct for action is Rachel Newell of Rapt Studio, who put her main priority simply: “show up and do the work.”

Read on to see how some plan to tackle the year’s big questions about purpose, sustainability, and how to balance creativity with compassion.


‘It won’t be enough for startups to just be the hot new kid on the block.’—Emily Heyward, Co-Founder and Chief Strategist at Red Antler

“There’s a massive shift taking place, with large companies looking to ‘behave like a startup’ and trying to find ways to drive true product innovation and protect themselves from being unseated by all the new players in their categories. With legacy brands looking to stay ahead by ‘disrupting themselves,’ it’s only going to get more competitive and it won’t be enough for startups to just be the hot new kid on the block. Startups are going to have to take even more risks to stand out and reinvent what it means to bring a brand into the world today. Ultimately, the winners will be the companies who find new ways to deliver true value to their consumers, and it will be less about the origin story.

“Related to this, I resolve in 2020 to remove ‘direct-to-consumer’ from my vocabulary. It’s an oversimplification and only captures a piece of what makes new businesses succeed.”

‘Let’s use our creative vision to imagine what the world could be if we could solve some of our problems without creating new ones.’—Ari Kuschnir, founder and managing partner at m ss ng p eces

“We don’t have time for normal social evolution, so we have to do the interpersonal work to show up fully. We have to deal with collective challenges by bringing our creative gifts forward without projecting the worst aspects of ourselves into our creations. Storytelling has been a technology that enables us to imagine new and better versions of the world. Let’s not default to dystopian/utopian narratives or nostalgia, but use our creative vision to imagine what the world could be if we could solve some of our problems without creating new ones. That’s what 2020 is for me–a time to up-level your creative game and create stories that activate new collective possibilities.”

‘The need for grounding and guidance has never been more vital for creativity.’—Rachel Newell, Head of Strategy at Rapt Studio

“In a polarized, fast-paced, and unpredictable world, the need for grounding and guidance has never been more vital for creativity. A sustainable 2020 will require us to not only show up for work, but to ‘show up and do the work’ – on ourselves, and with our clients.

“In 2019 Rapt Studio defined a program for ‘psychological safety’ in our workplace, and we’ve extended this framework to client relationships. 2020 will see a focus on non-linear thinking, helping people to work smarter and embrace the ‘work in progress.’”

‘It’s time to return to the foundational practices of storytelling.’—Emerline Ji, Senior Manager of Brand Communications at CreativeMornings

“It was refreshing to see a handful of companies roll out beautiful and thoughtful editorial content initiatives and campaigns in 2019. We’re seeing more businesses harness the power of generous content to highlight the creativity and genius of their people.

“In 2020, I’m excited to see content producers realize it’s time to return to the foundational practices of storytelling in order to effectively cut through the noise. What do you stand for? What is your unique point of view? Is there a narrative arc? Do the pieces fit? At CreativeMornings, we’re always asking these questions to better serve and celebrate our community.

“This is a reminder to myself and others like me: You are a storyteller first. Never stop learning and experimenting. Protect your inventive spirit.”

‘Reimagining futures and establishing self-reliance.’—Sam Valenti, founder of Ghostly International

“I think 2020 and the decade ahead is going to be watershed for artists reimagining their futures and establishing self-reliance with smart partnerships and nurtured communities.”

‘A more equitable path forward.’—Yancey Strickler, cofounder & former CEO of Kickstarter, author of This Could Be Our Future: A Manifesto for a More Generous World

“2020 will be the year that ‘values’ (humanities word) begin to push back against the dominance of ‘value’ (economics word). Over the last half-century we’ve assumed that the right answer to any decision is whichever option makes the most money. This has made people on top cash-rich and society at large values-poor. But this is the year the tide breaks against the dominance of money and we begin to see a fuller spectrum of value enter the consciousness for the first time: the climate, social cohesiveness, loyalty, purpose. New kinds of decisions based on new kinds of metrics will be a growing norm and a more equitable path forward.”

‘Are there ways in which the democratization of design appreciation can lead to the (responsible) democratization of design itself?’—Monica Khemsurov, cofounder of Sight Unseen

“What’s going on in the world right now feels so overwhelmingly negative and out of control, and selfishness, greed, and inequality have a lot to do with it. Amidst all of these crises, materialism and luxury feel less relevant at best, and problematic at worst. I’m curious to see if that has any effect on the design industry this year. My hunch is that it won’t, since much of the industry caters to the wealthy, and they tend to live in a bubble. Since people are so anxious right now, they need the comfort of beauty and creativity more than ever. But at the same time I feel like we need to be actively involved in enabling change. I’m not sure yet what that means in practical terms.

“Second, I’m wondering if the generic ‘look’ that so many brands have adopted will continue in full force. So many brands and Instagrams look the same right now, and I’m very curious about how that particular pendulum will swing.

“In the next year and in the next decade I’m looking forward to seeing what happens as the trend towards people being more excited about the world of design in general continues to evolve. What if people start spending less on fleeting fashion items and more on sustainably produced objects and furniture they can live with until 2030 and beyond? Are there ways in which the democratization of design appreciation can lead to the (responsible) democratization of design itself? I hope so!”


Typography is getting more and more varied these days: new trends and styles are added regularly, while the basic roman and italic types remain all the rage. The abundance of techniques has become so immense that anyone can try their hand at ‘classic’ or experimental typography.

Alternative or experimental typography sets aside the font’s primary role and isn’t necessarily the work of a type designer. Instead, it welcomes illustrators, motion designers, digital artists, and masters of applied arts to show off their skills via a custom font or an alphabet — without sticking to the rules of type geometry if they don’t feel like it.

Such sets of letters (which is a more exact definition) can be used just like display typefaces: to build a strong visual effect in print production, branding, logotypes, apparel, and stationery design. Even stand-alone letters are incredibly engaging!

Finally, experimental typography is a great way to enrich your creative portfolio. Depending on your specialization, you can design an animated font or an SVG font — or discover 3D typography.

3D design and 3D fonts are a huge trend for 2020 — so if you are good at Cinema 4D, it’s something you should give a go. By creating extraordinary shapes and textures, you can produce stunning compositions and build entire scenes. Alongside conventional 3D modeling, there are also letters made of wood, metal, and even food, which can take you to the next level of artistry.

Such types and sets of letters help an illustrator apply maximum creativity and artistic skill. All in all, there are many drawing techniques that go well with typography. Low poly art, vector fonts — you can even end up creating your own visual language, similar to Matahari.

Experimental typography is not only for artistic entertainment. In October, a group of social activists from Russia presented Dobroshrift.

This is a Cyrillic typeface drawn by children with cerebral palsy, and behind every letter is the story of a particular child living with the disorder. The Font of Kindness is also available in the Latin alphabet.

Kinetic Typography & Animated Fonts

Everyone bears inside a love for a fine animation loop in the style of Markus Magnusson — and a passion for GIFs, too. Motion masterfully combined with type can both boost your portfolio and help you do something new in After Effects. Whether it will be an animated font or a piece of kinetic typography (a great technique for capturing attention) — you decide!

9 Ridiculously Good Examples of Kinetic Typography

by Pavel Paratov, Luis Miguel Torres

by Jeroen Krielaars, María José Torrero Heredia, Animography


Hello Jane, tell me a bit about yourself.

I’m the Head of UX Research for Zapier, which is an all-remote company that’s dedicated to helping anyone and everyone offload the repetitive parts of their work. You’ll sometimes hear this referred to as workflow automation or app integration or a number of other terms, but the way I think about it is that we help humans spend time on the things that humans do best, and let computers take care of the rest.

My main focus is building out the UX Research program at Zapier and getting my team the resources they need to keep doing stellar work, informing company strategy. I love the place we’re at right now, because we have a huge amount of talent and capability, and we’re rapidly creating infrastructure to democratize research internally and start pushing our limits as a research team when it comes to producing foundational work.

The reason I got into management is because I’m driven by solving problems that are too big for a single person to take on themselves, and I design my teams with that in mind. Each person on the team might be working on a different research project, but they all ladder up to a single, overarching goal that will help Zapier make the best possible decisions as a business. My passion is building and leading teams that contribute meaningfully (and in measurable ways) to company success by helping align our product decisions directly to delivering value to our users.

Tell me about your design journey. How did it all start?

This was entirely accidental – I was actually going back to school to get my degree in Library Science. Hilariously, I managed to live in the Bay Area for four years without learning anything about design as a career option – I didn’t really know anything about tech, and I didn’t know that many people who worked in the tech industry. The ones I did know were people I knew through big art projects or weird street theatre, so we didn’t really talk about our work. I realize how strange that sounds to be writing in 2019, but it was 2007 – the Mission was still mostly full of hipsters, and I could still afford a studio apartment on a non-profit salary. I had no idea that there was a way for me to make a living that let me combine all the different aspects of my previous jobs that I liked best. My journey to design, and to research specifically, is really a story of some very lucky forks in the road in my career path.

So, back to how I got here: before I went to get my Master’s degree, I was working as an event planner, but prior to that, I spent several years working in libraries, starting in college, and continuing in my adult life. After four years of working a ton of nights and weekends as an event planner, I was ready for a change, and I really loved a lot of aspects of the library work I did, particularly book preservation and cataloging work.

However, as luck would have it, I went to the University of Michigan School of Information for my Master’s program, which has a lot of different specializations you can choose from, rather than just being a Library Science program. I started taking courses in Human-Computer Interaction and realized it brought together a lot of my disparate interests in a single field. If you look at my career history (librarian, event planner, design researcher), the thing all those jobs have in common is the organization of information in the service of human goals. There are skills from both event planning and librarianship that I use every single day of my life, like putting strangers at ease or categorizing information.

I also didn’t initially plan to go into research. My first job in design was as an interaction designer. I started as an intern at JSTOR, and they kept me on for the rest of my time in Michigan. Once I graduated, I was up for two roles at BitTorrent and had to make a choice between them: either an IC interaction designer or the UX research lead. That choice made me reflect on what I did (and didn’t) like about the different roles, and made me realize that research is where my heart has always been – I’m at my best when I’m operating in a complex problem space and helping others translate that knowledge into implementation. Once we get into solutions, I’m just not the best person for the job. And I like to have at least a shot at being the best person for the job.

Before being a Head of UX Research at Zapier, you were the Design Research Manager for Growth at Dropbox. Tell me about that.

While I spent most of my years at Dropbox managing the Growth Research team, I actually started there as an IC, which turned out to be invaluable for building cachet with product teams. There are many ways to do this without actually conducting the research, but it was early in my career, and I think it was the best way for me personally at the time. Being able to speak with authority that derived directly from experience in the field with our users was crucial to building bridges, and it laid the foundation for extending that cachet to my team once I moved into management.

I actually started there [Dropbox] as an IC, which turned out to be invaluable for building cachet with product teams.

My main focus was to create a working model for the team that enabled us to work on high-level, strategic questions relevant to all of Growth, while still ensuring that individual product teams had the access they needed to research. We went through multiple iterations of this before getting to a place where it was starting to work the way I envisioned, but I am pleased that throughout it all, we managed to tackle high-impact, foundational questions that informed a lot of future work.

The project I always love talking about is one that blended IC and management work for me. A couple of years into my time at Dropbox, Growth went through a re-alignment of our strategy that entailed looking at crucial points in the user journey and developing a more complete understanding of them, with an eye towards redefining our key metrics based on our users’ goals. I led my team in conducting four separate streams of work that rolled up into an overall picture of our user journey and the key moments in it. We were able to uncover insights that immediately unblocked teams but also provided a well for them to draw from for years afterward. Even as I was leaving Dropbox, teams were running experiments based on ideas they had generated from that research two years earlier.

There are a lot of reasons to move on to the next thing, but the one that always gets me is when I feel like I’m no longer solving new problems. Towards the end of my time at Dropbox, I felt like I was solving the same problems over and over again, without really developing new skills. And right around that time, Zapier reached out to me with this absolutely incredible opportunity to come lead the UX research team at a time when it was producing amazing work but needed someone to come in and help it turn its huge amount of potential into a well-oiled machine.

You’re now leading the UX Research team at Zapier. What kind of challenges do you face with your team?

At this moment, I’m actually managing both UX Research and Content Design, although we’re still deciding whether that’s the right structure in the long-term. I’ve got two IC researchers, as well as a Content Designer, all of whom are the kind of high-performing, exceptionally talented people that you have to be wildly lucky to inherit as your team. I had backfill for another researcher available when I started but made the argument to our Chief Product Officer (my boss) that we needed to hire an Operations person before we scale the team any further. To that end, I just hired an incredible Research Operations Program Manager to come in and develop infrastructure and help us scale our work.

In 2020, I’ll be hiring another 2-3 IC researchers, another 2-3 writers and content designers, and another research operations person. In short, we’ll be more than doubling in size over the course of the year. That’s why I was so adamant about bringing on Research Operations now – it’s the only way to scale that rapidly while having an ongoing impact at the level of company strategy.

At the moment, a lot of my day is spent building bridges between UX Research and other functions, as well as ensuring that UX Research and Content Design are focusing on the highest-impact projects. One of the key things I was brought on at Zapier to do was to help bring together all the different insights functions from across the organization so that we’re leveraging each other’s work and collaborating on the big questions. As part of that, we’re working on rolling out something called “Insights Pods”: malleable groupings that consist of, at minimum, a Decision Scientist, a Data Engineer, and a UX Researcher, with other functions like Support Specialists, Content Designers, and Product Marketers joining as the work requires. These pods work together on key company initiatives. In an ideal world, these pods will ultimately create a shared insights roadmap aligned directly to the initiative they’re working on, with all the different insights functions contributing to each item on the roadmap. Of course, it’s easy to talk about ideal worlds when we haven’t rolled it out yet, so we’ll see what happens.

What projects are you currently working on?

We just released Zap Sharing in beta to our paid users. This lets users share their Zaps with others via a link. I love this project because it gets to the core of one of our company values, build the robot, don’t be the robot, and takes it further by letting users leverage work that’s already been done. So now, users only need to build the robot once, and everyone they share it with can benefit from it.

I love this project because it gets to the core of one of our company values, build the robot, don’t be the robot.

The next step in this is public Zap Sharing, which will let users make their Zaps findable not just through Zapier, but via a search engine as well. We’ll be releasing that in 2020, and I’m really excited about what it opens up for our users, and about the new channels into our product it creates for potential users.

You’re the expert in UX design research. How do you conduct user experience research for maximum value?

Something I say to my team with probably annoying frequency is that the purpose of a research team isn’t to conduct research, it’s to have an opinion. More specifically, it’s to have an opinion that helps the company make better decisions than it otherwise would. Over the years, I’ve shifted my teams entirely off conducting evaluative work. We focus solely on producing research that teams can draw from for years to come — like the insights on the Dropbox user journey that I mentioned earlier. My team at Zapier embeds directly on our key company initiatives, which span multiple EPD zones and product teams. Our work all rolls up directly to Zapier’s long-term goals, which is how my team can ensure we’re producing the most value for the company.

Research that in-house teams conduct should have a lifespan of years, not weeks or months. That means focusing on the complex, foundational questions. If you create that well of insights, teams will be able to return to it over and over again. Contrast that with evaluative work, which takes almost the same amount of lead time, but has a lifespan of a few weeks.

This isn’t to say that evaluative work isn’t a necessary part of product development. Product teams should be conducting as much of their own evaluative and even more generative work as possible. Not only does that ultimately speed up the product development process, since teams aren’t sitting there waiting for research to come back with usability findings, but it also connects teams more deeply and directly to their users, meaning they’ll have a stronger sense for what users need and want, so they’ll make better design and product decisions as they go.

This shift to only conducting foundational research means two things:

  1. Research Operations is a must-have function because we need to democratize and scale the ability to conduct research to everyone at Zapier.
  2. We need to develop some way of bringing more junior folks in and training them up.

I’ve already spoken about Research Operations and how important it is, but let’s spend a little time on that second point. Research is a relatively small discipline, and it’s important to me that as I’m building a senior team, I’m finding ways to create opportunities for people with little or no research experience to get into the field. The normal way that would happen is to hire junior researchers that partner with senior researchers in one of our product zones and have them take on the less complex, more evaluative projects. But since my team doesn’t take on any projects that aren’t complex, we need to find new ways to do this.

We need to develop some way of bringing more junior folks in and training them up.

I’m going to be piloting an approach to this in 2020, where we have a centralized junior researcher on our team. They’ll embed with a product team for the duration of a project, and then move to another team that needs some specialized research support. That way, they’ll get hands-on experience while still being part of a larger team that provides mentorship and helps them build their craft skills. Ideally, this will eventually turn into a kind of in-house apprenticeship program, where we’re providing opportunities to the next generation of researchers and creating a strong internal pipeline for our foundational research needs.

The other thing I’ve been thinking about a lot along these lines is building a team not based on methodological expertise or even the level of the researcher, but instead thinking about the jobs that research teams do that aren’t conducting research. I’m pretty sure there are researchers out there who would glare at me for saying this, but the least important thing that in-house researchers do is actually conduct the research. What we do is help companies translate the information we get from research into better decisions, and we do that in a lot of ways: through socialization, facilitation, thought elevation, subject matter expertise, and user connection.

There are a million different ways to get the same information in the course of a research project. I’m not concerned with finding one researcher who’s expert in one methodology, and another researcher who’s expert in a different one. I’m concerned with finding people who know how to get the information the company needs, and ensuring that the company is able to use it effectively.

As I’m building out my team, rather than saying, “I think we need a mid-level researcher here,” or “We could use a diary study expert here,” instead I’m working with my existing team and Zapier as a whole to understand what strengths we already have, and what additional ones we want to bring on board. To me, being able to conduct high-quality research is the baseline we start from. I take that as a given for anyone I’d hire. So I’m interested in how researchers ensure that their work has an impact, how they help the company translate it from ideas into action, and how they give it longevity.

I can teach anyone to talk to humans. What I want from my team is to be told how to think about them. How they get there is up to them. It’s what they do with it afterward that I care about.

How do you conduct your research and user testing?

Right now, we’re an all-remote research program, which has fit well with our company’s strengths as an all-remote workforce and has enabled us to move incredibly quickly and do a lot with very few resources. As we’re evolving the discipline at Zapier, we’re going to be spending more time in the field. There are certain things you get from fieldwork, not just from a research perspective but from a product development perspective, that are incredibly valuable, and I want to make sure we’re equipped to do that work when it makes sense.

One of the most wonderful things about joining Zapier has been getting a crash course in remote synthesis (and all other things remote). My team is genuinely exceptional at this, and they have been patiently bringing me along. So much of my experience as a researcher has been with in-person synthesis, so this has been a chance to get outside of my comfort zone, push my own abilities, and learn a new skill.

A lot of apps become bloated with features in time. What would you recommend such products if they want to simplify/improve their UX?

“Simplify” is such a loaded word these days because it can get conflated with “go minimal” or “remove things.” Simplifying the UX isn’t a good end goal, in the same way that efficiency isn’t a good end goal; they’re tools for achieving an end goal. But there are definitely situations in which the purpose of the product has been convoluted, or the UI is trying to do too many things at once. Once a product team has found itself in that situation, I start with a single question: How is this helping the user achieve their goal?

Simplifying the UX isn’t a good end goal, in the same way that efficiency isn’t a good end goal; they’re tools for achieving an end goal.

I like this question for a few reasons: it forces you to have a very clear understanding of your users, the value your product is supposed to be providing, and whether or not the feature is contributing to it. You can ask it at multiple elevations, from the overall product down to each individual interaction. And asking it every time you’re considering a new feature or a product change can help you stay true to your original purpose, ensure that you’re making good on the promise you’ve made to your users, and avoid feature bloat that dilutes or even entirely removes the value you originally provided to users.

It’s very easy to “value add” your way straight out of product-market fit, especially when you can make a case for a feature “adding value” to users. The question when you’re adding new features shouldn’t be, “Is this potentially useful to someone?” It should be, “How does this contribute to our users achieving their goals?” A lot of good products have gone sideways by trying to be too many things to too many people, while not ever actually nailing the one thing people wanted them to be in the first place. So you have to get crystal clear on what people need from you and ensure that everything new you put into your product is part of delivering on that original promise.

That doesn’t mean that you need to stick with a single job or a single product forever. But the way to innovate is to steadily add to your original value in ways that are adjacent to it. If you’re a note-taking app, the way to add adjacent value is to look at the steps in the workflow that come before and after the act of taking notes, and see if there’s an opportunity for you to support those, as well. It’s not to decide to also provide video conferencing because you noticed that people who take notes also have a lot of remote meetings.

If you’re a note-taking app, the way to add adjacent value is to look at the steps in the workflow that come before and after the act of taking notes, and see if there’s an opportunity for you to support those, as well.

Do you use any personal time management techniques like pomodoro?

I don’t use time management techniques per se, but I do keep a gloriously detailed, color-coded calendar that blocks off not just meetings, but project work, reading and writing time, and anything else that needs dedicated attention. This isn’t just for me; it’s also for the rest of my company – any time that looks open on my calendar is genuinely open, so there’s no need to check with me first.

I’m also a pretty efficient context-switcher, so interruptions don’t actually bother me. This is great for my productivity, but it means I have to be mindful of how and when I’m messaging my team. They’re the ones doing the deep thinking work, and need to not be inundated with messages from me on a million different topics. Part of my job is to protect their time, both from others and from me, so while I make myself available for interruptions as much as possible, I work hard to avoid interrupting them.

Part of my job is to protect my team’s time, both from others and from me.

The only other tool I consistently use for staying on top of my time is actually just keeping multiple browser windows open, each one dedicated to a different context. I have three main windows: one for my basic work tabs (Google Calendar and Trello), one for personal (Gmail, GCal, and Mattermost, which my husband and I use for our household chat), and one for whatever things I need to read that day. When I’m working on a project in a heads-down way, I’ll pull out just the tabs for that project into a new window and minimize everything else.

How do you manage your projects?

I manage my personal projects with Trello since most of them don’t have formal outputs beyond documentation. I keep both running to-do lists and project-specific boards on it. Other than that, Coda is the tool I use most since it allows you to sub-divide your documents and keep all the relevant materials in a single, visible location that’s easy to navigate around. It’s particularly great for research projects since you can keep all of the related work in the same place – research plan, interview guide, recruiting screener, notes, findings, etc.

What excites you the most about design in 2020?

What a great question! There’s a lot I’m seeing in terms of the developing maturity of organizations’ understanding of what design can do. I’m particularly amped about the attention that Design and Research Operations are getting as disciplines, and am excited to see smaller and smaller companies starting to invest in them.

I’m also starting to see a growing desire on the part of designers and researchers for measuring their impact and really understanding the business side of things. Impact is my jam, so I’m very excited about this, especially having spent so much time working directly on growth. Design is about how we deliver value to users, but it’s important to do it in a way that aligns with the success of the business. I’m seeing a lot more about growth design out there in the world, which indicates to me that design is undergoing a healthy shift towards understanding and measuring how it contributes to business goals. While it’s not design-specific, this piece from The Margins is probably the best thing I’ve read on the topic in quite some time.

I’m also starting to see a growing desire on the part of designers and researchers for measuring their impact and really understanding the business side of things.

How do you educate yourself?

I try to read whatever’s getting passed around on Zapier’s design Slack channel at any given moment, to see what’s resonating with designers.

This is going to be a telling glimpse into my background in growth, but most of my daily and weekly reads aren’t design-related at all; they’re business-related. I’m hugely fond of Matt Levine’s Money Stuff, and I try to make it at least halfway through the Economist every week.

Most of my daily and weekly reads aren’t design-related at all; they’re business-related.

I read a ridiculous volume of management books, but the hands-down, best-ever book on management is Michael Lopp’s Managing Humans. This is the “if you only read one book on management, read this” recommendation I give everyone. I also love The Toyota Way, especially now that I’m working at a company whose product is automation – that’s one I think anyone working on integration or automation should learn from.

Erika Hall’s Just Enough Research and Donald Norman’s Design of Everyday Things are the ones I’d give to anyone. For deeper cuts, I like Christopher Alexander’s Timeless Way of Building and A Pattern Language. And The Responsible Communication Style Guide should be read by anyone who puts words in front of users, in any context.

Where is the best place for people to connect with you online?

I gave up on Twitter a couple of years ago, so the best place is LinkedIn. I’m also technically on Instagram, but it’s mostly pictures of soufflés I post as humblebrags.

I spent the summer after my senior year of high school canoeing in the Arctic Circle with a group of 5 other women. We were charged by grizzly bears and wound up getting airlifted out by the Royal Canadian Mounted Police.

What do you like to do when you’re not working?

My husband and I have a one-year-old daughter, and we’re working on introducing her to the things we love, like hiking and cooking. I’m also a voracious reader and something of a crossword puzzle freak.

Bonus Question: When was the last time you ate avocado, and how was it made?

This afternoon! I gave my daughter half an avocado as a snack, and shamelessly ate most of it after she crawled away to play.


Mozilla is a global community that is building an open and healthy internet. We do so by building products that improve internet life, giving people more privacy, security and control over the experiences they have online. We are also helping to grow the movement of people and organizations around the world committed to making the digital world healthier.

As we grow our ambitions for this work, we are seeking new members for the Mozilla Foundation Board of Directors. The Foundation’s programs focus on the movement building side of our work and complement the products and technology developed by Mozilla Corporation.

What is the role of a Mozilla board member?

I’ve written in the past about the role of the Board of Directors at Mozilla.

At Mozilla, our board members join more than just a board, they join the greater team and the whole movement for internet health. We invite our board members to build relationships with management, employees and volunteers. The conventional thinking is that these types of relationships make it hard for the Executive Director to do his or her job. I wrote in my previous post that “We feel differently”. This is still true today. We have open flows of information in multiple channels. Part of building the world we want is building transparency and shared understanding.

It’s worth noting that Mozilla is an unusual organization. We’re a technology powerhouse with broad internet openness and empowerment at its core. We feel like a product organization to those from the nonprofit world; we feel like a non-profit organization to those from the technology industry.

It’s important that our board members understand the full breadth of Mozilla’s mission. It’s important that Mozilla Foundation Board members understand why we build consumer products, why it happens in the subsidiary and why they cannot micro-manage this work. It is equally important that Mozilla Corporation Board members understand why we engage in the open internet activities of the Mozilla Foundation and why we seek to develop complementary programs and shared goals.

What are we looking for?

Last time we opened our call for board members, we created a visual role description. Below is an updated version reflecting the current needs for our Mozilla Foundation Board.

Here is the full job description: https://mzl.la/MoFoBoardJD

Here is a short explanation of how to read this visual:

  • In the vertical columns, we have the particular skills and expertise that we are looking for right now. We expect new board members to have at least one of these skills.
  • The horizontal lines speak to things that every board member should have. For instance, to be a board member, you should have some cultural sense of Mozilla. They are a set of things that are important for every candidate. In addition, there is a set of things that are important for the board as a whole – for instance, international experience. The board makeup overall should cover these areas.
  • The horizontal lines will not change too much over time, whereas the vertical lines will change, depending on who joins the Board and who leaves.

Finding the right people who match these criteria and who have the skills we need takes time. We hope to have extensive discussions with a wide range of people. Board candidates will meet the existing board members, members of the management team, individual contributors and volunteers. We see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It also helps us feel comfortable including someone at this senior level of stewardship.

We want your suggestions

We are hoping to add three new members to the Mozilla Foundation Board of Directors over the next 18 months. If you have candidates that you believe would be good board members, send them to msurman@mozillafoundation.org. We will use real discretion with the names you send us.


The world of emerging design tools is incredibly exciting. Promising new programs are being launched in rapid succession, but there is one in particular that has slowly become the best choice for bigger and mostly distributed design teams.

Figma has the potential to support the entire design process end to end. First sketches, feedback collection, collaboration, ready-to-test prototypes, and developer handoff are well within its capabilities. Figma also simplifies the work of product managers and design leads, as well as all other stakeholders.

So, what sets Figma apart from the competition? There are several key aspects, but all of them can be traced back to the fact that Figma is web-based. This raised a lot of eyebrows when the tool was first released, but ultimately, it paved the way for Figma’s flexibility and unique functionalities.

Figma features include design, prototyping, collaboration, design systems, and plugins

What Is Fundamental to a Smooth Product Design Process?

While it’s true that each team and project may have a different workflow, there are functions and phases that are essential components of product design and management. Figma covers each.

  1. Accessibility and collaboration. No matter the tools a team uses, there needs to be a way to work together, present projects, gather feedback, and keep stakeholders in the loop. All of these features are available in Figma.
  2. Flexibility to iterate and produce deliverables. Figma is flexible and can be used as a collaborative whiteboard for sketching initial ideas and iterating everything from wireframes to high-fidelity mockups.
  3. Ability to build prototypes and test. Designing a mobile app? Build and publish interactive prototypes and test them directly on smartphones–all from Figma.
  4. Establishing a single source of truth. Figma is an excellent way to define product design systems that reside completely online. It’s a single source of truth by default. No more wondering, “Is this the latest version?”
  5. Smooth handoff. Just share a link with developers, and they’ll have access to all the information needed to implement the design–redlines, measurements, and graphic assets ready to be exported.
  6. Versatility to be customized and improved. Figma’s plugin API allows teams to write their own plugins or tap into the broader user community and extend the tool with new functionalities.

1. Accessibility and Collaboration

Get the Team on Board

Let’s take a closer look at how Figma contributes to the six key areas covered above. New users may want to create a free Figma account, set up a team, and watch Figma’s get-started tutorials. Once a team is created, invite team members and relevant stakeholders to join.


There are three access levels available. They’re fairly self-explanatory, but usually, edit rights are given to the design team and view rights to everyone else. This includes developers and other stakeholders who need to be able to follow the design process and provide feedback.


Similar sharing options are available at the team level (shown above), project level, and file level. If access levels aren’t specified, they propagate from team to project and from project to files.

Since Figma is available in the browser, it doesn’t matter how someone is invited or what system they are on. As long as the device and browser they use meet the minimum requirements, they can jump right in with a link, and the interface changes according to whether they have edit or view rights.

Embedding a Project File

Figma project files can be embedded in third-party software. For example, a shared Dropbox Paper document may be used to present the current state of a project.

To embed project files, set the file visibility to Anyone with the link – can view, copy the embed code, and embed the project file in any third-party software that supports embed.ly.
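The generated embed code is a plain iframe pointing at Figma’s embed endpoint. A sketch of what it typically looks like is below; the file URL is a hypothetical placeholder, and the `embed_host` value identifies the site doing the embedding:

```html
<!-- Sketch of Figma's generated embed code. The url parameter below is a
     hypothetical placeholder; Figma fills in the real file link for you. -->
<iframe
  style="border: none;"
  width="800"
  height="450"
  src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FABC123%2FExample-Project"
  allowfullscreen>
</iframe>
```

Note that the inner file URL is percent-encoded, since it is passed as a query parameter to the embed endpoint.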


Feedback and Revisions

Another key functionality for the product design process is the ability to distribute designs, collect feedback, and manage revisions. Anyone with a Figma link can see the latest version and comment directly on the point they want to give feedback about.


To tag team members, use the @ character, and the system will present a list of names to choose from. Doing so notifies team members, and once feedback has been processed, discussions can be closed by clicking on Resolve.


To keep teams aligned, there is a nice integration that can publish discussions from Figma files to a designated Slack channel.

Real Real-time Collaboration

One of the more intriguing functionalities of Figma is what they call multiplayer. It allows multiple team members to open and work on a design file at the same time. Everyone working on the file is visible on the top right of the page, and their avatars are named and clickable.

It can take some time to appreciate what this means in practical terms. While it’s unlikely that designers will use multiplayer to simultaneously work on the same part of a file, it’s incredibly comforting to not have to worry about file version conflict, especially for bigger distributed teams.

Multiplayer comes in handy when presenting remotely because it makes it possible for everyone connected to the file to follow the viewport of the presenter. It also allows teams to bypass additional programs and use Figma as an online whiteboard (though specific solutions, like Miro, may ultimately be better suited for the job).

To this end, it is advisable to create custom components to replicate specific assets as virtual post-it notes or diagram elements.

2. The Flexibility to Iterate and Produce Deliverables

Once team members have been invited and initial sketches are underway, Figma can be used to iterate. Interface design is what Figma was created for in the first place – with whiteboarding finished, teams can move to flow maps and wireframes. Again, it’s smart to build component libraries that can be reused for these tasks. The official templates collection provides inspiration and design arrangements to start from.

3. The Ability to Build Prototypes and Test

It is quite easy to create interactive prototypes with Figma. When a Figma file is open with editing rights, it is possible to switch between design and prototyping mode. Once in prototype mode, it is possible to click on an element to make it interactive, whether from the stage or the layers sidebar.


With an element selected in Prototype mode, a small circle appears on the side. When dragged, blue lines appear, and it can be dropped on the screen or state to show the result of the interaction.


Common transitions are available in Figma, making it much easier to create high-fidelity prototypes. In fact, the Figma team recently released a smart animate and drag trigger feature. The smart animate aspect interpolates the motion of similar elements, while drag trigger is a new type of interaction. Both drastically improve the quality of interactive prototypes.

User Testing

Again, a link is all that’s needed to distribute Figma prototypes, even for user testing. Clicking on the small play icon on the top right launches the prototype, and a new URL is generated. It’s possible to copy the URL or use the blue Share Prototype button. Once users open the link, they are presented with an interactive prototype and able to leave comments if desired.

Testing on Mobile

Designs for mobile devices are presented with a mockup of an actual device surrounding the prototype. If more realism is needed to test interaction, it’s best to download the Figma Mirror app for device-specific testing.

4. Establishing a Single Source of Truth

Version History

Wouldn’t it be great if project files were always current and constantly backed up? This functionality is included in Figma by default. Each file is saved automatically while working on it, and Figma creates a new entry in the version history after 30 minutes of inactivity. A log of all automatically saved versions is created, and each version can be restored if needed.


Of course, automatic versioning isn’t the only option for saving work. It is possible to save a version manually or edit a specific version in the version history.


Design Systems and Component Libraries

Another aspect where Figma shines is how it lets designers create, organize, and distribute component libraries. Any file can be published as a library, and every color, text style, effect, grid, or component is available to be used in other Figma files.


When changes are made to any element in a library, edits can be published and propagated to files that use those elements. Designers working on these files can then decide to accept edits or not.

The ability to decide how to distribute libraries, combined with the option to toggle the visibility of library elements, makes the whole experience smooth and powerful.

Components and libraries can be nested to create sophisticated design systems where everything is versioned and up to date for everyone involved. And all components can be annotated with comments.

5. Smooth Handoff

Developer Handoff

With Figma, designers and developers don’t need a separate tool, like Zeplin, to manage handoff. They can simply open files and switch to Code mode in the right sidebar, even with view-only access.


When Code mode is active, selecting an element on the stage will reveal all relevant information needed for implementation, and all other information regarding component position in relation to other elements is visible. As with similar solutions, the generated code is not meant to be entirely copy-pasted, but it’s helpful to have such easy access.


6. The Versatility to Be Customized and Improved
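As a taste of what the plugin API enables, here is a minimal sketch of a plugin’s main thread in TypeScript. The calls `figma.createRectangle`, `figma.currentPage.appendChild`, and `figma.closePlugin` follow Figma’s documented plugin API; the small mock `figma` object is an assumption added only so the logic can run outside the Figma editor.

```typescript
// Sketch of a Figma plugin's main thread (code.ts). Inside Figma, the
// `figma` global is provided by the plugin runtime; the mock below is a
// stand-in so the layout logic can be exercised outside the editor.

type RectNode = { type: string; x: number; y: number; fills: unknown[] };

interface FigmaLike {
  createRectangle(): RectNode;
  currentPage: { appendChild(node: RectNode): void; children: RectNode[] };
  closePlugin(message?: string): void;
}

// Mock stand-in (delete this when running inside Figma).
const figma: FigmaLike = (() => {
  const children: RectNode[] = [];
  return {
    createRectangle: () => ({ type: "RECTANGLE", x: 0, y: 0, fills: [] }),
    currentPage: { appendChild: (n: RectNode) => { children.push(n); }, children },
    closePlugin: () => {},
  };
})();

// Plugin logic: lay out a horizontal row of rectangles on the current page.
function createRow(count: number, gap: number): RectNode[] {
  const nodes: RectNode[] = [];
  for (let i = 0; i < count; i++) {
    const rect = figma.createRectangle();
    rect.x = i * gap; // space rectangles horizontally
    figma.currentPage.appendChild(rect);
    nodes.push(rect);
  }
  return nodes;
}

const row = createRow(5, 150);
figma.closePlugin();
```

In a real plugin, the same logic runs against live canvas nodes, and a `manifest.json` file tells Figma where to find the code.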