Snap Inc., the parent company of Snapchat, has announced the launch of a new video ad format: Extended Play Commercials. The new mid-roll ad type maintains the 6-second non-skippable format, but offers advertisers the ability to extend the runtime beyond the 6-second mark, up to three minutes.

Similar to YouTube TrueView ads, with Extended Play Commercials, Snapchat users will have the option to either skip the ad after six seconds or continue to watch the full video commercial. The key difference is that Snapchat’s Extended Play Commercials will appear mid-roll, rather than pre-roll like YouTube.

Why we should care

The extended ad format offers advertisers an opportunity to deliver a 6-second teaser that can then roll into a longer length commercial for viewers who may be more engaged.

“Extended Play Commercials are a great option for online video and TV buyers. Heading into the holidays, this format is a powerful new way to reach our Gen Z and Millennial audience in Snapchat’s premium, brand safe Discover content,” said David Roter, VP of global agency partnerships at Snap.

The added option of a longer-form ad provides advertisers with more options and flexibility around the types of content served through the platform. For example, some brands may want to deliver more context or a richer story for users who choose to keep watching, as with YouTube’s TrueView ads.

More on the news

  • In September, Snapchat introduced a skippable ad format that can run up to three minutes long. Extended Play Commercials are similar but run as a continuation of the 6-second format rather than as a standalone 3-minute ad.
  • Extended Play Commercials are now available in closed beta, with whitelisted advertisers able to access them in Ads Manager.
  • Any Snapchat advertiser can request access to the beta test from their sales representative – or by reaching out via Ads Manager.

About The Author

Taylor Peterson is Third Door Media’s Deputy Editor, managing industry-leading coverage that informs and inspires marketers. Based in New York, Taylor brings marketing expertise grounded in creative production and agency advertising for global brands. Taylor’s editorial focus blends digital marketing and creative strategy with topics like campaign management, emerging formats, and display advertising.


Dreamforce, Salesforce’s annual user event, kicked off today, and with it, the launch of Customer 360 Truth: a platform that brings together the company’s family of products: Salesforce Data Manager, Salesforce Identity for Customers, a Privacy and Data Governance solution and, eventually, Customer 360 Audiences.

Patrick Stokes, EVP of platform, shared services at Salesforce, said the newly introduced Customer 360 Truth platform aims to “create more connected and continued experiences,” by bringing the full context of the customer into the experience.

Salesforce Data Manager. Salesforce began talking about its Data Manager solution last year, but is only now making it generally available to all users after keeping it in beta for a number of months. The Data Manager uses a canonical data model and universal Salesforce ID to connect customer data across other systems: “With a click-based user interface for app and data management, admins can easily establish trusted connections between data sources to prepare, match, reconcile and update the customer profile,” explains Salesforce.

Salesforce Identity for Customers. The identity solution has been available for some time, but is now being rolled into the Customer 360 Truth platform. The solution provides a single, authenticated login experience for a company’s websites, e-commerce storefronts, mobile apps and connected products — all while delivering identity analytics and insights on the users taking advantage of the single-login experience.

Privacy and Data Governance. The Customer 360 Truth platform also includes privacy measures, enabling companies to go field by field, defining how data will be used from one system to another. It also offers data classification labels within Salesforce so that users can see what types of data are available and what data customers have approved, as well as how best to interact with customers using the data.

The Privacy and Data Governance solution is designed to help customers comply with regulatory laws, including GDPR and CCPA, according to the Salesforce announcement.

Customer 360 Audiences. This solution is not yet ready to be added to the Customer 360 Truth platform, but will be introduced later in 2020, said Stokes. First mentioned by Salesforce in June, Customer 360 Audiences aims to unify customer profiles across data sources. Using customer email addresses and first-party IDs, as well as unknown data such as website visits and device IDs, and powered by AI, the system creates customer segments that can predict things like “lifetime value” or “likelihood to churn,” allowing marketers to optimize the customer experience.

Why we should care. Marketers are striving to streamline their customer journey, an effort that arguably begins with the strength of a company’s CRM system. By pulling together data management, identity, privacy governance and, soon, the Customer 360 Audiences solution, Salesforce is offering a treasure trove of capabilities all within its already well-established CRM platform.

About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


Make 2020 the year you unify, empower, and cross-train your marketing team. Send them to Search Marketing Expo – SMX West – February 19-20 in San Jose for an immersive and rewarding team-building experience.

The Search Engine Land experts have redesigned and reorganized the agenda into 3 lanes with no limits: SEO, SEM, and new content devoted to digital commerce marketing. Your team will divide and conquer 60 sessions (3x more sessions than last year!) loaded with actionable tactics and proven solutions that can take your company to a new level of success.

Teams that come to SMX learn about more than search… they learn about what it takes to win:

  • Executing successful marketing campaigns requires blending different skills, experiences, and backgrounds. Attending together will equip your team with a common vocabulary and vision of success. Connecting the dots between organic and paid search, content, and cross-channel attribution will become second nature.
  • Smart marketers never stop learning. Your team’s educational journey at SMX will help them connect what they learn to how they execute back at the office. They’ll expand their knowledge in a meaningful, productive way – instead of just learning new concepts in a vacuum.
  • Technology is the key to delivering delightful customer experiences. At SMX, your team will quickly and efficiently evaluate dozens of time-saving solutions in the Expo Hall and find the right technology for your company’s needs.
  • No one achieves anything on their own. Interacting with the search marketing community will boost team morale and connect them with smart, friendly, like-minded people who want to share advice, trade ideas, and discuss ways to overcome common marketing obstacles.

Send more, save more

There’s one more reason you should attend as a group… the more people you send, the more money you save:

  • 3-5 save 10%
  • 6-11 save 15%
  • 12 or more save 20%

Ready to sign up? Complete this form and email it in. Register now for access to all sessions, keynotes, networking, and amenities: delicious breakfasts, hot lunches, snacks all day, free WiFi, and more.

Psst… Add a full-day, post-conference workshop to your team’s itinerary for maximum learning in an intimate environment.

About The Author

Lauren Donovan has worked in online marketing since 2006, specializing in content generation, organic social media, community management, real-time journalism, and holistic social befriending. She currently serves as the Content Marketing Manager at Third Door Media, parent company to Search Engine Land, Marketing Land, MarTech Today, SMX, and The MarTech Conference.


Shutterstock has launched an unlimited music subscription plan for content creators and digital marketers, offering more than 11,000 tracks that can be included in web-based content, including YouTube videos, podcasts and conference presentations.

The subscription fee is $149 per month, and includes access to the Shutterstock Music library with music tracks searchable by genre, mood or popularity. The company says its music selection has been curated by professional musicians, with hundreds of tracks added every month.

Why we should care

This latest offering from Shutterstock gives digital marketers and content creators the ability to spice up their content — putting a professional shine on things like podcast intros, YouTube ads or conference and trade show presentations. At $149 per month, it’s a cost-effective feature for marketers lacking the budget and resources to invest in high-end music productions for various projects.

“Our new unlimited licensing option empowers creators to license music as their needs arise and frees them to focus on the creative vision rather than worrying about budget,” said Shutterstock VP of Product Christopher Cosentino.

Shutterstock is also adding “shorts” and “loops” music offerings to all of its licensing plans, making available shortened versions of a song (shorts) or segments of a longer song that repeat indefinitely (loops).

More on the news

  • The newly added shorts and loops come at no extra cost with all Shutterstock plans. The shorts offer 15-second, 30-second and 60-second versions of songs.
  • Shutterstock now boasts a community of more than one million contributors, with hundreds of thousands of images added every week.
  • The image, video and now music licensing site has more than 280 million images and more than 16 million video clips available.



For the general public, the term “influencers” usually brings up one of two images:

  • A YouTube star who promotes products they get for free on Instagram
  • The [expletive] who helped promote the Fyre Festival

For those of us in e-commerce, influencers have just as dubious a reputation, either as gatekeepers to a coveted audience or liars who pad their followers and steal your money. 

Whenever a brand uses the influencer marketing strategy, the marketing team is looking to improve brand affinity, increase revenue growth, develop content creation or some combination of the three. However, influencers are like steroids: sometimes they inject a quick burst of needed energy, and sometimes they do nothing but cause cancer.

Online marketing professionals, media buyers, executives and the like have to realize that there are some hard truths about going the influencer route. Rarely does the marketing team ask where targeted consumers are spending their time. And, really, shouldn’t that be the first question they ask?

Having someone impressive or popular to promote your product might feed your ego, but does it get measurable results?

Another hard question: will any of the created content drive the conversation? Whether the product is makeup, a new holiday toy, tax software or whatever, the conversation has to be substantive and of value. Marketing dollars spent must have a measurable and reliable ROI; otherwise, what is the point?

So, what to do about influencers?

Look, influencers can be effective. A recent study showed that 35% of adults online use Instagram, which amounts to tens of millions of people. In 2017, the influencer market hit over $1 billion globally; it will hit $8.5 billion this year and $10 billion in 2020, so obviously there is something to it. But if a marketing campaign is targeted for true value (which it always should be), then the issue is macro versus micro. A recent audit by The Social Chain of 10,000 influencers found that 25% of their followers were involved in fraud, whether through buying followers, faking engagement or some other sleight-of-hand behavior.

If some rando housewife in Minnesota claims three million followers, you should probably do some homework before giving her a six-figure deal to promote your product.

This fraud and abuse gave rise to the use of “micro-influencers.” These are social media users with more than 10,000 followers on any one outlet, but fewer than 500,000. The numbers aren’t staggering, but they are measurable. With one micro-influencer, you could spend $1,000 to reach 50,000 people. With a macro-influencer, you could spend $50,000 to reach 1 million followers. The spend is far more impactful if you’re willing to be more targeted.
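
That comparison is simple division, sketched below with the hypothetical spends cited above; these are illustrative figures, not real campaign data.

```javascript
// Back-of-the-envelope cost-per-follower math for the hypothetical
// spends above; illustrative figures, not real campaign data.
function costPerFollower(spendUsd, followersReached) {
  return spendUsd / followersReached;
}

// Micro-influencer: $1,000 to reach 50,000 people.
const micro = costPerFollower(1000, 50000);    // 0.02 dollars per follower
// Macro-influencer: $50,000 to reach 1,000,000 followers.
const macro = costPerFollower(50000, 1000000); // 0.05 dollars per follower

console.log(micro, macro);
```

At two cents versus five cents per follower reached, the smaller buy stretches further, before even accounting for engagement quality.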

The key to any good micro-influencer though isn’t the size of the audience, but the engagement. Even celebrities with ten million Twitter or Instagram followers are not helpful to a marketing campaign if their audience is flat, detached and uninterested. But a micro-influencer, say a 21-year-old college junior with 100,000 die-hard followers who love her posts about the best workout towels and worst toenail polish, is way more valuable because you can trust her audience and measure the ROI when you invest money into a campaign involving her.

The other key to using influencers is finding those who work best on mobile devices. Desktop advertising dollars have plateaued, and with good reason: people stuck in corporate jobs use desktops, and seniors stuck in their homes use desktops. The dynamic audience uses phones or tablets as often as possible.

Why does this obvious fact matter? Because even a micro-influencer has to be measured by their audience quality and participation, not the numbers. One influencer with 100,000 followers who are mainly at their desk watching YouTube instead of doing paralegal work is less valuable to a shoe company than someone with 25,000 users who are 25 years old and work in non-office jobs. 

Use influencers if it makes sense for your campaign and your product, but only if you’re willing to do research and background checks on the potential followers. An easy exercise: go to your preferred micro-influencer’s Instagram account, choose five followers at random and see if those accounts belong to real people. Measuring whether they’re legit can be that easy.

In the old days, having a big handsome quarterback promote your car was great, but that didn’t mean results. Such timeless taglines as “where’s the beef” and “time to make the donuts” happened without stars, without big names and with simple, measurable, creative marketing.

At the end of the day, use a campaign that works for your brand, not what’s flashy or what makes you feel good. Profits should drive decisions, brand recognition and month-to-month, year-over-year profit increases. This is basic business sense, but something all of us in e-commerce need to go back to every day. 

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

For the past seven years, Nick has led paid media teams large and small, working with brands from PepsiCo and Apple to fresh, young brands like Ministry of Supply and Diff Eyewear. These days, Nick focuses on all things paid media and performance-related within the ecommerce landscape.


The news: Governments worldwide are increasingly using social media to manipulate elections and spy on citizens, think tank Freedom House has warned in its latest report. It’s the ninth year in a row that global internet freedom has dropped, according to its assessment of 65 countries.

A new menace: Disinformation—false information spread deliberately to deceive people—helped distort elections in 26 of the 30 countries studied that had national votes in the last year. Outright censorship and internet shutdowns persist, but many governments find it more effective to employ individuals to spread online propaganda, facilitated by social-media platforms like Facebook, Twitter, Instagram, and YouTube, the report said.

Some figures: Of the 65 countries studied, half had an overall decline in their internet freedom score, while just 16 registered improvements. A majority were affected by advanced social-media surveillance programs, with law enforcement in 47 countries arresting people for political, social, or religious speech online.

A glowing example: Iceland was the world’s best protector of internet freedom, having registered no cases against users for online expression over the last year. It has near-universal connectivity, strong protections for users’ rights online, and limited restrictions on content, the report noted. Estonia and Canada took second and third place, respectively. Unsurprisingly, China was the worst performer.

What can be done? “The future of internet freedom rests on our ability to fix social media,” Freedom House research director Adrian Shahbaz said. Since all the big tech companies are based in the US, the report says solutions must come from the US, too. The report recommends that lawmakers improve transparency and oversight of online political ads, enact robust data privacy legislation, and strictly regulate the use of social-media surveillance tools by law enforcement, among other measures.

A timely warning: The report comes as Facebook is facing calls to change its policy of allowing politicians to advertise lies on its platforms. It’s come under renewed pressure thanks to Twitter’s decision to ban political ads last week. There are concerns in the UK, for example, that Google, Facebook, and other tech platforms will be used by bad actors to manipulate the upcoming general election on December 12. Freedom House has a stark warning: without action, the situation is likely to only get worse.


Collection of free jQuery menu examples and plugins.


Related collections: jQuery Context Menus, jQuery Fullscreen Menus, jQuery Mega Menus, jQuery Off-Canvas Menus, jQuery Sliding Menus, jQuery Scrolling Menus, React Menus, CSS Menus.

Contents:

  1. jQuery Menu Examples
  2. jQuery Menu Plugins


  • Joshua Ward

Made with

  • HTML (Pug) / CSS (SCSS) / JS

About a code

One Page Scrolly Thing

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: jquery.js


  • Shosuke Doi

Made with

  • HTML / CSS / JS

About a code

Mobile Nav Bar

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: jquery.js


  • creme

Made with

  • HTML / CSS (SCSS) / JS

About a code

Uber-like Navigation

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js, tweenmax.js


  • Jessica Valeska da Silva

Made with

  • HTML / CSS (SCSS) / JS

About a code

Responsive Menu Scroll Left

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: font-awesome.css, jquery.js


  • Leon Heess

Made with

  • HTML / CSS / JS

About a code

Droplet Menu Animation

Droplet scroll animation for the hamburger menu in jQuery.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js


  • Dejan Babić

Made with

  • HTML / CSS (SCSS) / JS

About a code

Menu with Morphing Shapes

jQuery menu with morphing shapes.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: jquery.js, tweenmax.js, morphsvgplugin.js


  • Yiting Liu

Made with

  • HTML / CSS (SCSS) / JS

About a code


Horizontal jQuery menu.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: jquery.js


  • Tobias Glaus

Made with

  • HTML / CSS (SCSS) / JS

About a code

Navigation with Scrolling

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js


  • Jon Wilcox

Made with

  • HTML / CSS (Less) / JS

About a code

Expanding and Collapsing Hamburger Menu

Experimenting with making the menu items themselves collapse into the bars of the hamburger menu.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js

Made with

  • HTML / CSS / JS

About a code

Hamburger Menu

CSS and jQuery hamburger menu.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js


  • Giorgio Acquati

Made with

  • HTML / CSS (SCSS) / JS

About a code


Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: jquery.js


  • Dejan Babić

Made with

  • HTML / CSS (SCSS) / JS

About a code

Mobile Navigation

Mobile navigation animation using GSAP.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js, tweenmax.js, morphsvgplugin.js


  • Praveen Kumar Gorakala

Made with

  • HTML / CSS (SCSS) / JS

About a code

Radial Menu

Radial menu – creator visions.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: bootstrap.css, jquery.js, font-awesome.js


  • Jordan

Made with

  • HTML / CSS / JS

About a code

Side Menu Animation

Side menu animation in jQuery.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: jquery.js


  • Aaron Awad

Made with

  • HTML / CSS / JS

About a code

Multi-level Side Nav Menu

This is a multi-level side navigation pattern with hover and push. Hovering over the menu reveals its labels, and clicking the hamburger icon pins the menu open.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: font-awesome.css, jquery.js


  • Praveen Bisht

Made with

  • HTML (Pug) / CSS (SCSS) / JS

About a code

Slide Out Navigation Menu

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: jquery.js


  • Dronca Raul

Made with

  • HTML / CSS (SCSS) / JS

About a code

Menu Interaction

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js, feather.js


  • Aaron Taylor

Made with

  • HTML / CSS (SCSS) / JS

About a code

Burger Menu

This is a burger menu for an upcoming project I am working on. I have tried to focus on the animations to give that polished feel when you open the menu item.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js


  • Dogacan

Made with

  • HTML (Pug) / CSS (Sass) / JS

About a code

Mobile Dropdown Menu

Mobile dropdown menu example with keyframe animations and transitions.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js


  • Ryan Mulligan

Made with

  • HTML (Pug) / CSS (SCSS) / JS

About a code

Off Canvas Mobile Submenu

An experiment that sets all submenus off-screen so that they can be toggled in and out of view when their parent arrow elements are clicked.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js


  • Tim Normington

Made with

  • HTML / CSS (SCSS) / JS

About a code

Diagonal Mega Menu

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: font-awesome.css, jquery.js, jquery-ui.js, tweenmax.js


  • Alberto

Made with

  • HTML / CSS / JS

About a code

Multi-Device Scrolling Menu

A simple responsive horizontal menu ready for any device. For the sample only the background colour will change between countries.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: jquery.js, jquery-ui.js


  • James Mejia

Made with

  • HTML / CSS (SCSS) / JS

About a code

Responsive Mega Dropdown

The original problem was having to create a mega dropdown that was responsive. The design required each column to have a border with chunks of links inside of it. This is a mobile-first solution that gives you markup that makes sense for mobile and desktop using data attributes.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: yes

Dependencies: ionicons.css, jquery.js, enquire.js


  • J Scott Smith

Made with

  • HTML / CSS (SCSS) / JS

About a code

Sidebar Menu Concept

Sidebar menu that pushes content aside in 3D space. Still prettifying things.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js


  • Timofey

Made with

  • HTML / CSS (SCSS) / JS

About a code

Dota 2 Wheel Chat

With this module, you can create a wheel menu like the one in the game Dota 2. Press the left mouse button to see it, and choose one of the menu items to say something to your teammates!

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js, wheel-menu.js


  • Andrey Pokrovskiy

Made with

  • HTML / CSS (Less) / JS

About a code

Circular Layered Menu

Circular, vinyl-like menu with explanatory labels that pop when icons are hovered.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: font-awesome.css, jquery.js, tweenmax.js


  • Chrysto

Made with

  • HTML / CSS / JS

About a code

Lastik Menu

Simple lava-lamp-like horizontal menu, using jQuery and TweenMax.

Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

Responsive: no

Dependencies: jquery.js, tweenmax.js

Demo image: Hiraku.js

Github activity

  • 65 stars
  • 12 watchers

About the plugin


Hiraku.js – jQuery offcanvas menu plugin.

Demo image: Sidebar Menu

Made by

  • huang.xinghui

Github activity

  • 42 stars
  • 4 watchers

About the plugin

Sidebar Menu

Sidebar menu jQuery component based on AdminLTE.

Demo image: Slide And Swipe Menu

Github activity

  • 96 stars
  • 12 watchers

About the plugin

Slide And Swipe Menu

A sliding swipe menu that works with touchSwipe library.

Demo image: Drawer

Made by

  • Yasuyuki Enomoto

Github activity

  • 477 stars
  • 29 watchers

About the plugin


Flexible drawer menu using jQuery, iScroll and CSS.

Demo image: Slinky

Github activity

  • 430 stars
  • 20 watchers

About the plugin


A light-weight, responsive, mobile-like navigation menu plugin.

Demo image: Tendina

Made by

  • Ivan Prignano

Github activity

  • 139 stars
  • 15 watchers

About the plugin


Tendina is a simple jQuery plugin that helps you build dynamic, interactive side-menus in seconds.

Demo image: Bootstrap Submenu

Made by

  • Vasilii Artemchuk

Github activity

  • 343 stars
  • 40 watchers

About the plugin

Bootstrap Submenu

Bootstrap sub-menus.

Demo image: stickyNavbar.js

Github activity

  • 278 stars
  • 17 watchers

About the plugin


StickyNavbar.js: fancy sticky navigation jQuery plugin with smart anchor links highlighting.

Demo image: Box Lid Menu

Made by

  • James Lim

Github activity

  • 43 stars
  • 9 watchers

About the plugin

Box Lid Menu

This jQuery plugin creates the box lid effect for navigation menus.

Demo image: stickUp

Github activity

  • 1531 stars
  • 100 watchers

About the plugin


StickUp a jQuery Plugin for sticky navigation menus.


  • simple integration
  • the “one pager”
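
The sticky-navigation pattern that stickUp and similar plugins automate is easy to sketch by hand. This is a minimal illustration only; the `.navbar` selector and `stuck` class are assumptions, not stickUp's actual API.

```javascript
// A hand-rolled sketch of the sticky-navigation pattern that plugins
// like stickUp automate. The ".navbar" selector and "stuck" class are
// illustrative assumptions, not stickUp's actual API.
function isStuck(scrollY, navOffsetTop) {
  // Pin the nav once the page has scrolled past its natural position.
  return scrollY >= navOffsetTop;
}

// Browser-only wiring; skipped when no DOM/jQuery is present.
if (typeof window !== "undefined" && typeof jQuery !== "undefined") {
  const $nav = jQuery(".navbar");
  const navTop = $nav.offset().top;
  jQuery(window).on("scroll", function () {
    // CSS is expected to define .stuck { position: fixed; top: 0; }
    $nav.toggleClass("stuck", isStuck(jQuery(window).scrollTop(), navTop));
  });
}
```

A plugin adds the niceties on top of this core check: placeholder spacing so content doesn't jump, anchor highlighting, and so on.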
Demo image: scrollNav

Github activity

  • 394 stars
  • 25 watchers

About the plugin


A jQuery plugin for building a scrolling navigation menu.

Demo image: bigSlide.js

Github activity

  • 392 stars
  • 19 watchers

About the plugin


bigSlide is a teeny tiny (~1kb compressed) jQuery plugin for creating off-screen slide panel navigation.

Demo image: offCanvasMenu

Github activity

  • 150 stars
  • 13 watchers

About the plugin


offCanvasMenu is a jQuery/Zepto plugin that provides an easy way to implement an off-canvas toggling menu, a navigation metaphor made popular by mobile applications.
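
The underlying off-canvas idea is small enough to sketch by hand; the selector, class name and 300px panel width below are illustrative assumptions, not offCanvasMenu's real options.

```javascript
// A minimal sketch of the off-canvas toggling idea that offCanvasMenu
// wraps up; selectors and the 300px width are illustrative assumptions.
function menuTransform(open, panelWidthPx) {
  // Closed: panel shifted fully off-screen left; open: slid back in.
  return open ? "translateX(0)" : "translateX(-" + panelWidthPx + "px)";
}

// Browser-only wiring; skipped when no DOM/jQuery is present.
if (typeof window !== "undefined" && typeof jQuery !== "undefined") {
  let open = false;
  jQuery(".menu-toggle").on("click", function () {
    open = !open;
    // A CSS transition on the panel turns this into a slide animation.
    jQuery(".off-canvas-panel").css("transform", menuTransform(open, 300));
  });
}
```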

Demo image: Metismenu

Made by

  • Osman Nuri Okumuş

Github activity

  • 1508 stars
  • 106 watchers

About the plugin


A jQuery menu plugin.

Demo image: jQuery Superfish Dropdown Menu Plugin

Github activity

  • 856 stars
  • 84 watchers

About the plugin

jQuery Superfish Dropdown Menu Plugin

Superfish is a jQuery plugin that adds usability enhancements to existing multi-level drop-down menus.

Demo image: SlickNav

Github activity

  • 824 stars
  • 66 watchers

About the plugin


Responsive mobile menu jQuery plugin.

Demo image: SmartMenus

Github activity

  • 478 stars
  • 51 watchers

About the plugin


Advanced jQuery website menu plugin. Mobile first, responsive and accessible list-based website menus that work on all devices.

Demo image: SlimMenu

Github activity

  • 299 stars
  • 34 watchers

About the plugin


SlimMenu is a lightweight jQuery plugin, which is made to create responsive and multi-level navigation menus on the fly.

Demo image: jQuery.Dropotron

Github activity

  • 123 stars
  • 15 watchers

About the plugin


jquery.dropotron: multilevel dropdown menus for jQuery.

Demo image: Sidr

Made by

  • Alberto Varela

Github activity

  • 2921 stars
  • 154 watchers

About the plugin


Sidr is a jQuery plugin for creating side menus and one of the easiest ways to make your menu responsive. It uses CSS3 transitions in modern browsers and falls back to jQuery.animate when they are not supported.

Demo image: Fixed Table of Contents Drop-Down Menu jQuery Plugin

Made by

  • Louis Lazaris

Github activity

  • 53 stars
  • 0 watchers

About the plugin

Fixed Table of Contents Drop-Down Menu jQuery Plugin

A fixed table of contents drop-down menu jQuery plugin. Just change the markup to add more depth to the menus. The script should automatically recognize deeper menus, as long as you have the corresponding sections in the content area.

Demo image: jPanelMenu

Made by

  • Anthony Colangelo

Github activity

  • 956 stars
  • 86 watchers

About the plugin


A jQuery plugin that creates a paneled-style menu.

Demo image: visualNav

Github activity

  • 68 stars
  • 8 watchers

About the plugin


A jQuery plugin that modifies a navigation menu to highlight/change when the menu’s target smooth scrolls into view.
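
The highlight-on-scroll behavior visualNav provides can be sketched as a small scrollspy; the selectors below are illustrative assumptions, not the plugin's actual options.

```javascript
// A scrollspy sketch of the behavior visualNav provides: find which
// section the page has scrolled into and highlight its nav link.
// Selectors are illustrative assumptions, not the plugin's options.
function activeSection(scrollY, sectionTops) {
  // sectionTops must be sorted ascending; the active section is the
  // last one whose top edge the page has scrolled past.
  let active = 0;
  for (let i = 0; i < sectionTops.length; i++) {
    if (scrollY >= sectionTops[i]) active = i;
  }
  return active;
}

// Browser-only wiring; skipped when no DOM/jQuery is present.
if (typeof window !== "undefined" && typeof jQuery !== "undefined") {
  const tops = jQuery("section").map(function () {
    return jQuery(this).offset().top;
  }).get();
  jQuery(window).on("scroll", function () {
    const i = activeSection(jQuery(window).scrollTop(), tops);
    jQuery("nav a").removeClass("active").eq(i).addClass("active");
  });
}
```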





    • The modern MacBook Pro ranks among Apple’s most divisive devices, but that wasn’t always the case. We’ll keep our fingers crossed that this new model represents a return to form. Specs on our teardown victim unit include:

    • 16″ LED‑backlit IPS Retina display with True Tone, 3072 × 1920 resolution (226 ppi)

    • 2.6 GHz 6‑core Intel Core i7 (Turbo Boost up to 4.5 GHz), paired with an AMD Radeon Pro 5300M

    • 16 GB of 2666 MHz DDR4 SDRAM

    • 512 GB SSD

    • 100 Wh battery

    • Six-speaker sound system and high-performance microphone array

    • From the outside, Apple doesn’t give much away—at the very least, they haven’t facelifted this design just to attract impulse buyers. The only real clue comes from the new model number: A2141 and EMC 3347.

    • Stacked up like pancakes, you’ll notice the new MacBook Pro is noticeably bigger (and just a hair thicker) than the latest 15″ model. It actually compares well with the 2015 model—in size that is, not port selection.

    • With the 16″ Pro back-to-back against the 2015 Pro, you’re almost seeing double … until you look closer. The display hinge has changed, and the old Pro’s keys peek out just a bit more.

    • In a similar game of “spot the differences” between the two 2019 MacBook Pros, the 16″ version gets:

    • A physical Esc key, separated from the Touch Bar

    • A matte finish on the Touch ID/power button, matching the rest of the keys

    • A Touch Bar shortened in length by about 19 mm

    • Inverted-T arrow keys, all half-sized

    • If you zoom and enhance the left grille, you can see the camouflaged holes for the improved triple-mic. And now, you can never un-see it.

    • Time for us to do what we do best: jam tools into things and see what comes apart.

    • Let’s start with some keycaps. Remember the iMac’s Magic Keyboard? It’s a well-liked, reliable design that Apple calls the “core technology” for the redesigned keyboard in this new machine.

    • That might be understating it slightly: side by side, we’re hard pressed to spot any differences. Scissor switches, keycaps… There’s slightly less space surrounding these new keys, and pundits will celebrate those reconfigured arrow keys—but everything else looks nigh identical.

    • News flash: there’s not even a dust-proofing membrane on these new switches. We’re inclined to take this as a very good sign. (It means we can finally eat Doritos during teardowns again.)

    • To round out the comparison, here’s a third image showing the controversial (and notoriously temperamental) “butterfly” switch in the 15″ MacBook Pro released just a few months ago.

    • But why stop there? Here’s a shot of the scissor switch in the “unibody” 2012 MacBook Pro design…

    • …and the scissor in Apple’s last laptop with a “good” keyboard, the 2015 MacBook Pro with Retina display.

    • What’s underneath the scissor mechanism? Hey, we’ll ask the questions here—you just finish taking this thing apart.

    • Wait, scratch that—reverse it.

    • Here’s a look at the “Apple-designed rubber dome that stores more potential energy for a responsive key press.”

    • Beneath that, a backlight assembly with an interesting pattern—more on that later.

    • And bordering all that, a thin black gasket—presumably to stop the bright bits from bleeding out past the edge of the key.

    • Compared once again with the desktop Magic Keyboard:

    • The two scissor mechanisms look nearly identical. The old Magic scissor is ever-so-slightly thicker (1.6 vs 1.38 mm).

    • 0.22 mm may not seem like much, but no doubt a lot of engineering went into the re-creation of this slender new scissor switch.

    • Despite the differential in thickness, these keyboards look so similar that we’re tempted to mix dark Magic with light and see what happens…

    • *Heavenly chorus* The Creation of Magic

    • Yes, what you’re looking at is a 2015 white Magic Keyboard keycap on a 16″ MacBook Pro keyboard and scissor. It works!

    • The old Magic command key is smaller (hence the larger gap around the key) and thicker than the new one.

    • Not every single key is interchangeable, and the old Magic keys don’t have transparent lettering for the backlight. So don’t get too excited yet, keyboard modders.

    • Having finished poking at the keyboard (for now), let’s de-lid the chassis and look inside.

    • Apple already showed us this angry owl in a GIF from their press release, but what we haven’t seen yet are high-energy X-rays passing through the owl to show us what’s behind its frustrated facade.

    • The X-rays make it even angrier! Run away! Luckily our friends at Creative Electron are taking all the risk here—we’ll resume disassembly after things calm down.

    • Torx screws, schmorx screws—our Marlin driver set does it all. (Although come to think of it, we’ve yet to find a use for the schmorx bits.)

    • Even with this computer open, we’re faced with more mystery—a masked MacBook! Is it you, Bruce?

    • Apple touts major advancements in this MacBook Pro’s thermal design, and says this heat sink is 35 percent bigger. (We’d have guessed 34, but let’s take their word for it.)

    • We also found some upgraded hardware covering what we presume to be fast new GDDR6 chips—a copper shield and a couple aluminum thermal pads.

    • We can’t stop thinking of these logic boards as mustache-shaped, so let’s take the anthropomorphic analogy way way too far and comb through the bristles. Here’s what sticks to our comb:

    • 9th-generation Intel Core i7-9750H 6-core processor

    • 16x Micron MT40A1G8SA-075 8 Gb DDR4 SDRAM (16 GB total)

    • AMD Radeon Pro 5300M mobile GPU

    • 4x Samsung K4Z80325BC-HC14 8 Gb GDDR6 RAM (4 GB total)

    • Toshiba TSB4227VE8434CHNA11926 and TSB4227VE8437CHNA11926 flash storage (512 GB total)

    • Apple T2 APL1027 339S00536 coprocessor

    • Intel JHL7540 Thunderbolt 3 controller

    • We continue combing for silicon. Silicon comes from sand. Have we been doing this all wrong?

    • Intel SR40F platform controller hub

    • Texas Instruments CD3217B12 (likely power controllers)

    • 338S00267-A0 (likely Apple PMIC)

    • Texas Instruments TPS51980B power controller

    • 339S00610 (likely Apple Wi-Fi/Bluetooth module)

    • Intersil 6277 PWM modulator

    • Renesas 225101C

    • With the motherboard out of the way, we can take a peek at the rest of the improved thermal design.

    • Pardon us while we vent for a minute. Stacked atop its slightly-older sibling (the 15-inch 2019 model), the new MacBook Pro has some noticeably larger exhaust holes.

    • That combined with these new fans, which feature bigger impeller blades, pushes 28% more air through the new Pro.

    • This may sound like a bunch of hot air, but all these cool improvements allegedly allow this 16″ Pro to sustain up to 12 more watts during intensive workloads than the previous design.

    • We slowly peel away the keyboard backlight, hoping to find Magic underneath.

    • The backlight assembly consists of a flexible diffuser, which carries light from LEDs along the edges. Flexible diffusers are common, but this one looks specially etched, possibly to maximize brightness and evenness.

    • Underneath the backlight, we spot some Pentalobe P2 screws along the edges of the metal keyboard frame. Could it be? Can we unscrew this thing to swap a busted key after a Dorito-fueled type-a-thon?

    • Nooooo! Once again, the keyboard assembly is riveted down. Though the switches are likely less vulnerable to crumbly assailants, the keyboard itself isn’t any more repairable than the Butterfly boards.

    • It’s basically 2016 all over again: we’ve got a new keyboard married to a non-serviceable design, with only Apple’s word that it “won’t break.” And this one isn’t even a part of the Keyboard Service Program, so … : /

    • The one glimmer of hope is that this new keyboard design is extremely similar to past Apple keyboards that have mostly withstood the test of time.

    • Refusing to leave this keyboard interaction empty-handed, we pry up the keyboard as much as we can to get a glimpse at what makes it click. We are rewarded with a look at the flexible PCB layer sandwiched between the key and the metal backplate where the button presses happen … but no magic.

    • Now that all our keyboard questions are laid to rest, we can move on to the rest of the bits still lying around.

    • First up, these fancy new XL-Earpods speakers with an extended enclosure, and opposed woofers on the top and bottom.

    • The opposed woofers are supposed to cancel each other’s vibration out. Kinda like the anti-noise feature of the new AirPods Pro, but … with bass. Anti-bass?

    • We aren’t as sure about the elongated enclosure, but our best guess is that the new shape redirects sound to improve its quality. Maybe extra volume was needed to accommodate the second woofer?

    • Next, another sound upgrade: a “high performance” three-mic array. It’s laid out similarly to the three-mic array in the 2018 MacBook Air, but looks a little beefier, which could mean these are higher-quality mics.

    • Finally, the lithium powerhouse that’s probably drawing a suspicious glare from the FAA: Apple’s 99.8 Wh battery (11.36 V, 8790 mAh).

    • That’s the largest battery we’ve ever seen in a MacBook—a 4.8 Wh increase over the 17″ MacBook Pro, and a huge 16.2 Wh increase over the latest 15″ MacBook Pro.

    • Where did this extra capacity come from? Here, Apple pulled an iPhone 11—each battery cell looks identical to its predecessor, but measures 0.8 mm thicker on average.

    • Teardown Update: Hey, what’s this little flex cable doing here? We don’t remember seeing this before, and MacRumors was kind enough to ask us about it.

    • We have to fully remove the display to investigate, but it turns out there’s a new sensor facing the left-side hinge, and a magnet embedded in the hinge itself. The arrow on the magnet indicates alignment/polarity, which rotates in sync with the display.

    • Looks like Apple added a lid angle sensor. Why would they need that? Will this enable some cool new macOS feature?

    • Maybe. It could also be a clever way for Apple to trace the history of how the device was used if repairs are needed. For instance, if the display cables fray, Apple might want to know how much the display was adjusted, and how often.

    • MacRumors did some additional gumshoeing, and found out that Apple wants this little sensor calibrated after repairs. What do you suppose it’s for?

    • Let’s lay out our feelings for this new MacBook Pro:

    • Though the fancy new-old scissor mechanisms may be more durable than fragile butterflies, the whole keyboard assembly is still riveted in place.

    • The revamped thermal system is a welcome improvement to the notoriously hot-and-throttled Pro lineup.

    • The 99.8 Wh battery flies just under the FAA radar, pumping out 16.2 Wh more capacity than the most recent 15″ model. With such a slight increase in case size, this capacity increase is surprising.

    • Despite the thumping new speakers, the podcast-quality mic array, and that giant screen, we can’t help but feel that Apple can do better—especially after seeing Microsoft perform some real engineering magic to make its latest laptops more repairable. Here’s hoping that the 2020 Pro lineup will really take our breath away.

    • For now though, the only thing left to do is give this thing a score.

  • Final Thoughts

    • The trackpad can still be removed and replaced with very little drama.
    • Minor components are modular, but the processor, RAM, and flash memory are soldered to the logic board.
    • Glue and/or rivets secure the keyboard, battery, speakers, and Touch Bar, making those components a tricky fix.
    • The Touch ID sensor is the power switch and is locked to the logic board, greatly complicating repairs.

    Repairability Score


    Repairability 1 out of 10

    (10 is easiest to repair)


    Zack Brown

    The Haiku team is overjoyed to launch Diez open-source today after a year of development and several months of an Early Access beta.

    Diez is an open-source design language framework. It allows you to maintain a centralized source of truth for your visual identity (a.k.a. your brand) — in a way that seamlessly integrates with any native iOS, Android, or Web codebase.

    Update once, update everywhere — while embracing native platforms. Diez radically reduces the cost of delivering a consistent visual identity across your company’s apps & websites.

    During the beta some brilliant folks have been piloting Diez into their design systems, from companies like Accenture, NBC/Universal, The Guardian, Cisco, Patreon, P&G, Atlassian, and McKinsey. (📢 shout-out to our beta testers! ❤️)

    Iterating with these teams’ feedback, Diez has become a lean, mean design token machine: tailored for the needs of the world’s largest brands.

    Check it out at

    Ain’t yo’ grandma’s JSON

    Today we announce a dramatic iteration on Diez’s design-token authoring format, a new syntax that’s best described as “typed JSON.”

    Just think of writing Diez now as “JSON with autocomplete.” (But also with: imports, modularity, references, code reuse, scalability, inline documentation…)

    Diez is now familiar & accessible to non-developers, while leveraging a type system that turns your code editor into a figurative hovercraft.
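
A minimal sketch of what "typed JSON" authoring can feel like, using plain TypeScript classes. The class and property names below are illustrative, not Diez's actual API: the point is that typed fields read like a JSON object while gaining autocomplete, references, and code reuse.

```typescript
// Hypothetical design tokens written as "typed JSON": class fields
// behave like JSON keys, but the type system checks them and lets
// one token reference another.
class Palette {
  primary = '#0366d6';
  accent = '#28a745';
}

class Spacing {
  unit = 8;
  gutter = this.unit * 2; // a reference to another token — impossible in raw JSON
}

export class DesignTokens {
  palette = new Palette();
  spacing = new Spacing();
}
```

In raw JSON the `gutter` value would have to be hard-coded as `16`; here it stays derived from `unit`, so updating one token updates everything that references it.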

    Design language docs with Diez DocsGen

    The DocsGen add-on for Diez is available starting today as part of Diez Enterprise Edition. From any Diez codebase, generate interactive, searchable, up-to-the-moment design language docs.






    from New Zealand with 7.45

    • Design

    • Usability

    • Creativity

    • Content

    Noiz! is a musical soundboard game that encourages kids’ creativity by letting them explore and experiment with sound. Created for children’s destination HeiHei and funded by NZ on Air.

    • Alexis Malin

      Digital Art Director at Sid Lee

    • Pedro Burneiko

      Head Of Digital Art at ALMAPBBDO

    • Francesco Milanesio

      Creative Director at Bitmama

    • Robert Berki

      89colors – Founder/Product Designer

    • Louis Paquet

      Creative Director at Locomotive

    • Pablo Vio

      Co-founder and Creative Director at jam3

    • Roman Trilo

      Freelance Interactive Designer at UNIT9

    • Edgar De la Cruz

      Elephant – Design Lead, Art

    • Zhenya Rynzhuk

      Co-founder of Design Thinking School Sochnik

    • Alex Christian

      Product Design Director at AKQA San Francisco

    • Kiran CK

      Moonraft Innovation Labs – Creative Director

    • Matteo Rostagno

      Senior Designer at Mediamonks

    • Awwwards Users

      Awwwards Users – Pro and Chief

    • Younus Abdalla

      Independent Interactive Developer

    • Alaa Alnuaimi

      Creative Director and Founder at -99 design studio

    • Lennard van Diggelen

      Senior Front-end Developer @ Sennep Ltd

      • Daphné Launay - Portfolio



        From France

        November 18, 2019

      • andstudio



        From Lithuania

        November 17, 2019

      • Fishing the Feed



        From United Kingdom

        November 16, 2019

      • Beauvoir



        From Canada

        November 15, 2019

      Jury vote eliminated

      Sites are sent to a minimum of 16 Design Jury members; the two scores furthest from the average are automatically eliminated by our system. The SOTD is then sent to the Developer Jury to be evaluated following the Developer Guidelines; find more info here.


      I’ve been using CSS as a web developer since CSS became something we could actually use. My first websites were built using <font> tags and <table>s for layout. I remember arguments about whether this whole CSS thing was a good idea at all. I was quickly convinced, mostly due to the ability to easily change the font on an entire site in one place. Managing common styles was so useful at a time when most websites were just a stack of HTML pages with no management of content or any form of templating. I was an early adopter of using CSS rather than tables for layout, despite the backdrop of people asking, “but what about Netscape 4?”

      CSS is a remarkable language. Those early sites were developed in a time where the best we standards advocates hoped for was that browsers would support the CSS that existed; that developers would validate their HTML and CSS and use the CSS that existed. Yet, a website built back then that is still online, or one accessed via the Wayback Machine, will still work in a modern browser. Such is the care that has been taken to not break the web by the CSS Working Group, and the other groups working to add features to the web platform.

      I’ve been teaching CSS for almost as long as I’ve been using CSS. I’m incapable of having a thought without turning it into words. I write things down to remember them, I write things down to help them make sense to me. This leaves me with a lot of words, and from the earliest days of my career I had an idea that they might be useful to other people and so I started to publish them. Over the years I’ve learned how to teach people, discovered the things which seem to help the various concepts click for folk with different ways of learning and processing information. Since the early days of CSS layout, we’ve been teaching it along the following lines.

      • this is a block thing
      • this is an inline thing
      • you can turn the block things into inline things and vice versa using the display property
      • this is the Box Model, it is very important and also kind of weird.

      Typically we would teach CSS by jumping right in, styling up a layout and explaining the strange collection of hacks that allowed for a layout as we went along. Unlike other languages, where we might start with the core fundamentals of programming, in CSS we had very few concepts to teach outside of building things and explaining the weirdness in the context of actual layouts. The Box Model was important because it was all we really had in terms of layout. It was core to our method of giving things a size and pushing them around in a way that would allow them to line up with other carefully sized things to make something that looked like a grid. If you didn’t understand the standard Box Model, and that the width you set wasn’t actually the width the thing took up, your carefully calculated percentages would add up to more than 100%, and bad things would happen.

      Over the last few years, we’ve been handed all of these new tools, Flexbox and Grid give us a layout system designed for CSS. Perhaps less obviously, however, a set of concepts are emerging that give us a real way to explain CSS layout for the first time. There has been something of a refactoring of the language, turning it from a collection of hacks into something that can be taught as a cohesive system. We can start with normal flow and move onto what it is to change the value of display because it is here that all of our new layout capabilities live. We can share how display controls two things – the outer value of block and inline and the inner formatting context – which might be grid, or flex, or normal flow.
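
That two-part model of display can be made concrete with a short CSS sketch (the selectors here are illustrative):

```css
/* Outer value: how the box participates in its parent's layout.
   Inner value: the formatting context it creates for its children. */
.sidebar {
  display: block;       /* outer: block; inner: normal flow */
}
.badge {
  display: inline-flex; /* outer: inline; inner: flex */
}
.gallery {
  display: grid;        /* outer: block; inner: grid */
  grid-template-columns: repeat(3, 1fr);
}
```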

      Explaining Writing Modes early on is vital. Not because our beginner is going to need to format a vertical script, or even use vertical writing creatively immediately. It matters because writing modes explain why we talk about start and end, and the block and inline dimensions rather than the physical top, right, bottom and left corners of their screen. Understanding these things makes alignment in grid and flexbox and the line-based positioning in grid much easier to understand. The Box Model can then drop back to a brief explanation of the fact that width and height (or inline-size and block-size) relate to the content-box and we can change it to relate to the border-box with the box-sizing property. In a world where we aren’t giving things a size and pushing them around, the Box Model becomes just part of our discussion on Box Sizing, which includes the intrinsic sizing far more useful when working with flexbox and grid.
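
A small sketch of how logical properties and box-sizing fit together (the class name is illustrative; in a horizontal-tb writing mode, inline-size maps to width):

```css
.card {
  box-sizing: border-box;  /* width now includes border and padding */
  inline-size: 20rem;      /* the inline dimension: width in horizontal-tb */
  padding-inline: 1rem;    /* start/end padding, whatever the writing mode */
  margin-block-start: 2rem; /* "top" margin only in horizontal-tb */
}
```

Switch the writing mode and the same declarations keep their meaning relative to the flow of text, which is exactly why start/end vocabulary is worth teaching before physical top/right/bottom/left.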

      Finally we need to focus on the idea of Conditional CSS. Media Queries and Feature Queries mean we can test the environment of our user using metrics such as their viewport size, whether they are using a pointing device or a touchscreen, and the capabilities of their browser. We can never be sure how our websites are going to be encountered, but we increasingly have the ability in CSS to optimize for the environment once we are there. One of the greatest skills we can give to the person beginning their journey as a web developer is an understanding of this truth. The person visiting your site might have a touchscreen, they might be using a screen reader, they may be on a small-screen device, and they might be on IE11. In all of these cases, there are things you want to do that will not work in their situation, your job is to deal with it and CSS has given you the tools to do so.
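
Both flavors of Conditional CSS might look like this in practice (the breakpoint and selectors are illustrative):

```css
/* Media Query: adapt to the user's environment —
   here, a narrow viewport with a touchscreen. */
@media (max-width: 40em) and (pointer: coarse) {
  .nav { font-size: 1.25rem; }
}

/* Feature Query: use grid only where the browser supports it;
   older browsers (e.g. IE11) simply keep the non-grid styles. */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
  }
}
```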

      As I started my CSS layout journey with a backdrop of people complaining about Netscape 4, I now continue against a backdrop of people whining about IE11. As our industry grows up, I would love to see us leaving these complaints behind. I think that this starts with us teaching CSS as a robust language, one which has been designed to allow us to present information to multiple environments, to many different people, via a sea of ever-changing devices.


      Look in the dictionary and as a noun, the word ‘design’ is defined as either “a pattern used to decorate something” or “a drawing or set of drawings showing how a building or product is to be made and how it will work and look.”

      But when we think about design, we often assume it is something that will enhance a product by making it stand out – like a pattern – somehow conferring an extra level of beauty on top of its function.

      We’re here to listen and guide you

      Get a personalised report for your creative project, with expert recommendations to make you successful.

      Get free advice →

      And yet good design – nay great design – when it is defined as the way in which something looks and works (i.e. a user interface) can itself be invisible.

      Sounds like a contradiction?

      Think about a design you love…

      Stop for a minute and think about a design that you love. Something that brings a warm glow to your heart as you use it.

      You might immediately think of an iPhone – especially if you’re one of Apple’s millions of fans, loyal to the brand and its constantly adapting ability to make complex functionality intuitively easy to use. 

      You might prefer something like the Alessi ‘Anna G.’ corkscrew. A winning combination of beautiful design (those clever arms!) and simple functionality that allows you to effortlessly open a bottle of wine.

      Or perhaps your new love is a carbon road bike with Shimano Dura-Ace Di2 electronic gearing, a revolutionary way to change gear faster and with more accuracy.

      Whatever it is that you love, the chances are that you love the way in which that product works (smoothly, quickly, effortlessly, efficiently etc.) rather than how it has been designed (with complex coding, detailed algorithms or cutting edge electronic engineering).

      Which is a good example of how great design, or a great user interface, when done well, can be much less visible to the user, compared with the pleasure that using the product itself brings. 

      Design has to work to be effective 

      Chocolate teapot. Chocolate fire guard. Chocolate kettle. 

      There’s a reason these phrases are terms of abuse. They’re useless. And not fit for purpose. Sure, you could make a chocolate teapot look incredible. But at the end of the day, it will never hold hot water long enough to be accurately described as a teapot.

      And so it is with the best design. Because whenever you think of a product that brings you joy as you use it, there is little doubt that if it didn’t function for the purpose for which it was designed, then you would more than likely throw it away.

      A ship can be as luxurious as you like, but it has to function first and foremost as a ship. As the passengers of the Titanic tragically learned to their cost. 

      And so a Facebook app must allow you to post your picture. A transactional website must allow you to make a purchase. Or you will go elsewhere. 

      Great design is evolutionary 

      Effective design often follows a predictable lifecycle. A new product arrives on the market, with ground-breaking functionality. It is then quickly copied and the race begins to differentiate the offering, usually by adding even more features, at the expense of the user experience. 

      Are you thinking of making something?

      Get a personalised report for your creative project, with expert recommendations to make you successful.

      Get expert advice →

      The winners of this race are then those brands that design a product that incorporates all the features but in a seemingly effortless user interface. Think of how, for example, the Apple iPod – a deliberately late entrant to the market – came to dominate the MP3 player market.

      Take another example, the evolution of the mobile phone. The first phones were barely mobile. They resembled a brick, with a huge aerial. But they did the job: they allowed you to make phone calls without having to be hooked up to a landline.

      After that, mobile phones evolved to become smaller and smaller – until they became almost unusable. Couple that with the ever-increasing technology being added to them and something strange began to happen. Mobile phones started to grow in size as they focused less on making calls and more on becoming handheld computers, with huge processing power, that we now take for granted.

      As consumers, we are no longer happy to play Snake on the tube home. We want full connectivity wherever we go, with the ability to send emails, take high res photos, surf the net – and make calls 24/7. And it’s great design that allows that to happen without you noticing. 

      But make no mistake. That design is incredibly well thought out. 

      When keypads with actual buttons (which took up a lot of space – remember those Blackberrys?) were replaced with on-screen virtual keypads that appear and disappear as you need them, that was a huge step forward for design and the overall user experience. 

      As consumers, we didn’t care how it was done – just that it worked more effectively and intuitively than ever before.

      The best design is constantly changing 

      Not only is the best design often invisible, it is constantly evolving. 

      Take, for example, online banking. 30 years ago, every interaction you had with your finances had to be done in a branch. Joining long queues to speak to a teller behind toughened glass, as you revealed your business to everyone else standing there listening! 

      Now there’s very little you can’t do on your mobile app. Not only managing your existing accounts but also opening new ones, negotiating better terms, transferring money etc. 

      And that’s against a backdrop of banks having to constantly update their security to protect against fraud. 

      And yet the user interface keeps getting simpler and easier to use, despite these challenges. Fingerprint recognition to sign in. Automated phone verification to confirm a big transaction. The user interface is ever changing but the end result is that it continues to delight. 

      How to create great design that your users will love

      Irrespective of whether you are building a brand new website or wanting to launch a mobile app, the importance of well thought out design matters. 

      UX Planet, on what makes great design, says “Designing the User Experience is more important than the look of your app. Even for a developer a good UX should be your primary goal of your app. Otherwise less people will use your app and therefore your innovation.”

      How you create great design depends on your offering. But the steps to putting your user interface in the backseat remain broadly the same: 

      1. look for pain points you can fix behind the scenes
      2. look for opportunities to automate 
      3. make use of the advances in machine learning and recommendation engines to be more intelligent about what you put in front of users

And remember, user experience (UX) lies at the heart of the very best designs.

      Looking to create great design? 

If you would like to work with a design driven digital team that understands the importance of a well thought out user interface, then please get in touch.

      We are a small and friendly team with a huge wealth of expertise in: 

      • App development
      • Web development 
      • UX design
      • Digital strategy
      • Responsive web design
      • Graphic design
      • Video animation 
      • Content
      • Marketing support 

      And we would love to help you with the design and build of your next product. 

      Whatever your requirements, we would be more than happy to talk you through the creative processes of any of our services and products.    

      Illustration of a Password Entry Field

      Like a lot of urgent advice, this terrific article about best practices in creating strong passwords, written by Jon Xavier of Fleetsmith, feels both necessary and tragic. Necessary because, as the article says, there is “so much outdated, misleading, and just plain wrong information” about how to create and maintain passwords out there. And tragic because this most basic of security measures, which few of us have ever really mastered, seems likely to continue to challenge most users of digital products for the foreseeable future. It’s well worth reading the article in full, but a quick rundown of its main takeaways is also worthwhile:

      • Passwords should be a minimum of ten characters long, and ideally as long as possible
      • Neither special characters nor numbers are necessary in order to make passwords stronger
      • Cleverly swapping numbers for letters in your passwords is completely ineffective
      • A password should only be changed when there’s reason to believe it’s been compromised
      • The same password should never be used on multiple sites
      • Two-factor authentication should always be turned on if it’s available
      • Never give honest answers to password security questions, e.g., What’s your mother’s maiden name?
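The length-over-complexity advice above is easy to follow mechanically. A minimal sketch using Python’s `secrets` module (the function names and word list are illustrative, not from Xavier’s article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a long random password; per the advice above,
    length matters far more than special characters."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def generate_passphrase(words: list, count: int = 4) -> str:
    """Generate a diceware-style passphrase from a word list,
    which is easier to type and remember at equivalent strength."""
    return "-".join(secrets.choice(words) for _ in range(count))

print(generate_password())
print(generate_passphrase(["correct", "horse", "battery", "staple"]))
```

Note the use of `secrets` rather than `random`: the former draws from the operating system’s cryptographically secure source, which is the appropriate choice for credentials.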

Xavier goes deep into the myths driving password implementations and usage today, but one thing he doesn’t touch on is how poor the user experience of passwords is across platforms and products. Create six different accounts at six different web sites and you’ll very likely encounter six different approaches to encouraging and enforcing password strength and security, some egregiously lax and others excessively restrictive.

      That inconsistency alone undermines much of the vigilance that otherwise responsible users might bring to password creation. If you’re presented with a new set of rules to comply with each time you undertake the same essentially diversionary task (no one sets up a password as an end in itself; they only do so as a means towards achieving some other goal), your devotion to security will inevitably be worn down to a lowest common denominator approach.

      Perhaps the most important advice that Xavier offers is:

      The most important factor in password safety is how they’re stored.

      If you’ve used password managers like 1Password, my personal favorite, you’ll likely nod your head in agreement here. Not only do password managers markedly improve password safety, but they also ameliorate the experience of using them to begin with. Once you’re up and running with 1Password or similar apps, your online activity feels inherently more secure.

      Still, huge gaps remain. For example, when generating a new password, these utilities can only guess at the security requirements of a given site or app. So if the constraint mandated by the site is, say, twelve characters including at least one number, and the password manager happens to be set to generate strings of random words that are thirty-two characters or longer, it’s up to the user to mediate the disparity.

      It’s also difficult for a password manager to understand when a password is applicable to more than one site or app. Once a password is created, it’s often matched exclusively to the domain of that site. So if your login is also valid on a closely related site, as is the case with many sites from large companies, the password manager won’t automatically recognize the relationship and present the relevant login.

      What’s more, grasping the added complexity of a third-party piece of software can be a challenge to many new users. To use one of these utilities effectively, you have to adopt it as a habit, which can take time. That can result in an intermediary phase where some passwords are stored in the manager and others are stored elsewhere using other means, confusing novice users even more.

      Of course you could use the password utilities now built into many operating systems or browsers, which lately have been improved significantly. But if you go this route then you’re sort of stuck there, because accessing those passwords across the wide chasm between products or platforms is high friction at best.

      On the desktop, while it’s easy enough to generate and access passwords in a browser, doing the same for native apps, much less the operating system’s own password prompts, is less straightforward. Manual copy and pasting is often the only way to bridge that divide. And this is especially true on iOS and iPadOS. While password access has improved greatly over the years on mobile, the landscape is still pretty unpredictable. Generate a new password in one app, and you may or may not be able to access it easily on the corresponding web site.

      And none of this takes into account the excessive frequency with which all kinds of products and platforms prompt users to type their passwords, often without making it completely evident why password entry might be necessary at a given moment.

Compare the experience of passwords with the experience of, say, accessing your email, using a web browser, or performing any kind of search. Regardless of your host or client or even platform, these conform to more or less the same patterns of experience. That’s because, like passwords, they’re decentralized pieces of technological infrastructure, but unlike passwords they’ve benefitted from an accretion of best practices that over time have evolved into more or less universal standards. By contrast, passwords still seem immature, developmentally arrested by efficacy myths, and suffering from continual UX neglect. Passwords are clearly a user experience problem starving for design attention.


The National Retail Federation (NRF) reports this year’s online holiday sales are expected to increase between 11% and 14% over last year, bringing in between $162.6 billion and $166.9 billion — up more than $16 billion over 2018’s holiday e-commerce revenue in the U.S.

      The NRF’s holiday predictions are a bit more generous than Adobe’s, which anticipates this holiday season will bring in $143 billion in online sales.

      Since releasing its holiday forecasts in October, the NRF has published a consumer survey of more than 7,000 consumers that found 165 million people plan to shop during the busiest shopping week of the year — Thanksgiving Day through Cyber Monday. Of those 165 million, 68.7 million said they will be going online to shop during Cyber Monday.

Younger shoppers lead the pack for Thanksgiving week shopping. Overall, 69% of the survey respondents said they plan to shop during the five-day stretch between Thanksgiving and Cyber Monday, but that number increases sharply when looking at younger consumers. For shoppers aged 18 to 24, that number goes up to 88% — and to 84% for the 25 to 34 age group.

Any retailers and e-commerce brands focused on younger audiences should take this into account when planning their Black Friday/Cyber Monday strategies.

      Shopping in-store versus online. The NRF reports that there is almost an even split between consumers who plan to start their shopping in-store (47%) compared to those who will start their holiday shopping online (41%).

      “Those under 25 are even more likely to say they expect to start shopping in-store,” said the NRF, with 52% of younger shoppers reporting they’ll start their shopping at a physical retail location. Even with more consumers starting their holiday shopping in-store, brands need to be thinking about their online promotions and email marketing leading up to next week’s shopping surge — building awareness now is a proactive way to capture shoppers both in-store and online.

A good deal is the biggest motivator. The NRF asked survey participants their top reasons for shopping during the holiday week. Sixty-five percent (the largest share) said, “Deals were too good to pass up.” Other reasons for shopping Thanksgiving week included: it’s a tradition, something to do over the holiday week, or a group activity to do with friends and family (although none were as popular as the offer of a good deal).

      Why we should care. This year’s holiday season is much shorter than usual, with six fewer days between Thanksgiving and Christmas Day. Brands and e-commerce retailers that bank on the holiday season to make their annual numbers must enter the holidays armed with as much information as possible to plan a streamlined and succinct holiday strategy. Understanding consumer shopping patterns — and forecasted revenue — is key to implementing effective campaigns.

      About The Author

      Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


Customer engagement company Airship launched Airship Journeys, a new interface designed to help marketers drive simple, effective cross-channel engagements. Marketers can combine data models with Airship’s real-time mobile data, predictive AI and machine learning to launch cross-channel campaign activities. The low-code solution is meant to address the challenges of drag-and-drop journey builder tools, allowing marketers on Airship’s open platform to reach customers across a number of channels from a single console.

      Why we should care

As consumers continue to adopt more digital channels, their behavior is also evolving to become more selective. Consumer expectations are increasingly raising the bar for hyper-personalized messaging.

      “Legacy visual builders were created for an email- and desktop-centric world with far fewer channels and little concern for where a customer was or what they were doing,” said Brett Caine, CEO and president, Airship. “They’ve become complex and bloated in failed attempts to adapt to the new era where the customer experience is all about mobile and in-the-moment responsiveness with many more channels and a much greater expectation that interactions are personalized and contextually relevant.”

      Airship Journeys incorporates the management of digital channels into one view for marketers. Messages can be composed once and deployed across different channels. Airship Journeys also monitors and controls message frequency and uses Predictive AI to segment audiences and re-engage past customers. The solution can increase efficiencies for marketing users by allowing them to streamline their content creation and execution.

      More on the news:

      • A November 2019 Airship survey of marketers found that over 40% deemed their efforts to manage multichannel digital communications unsuccessful.
      • Airship Journeys is in early use by brands including The Allstate Corporation, GameStop, JCPenney, Kohl’s, NBA International and TGI Fridays.
      • “Airship Journeys allows us to make quick decisions and adapt critical journeys on the fly, from driving feature adoption and app store ratings, to increasing transaction frequency at key moments in the lifecycle, all while taking full advantage of our own predictive models and historical data,” said Sam Kirwan, app marketing specialist. “The massive benefit is that Journeys has freed up our time, allowing us to examine cohorts at different life stages and apply more testing as we revamp our lifecycle marketing holistically — no small feat considering our customers span more than 140 countries.”

      About The Author

      Jennifer Videtta Cannon serves as Third Door Media’s Senior Editor, covering topics from email marketing and analytics to CRM and project management. With over a decade of organizational digital marketing experience, she has overseen digital marketing operations for NHL franchises and held roles at tech companies including Salesforce, advising enterprise marketers on maximizing their martech capabilities. Jennifer formerly organized the Inbound Marketing Summit and holds a certificate in Digital Marketing Analytics from MIT Sloan School of Management.


      216 Most Popular Web Safe PNG Colors

We have analyzed all of our 16 million PNG images, converted every single pixel (about 19 trillion in total) to a web safe HEX value and stored that data in our database. Below you can find the web safe colors ranked by popularity.
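For the curious, the web safe palette allows each RGB channel only six values, the multiples of 51 (0x33), which is what yields 6 × 6 × 6 = 216 colors. A sketch of how a pixel could be snapped to its nearest web safe HEX value (an assumption about the method; the exact conversion pipeline isn’t described above):

```python
WEB_SAFE_STEP = 51  # 0x33; channels take values 0, 51, 102, 153, 204, 255

def to_web_safe(r: int, g: int, b: int) -> str:
    """Snap an RGB pixel to the nearest web safe color, returned as HEX.
    Each channel is rounded to the nearest multiple of 51."""
    def snap(channel: int) -> int:
        return round(channel / WEB_SAFE_STEP) * WEB_SAFE_STEP
    return "#{:02X}{:02X}{:02X}".format(snap(r), snap(g), snap(b))

print(to_web_safe(130, 7, 250))  # -> #9900FF
```

Running this over every pixel of every image and tallying the resulting HEX values would produce exactly the kind of popularity ranking shown below.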


      • Mostly desaturated dark red
      • Slightly desaturated red
      • Very dark desaturated red
      • Mostly desaturated dark yellow
      • Slightly desaturated yellow
      • Dark moderate orange
      • Very dark desaturated yellow
      • Mostly desaturated dark cyan
      • Dark moderate red
      • Very pale red [Pink tone]
      • Mostly desaturated dark blue
      • Very dark desaturated cyan
      • Very dark desaturated blue
      • Slightly desaturated blue
      • Slightly desaturated cyan
      • Very light orange
      • Very dark yellow [Olive tone]
      • Dark moderate blue
      • Dark moderate yellow
      • Mostly desaturated dark lime green
      • Very dark orange [Brown tone]
      • Very dark desaturated lime green
      • Mostly desaturated dark magenta
      • Slightly desaturated lime green
      • Dark orange [Brown tone]
      • Dark moderate green
      • Very dark desaturated magenta
      • Dark moderate cyan
      • Pure (or mostly pure) yellow
      • Slightly desaturated magenta
      • Dark moderate pink
      • Dark orange [Brown tone]
      • Dark moderate blue
      • Dark moderate lime green
      • Very dark magenta
      • Pure (or mostly pure) red
      • Dark moderate cyan – lime green
      • Pure (or mostly pure) orange
      • Moderate cyan – lime green
      • Very dark yellow [Olive tone]
      • Very pale lime green
      • Very light yellow
      • Moderate lime green
      • Very dark lime green
      • Dark moderate violet
      • Pure (or mostly pure) orange
      • Dark yellow [Olive tone]
      • Very dark cyan – lime green
      • Dark moderate magenta
      • Pure (or mostly pure) yellow
      • Very pale magenta
      • Strong cyan – lime green
      • Dark cyan – lime green
      • Pure (or mostly pure) red
      • Pure (or mostly pure) blue
      • Strong lime green
      • Strong cyan – lime green
      • Dark cyan – lime green
      • Very dark lime green
      • Pure (or mostly pure) cyan
      • Pure (or mostly pure) blue
      • Pure (or mostly pure) red
      • Pure (or mostly pure) blue
      • Very dark magenta
      • Very light violet
      • Strong lime green
      • Strong cyan – lime green
      • Pure (or mostly pure) pink
      • Pure (or mostly pure) lime green
      • Very light cyan – lime green
      • Pure (or mostly pure) pink
      • Strong cyan – lime green
      • Pure (or mostly pure) cyan
      • Very light lime green
      • Pure (or mostly pure) blue
      • Very light magenta
      • Pure (or mostly pure) yellow
      • Light cyan – lime green
      • Pure (or mostly pure) magenta
      • Light cyan – lime green
      • Pure (or mostly pure) lime green
      • Pure (or mostly pure) green
      • Pure (or mostly pure) magenta
      • Pure (or mostly pure) green
      • Pure (or mostly pure) cyan
      • Pure (or mostly pure) violet
      • Pure (or mostly pure) blue
      • Vivid cyan – lime green
      • Pure (or mostly pure) lime green
      • Pure (or mostly pure) magenta
      • Vivid cyan – lime green
      • Pure (or mostly pure) cyan – lime green
      • Pure (or mostly pure) violet
      • Pure (or mostly pure) cyan – lime green


      New Logo and Identity for Warner Bros. by Pentagram



Established in 1923, Warner Bros. is one of the most well-known entertainment companies, engaged in the creation, production, distribution, licensing, and marketing of content across feature films, television, home entertainment production, animation, comic books, video games, product and brand licensing, and broadcasting. Its library consists of more than 100,000 hours of programming, including over 8,600 feature films and 5,000 television programs. Among its most prized properties are the DC universe, Harry Potter, Lord of the Rings, Friends, and The Big Bang Theory. Owned by WarnerMedia (which is owned by AT&T), Warner Bros. employs between 5,000 and 10,000 people depending on what’s in production, and its picture division had over $5.57 billion in worldwide receipts in 2018. Last week, Warner Bros. introduced a new identity designed by New York, NY-based Pentagram partner Emily Oberman.

      The Warner Bros. shield is one of the most iconic logos in the world, visual shorthand for entertainment recognized around the globe. […] Warner Bros. wanted to build on this legacy and make the shield more functional and effective. The previous iteration, introduced in 1993, was highly detailed and hard to use at a small scale and in digital contexts, which are increasingly important.

      The update streamlines the logo to its key elements, returning the shield and monogram to prominence and losing the sash. The redesign refines the shield with a form based on the classical proportions of the golden ratio. The designers looked at the construction of the letterforms of the “WB” monogram, preserving their quirkiness but making them more modern. The letters of the monogram align as though made in one continuous gesture, emphasizing unity and connection.

      Pentagram project page

Before-to-after animation.
Golden ratio: Yes? No?

      The logo has been optimized to perform across various platforms and scales, from the small spaces of the digital world to giant installations like the iconic water tower on the Warner Bros. studio lot. It also works well with a wide range of content. The logo appears in the signature Warner Bros. blue, which has been brightened to a more contemporary hue, with the wordmark set off in a slightly darker shade to create a complementary contrast.

      Pentagram project page


      The team also created a dimensional version of the logo, to be used exclusively for on-screen content and special cases. The dimensional mark has the clean, streamlined look of the new logo, but with a depth that hints at the content experience. The logo can be customized for the opening and closing moments of individual movies and shows. It can also function as a window for imagery and sequences, using the edge of the shield as a frame.

      Pentagram project page

Flat and shaded versions.
Monogram detail.

I hadn’t realized how much I do not like the old logo — obviously it’s a classic, but I think that’s thanks to its repetition (and association with entertaining entertainment), not its merits as a piece of graphic design. The gradients, the ring, the flimsy serif for the name, and the color combination were all pretty garish. Underneath all of that is the one good thing about this logo: the “WB” lettering. The new logo effectively brings that to the fore by stripping away all of the effects that have accrued over the years and creating a more interesting proportion for the shield, which was awkwardly wide before. Golden ratio malarkey aside, the taller design looks so much better with the lettering, and it also better accentuates the letters’ peculiarities. As much as I like the flat version, I think in this case the shaded version might be better as an evolution of the logo so many people have grown accustomed to — but, certainly, having the flat version is very beneficial as a starting point for all the movie customizations that are so popular nowadays. The new wordmark is quite lovely and I really like the blue tone-on-tone approach.

      Logo usage variations.

      Logo style variations.
      Logo as window.

      The simplified shield can take on a number of different styles much more efficiently than the old logo and can do so across any medium, from movies to TV to print. The shield as window (above)… a little trite but undoubtedly efficient.

      The distinctive monogram has been expanded into a custom typeface, Warner Bros. Condensed Bold, used for the wordmarks of the various divisions and other display typography. Designed by Pentagram and expanded into a full family of fonts by Jeremy Mickel, the typeface has a look and feel that is uniquely Warner Bros., with condensed letterforms that relate to the elongated “WB” in the shield. Details in the logo’s letterforms are echoed in the font; for instance, the curvature of the “R” references the redrawn “B.” Like the redrawn logo, the typeface carries a sense of the company’s history, but is clean, modern and timeless.

      Pentagram project page

      Custom type family by MCKL.

      I LOVE the custom type family. It has such a great balance of corporate-ness and fun-ness that is very hard to achieve. It’s like a comic book version of Interstate and, I dunno, I just think it’s working on all fronts.

Business cards.
Social media accounts.

      Not much in terms of application but the few institutional materials shown are quite nice. Nothing super extraordinary or fun but very lively with the use of the bright blue and the single-color new logo.

      Warner Bros.’ iconic water tower.

      Overall, this is a great evolution that makes the new logo more easily adaptable to the different content while giving the iconic “WB” lettering a shot at lasting another 100 years, even if someone puts a ring around it again — which is bound to happen when some future designer 30 years from now thinks that that nostalgic approach was the bomb.



      google it —

      Google is sometimes hands-on under the hood, and investigators want to know more.

Mountain View, Calif.—May 21, 2018: Exterior view of a Googleplex building, the corporate headquarters of Google and parent company Alphabet.

Google and its parent company, Alphabet, have their metaphorical fingers in a hundred different lucrative pies. To untold millions of users, though, “to Google” something has become a synonym for “search,” the company’s original business—a business that is now under investigation as more details about its inner workings come to light.

A coalition of attorneys general investigating Google’s practices is expanding its probe to include the company’s search business, CNBC reports, citing people familiar with the matter.

      Attorneys general for almost every state teamed up in September to launch a joint antitrust probe into Google. The investigation is being led by Texas Attorney General Ken Paxton, who said last month that the probe would first focus on the company’s advertising business, which continues to dominate the online advertising sector.

      Paxton said at the time, however, that he’d willingly take the investigation in new directions if circumstances called for it, telling the Washington Post, “If we end up learning things that lead us in other directions, we’ll certainly bring those back to the states and talk about whether we expand into other areas.”

      Why search?

Google’s decades-long dominance in the search market may not be quite as organic as the company has implied, according to The Wall Street Journal, which published a lengthy report today delving into how Google’s black-box search process actually works.

      Google’s increasingly hands-on approach to search results, which has taken a sharp upturn since 2016, “marks a shift from its founding philosophy of ‘organizing the world’s information’ to one that is far more active in deciding how that information should appear,” the WSJ writes.

      Some of that manipulation comes from very human hands, sources told the paper in more than 100 interviews. Employees and contractors have “evaluated” search results for effectiveness and quality, among other factors, and promoted certain results to the top of the virtual heap as a result.

      One former contractor the WSJ spoke with described down-voting any search results that read like a “how-to manual” for queries relating to suicide until the National Suicide Prevention Lifeline came up as the top result. According to the contractor, Google soon after put out a message to the contracting firm that the Lifeline should be marked as the top result for all searches relating to suicide so that the company algorithms would adjust to consider it the top result.

      Or in another instance, sources told the WSJ, employees made a conscious choice for how to handle anti-vax messaging:

      One of the first hot-button issues surfaced in 2015, according to people familiar with the matter, when some employees complained that a search for “how do vaccines cause autism” delivered misinformation through sites that oppose vaccinations.

      At least one employee defended the result, writing that Google should “let the algorithms decide” what shows up, according to one person familiar with the matter. Instead, the people said, Google made a change so that the first result is a site called—which states on its home page in large black letters, “They f—ing don’t.” (The phrase has become a meme within Google.)

      The algorithms governing Google’s auto-complete and suggestion functions are also heavily subject to review, the sources said. Google says publicly it doesn’t allow for predictions related to “harassment, bullying, threats, inappropriate sexualization, or predictions that expose private or sensitive information,” and that policy’s not new. The engineer who created the auto-complete function in 2004 gave an example using Britney Spears, who at the time was making more headlines for her marriages than for her music.

      The engineer “didn’t want a piece of human anatomy or the description of a sex act to appear when someone started typing the singer’s name,” as the paper describes it. The unfiltered search results were “kind of horrible,” he added.

      The company has since maintained an internal blacklist of terms that are not allowed to appear in autocomplete, organic search, or Google News, the sources told the WSJ, even though company leadership has said publicly, including to Congress, that the company does not use blacklists or whitelists to influence its results.

      The modern blacklist reportedly includes not only spam sites, which get de-indexed from search, but also the type of misinformation sites that are endemic to Facebook (or, for that matter, Google’s own YouTube).

      Why antitrust?

Google relying on human intervention and endlessly tweaking its algorithms, as the WSJ describes, isn’t an antitrust violation. When it uses its trove of data from one operation to make choices that may harm competitors to its other operations, though, that can draw attention.

      All that human intervention and algorithmic tweaking also affects advertising and business results, according to the WSJ. Those tweaks “favor big businesses over smaller ones,” the paper writes, “contrary to [Google’s] public position that it never takes that type of action.”

      The largest advertisers, including eBay, have received “direct advice” on how to improve their search results after seeing traffic from organic search drop, sources told the paper. Smaller businesses, however, have not been so lucky, being left instead to try to figure out the systems either bringing them traffic or denying them traffic on their own.

      Links to Google’s own features and properties also take up an increasingly large percentage of the search results page, the WSJ notes. For example, if you search for one of today’s chart-toppers, such as Beyoncé, you’re greeted with three large Google modules that take up more than half the screen real estate:

Most of the results on the page are Google modules (highlighted in red).

      More than half of Google searches are now reportedly “no-click” searches, where individuals look only at the page of results and use the snippets on it rather than clicking through to any of the sources from which Google is drawing that information. That kind of use of data, among others, could be considered harmful to competition, since the company is using data collected from competitors to keep users from going to those competitors.

      Google, for its part, disputed the WSJ’s findings throughout, telling the paper, “We do today what we have done all along, provide relevant results from the most reliable sources available.”


      Information foraging is the most important concept to emerge from human-computer interaction research since 1993. Developed at the Palo Alto Research Center (previously Xerox PARC) by Stuart Card, Peter Pirolli, and colleagues, information foraging uses the analogy of wild animals gathering food to analyze how humans collect information online.

      To say that web users behave like wild beasts in the jungle sounds like a joke, but there’s substantial data to support this claim. Animals make decisions on where, when, and how to eat on the basis of highly optimized formulas. Not that critters run mathematical computations, but rather that suboptimal behaviors result in starvation, and thus there are fewer offspring that follow those behaviors in subsequent generations. After thousands of generations, optimal food-gathering behavior is all that’s left.

      Humans are under less evolutionary pressure to improve their web use, but basic laziness is a human characteristic that might be survival-related (don’t exert yourself unless you have to). In any case, people like to get maximum benefit for minimum effort. That’s what makes information foraging a useful tool for analyzing online media.

      Information Scent: Predicting a Path’s Success

      Information foraging’s most famous concept is information scent: users estimate a given hunt’s likely success from the spoor: assessing whether their path exhibits cues related to the desired outcome. Informavores will keep clicking as long as they sense (to mix metaphors) that they’re “getting warmer” — the scent must keep getting stronger and stronger, or people give up. Progress must seem rapid enough to be worth the predicted effort required to reach the destination.

      The most obvious design lesson from information scent is to ensure that links and category descriptions explicitly describe what users will find at the destination. Faced with several navigation options, it’s best if users can clearly identify the trail to the prey and see that other trails are devoid of anything edible.

      Don’t use made-up words or your own slogans as navigation options, since they don’t have the scent of the sought-after item. Plain language also works best for search engine visibility: searching provides a literal match between the words in the user’s mind and the words on your site.

      Secondly, as users drill down the site, each page should clearly indicate that they’re still on the path to the food. In other words, provide feedback about the current location and how it relates to users’ tasks.

      Diet Selection: What to Eat

      A fox lives in a forest with two kinds of rabbits: big ones and small ones. Which should it eat? The answer is not always “the big rabbits.”

      Whether to eat big or small depends on how easy a rabbit is to catch. If big rabbits are very difficult to catch, the fox is better off letting them go and concentrating exclusively on hunting and eating small ones. If the fox sees a big rabbit, it should let it pass: the probability of a catch is too low to justify the energy consumed by the hunt.
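The fox’s tradeoff can be sketched numerically. This is just a toy illustration of the diet-selection idea: the energy values, catch probabilities, and hunt costs below are made-up assumptions, not data from foraging research:

```python
# Toy sketch of the diet-selection ("prey choice") tradeoff described above.
# All numbers are illustrative assumptions.

def gain_rate(energy, catch_prob, hunt_cost):
    """Expected net energy per hunt attempt."""
    return energy * catch_prob - hunt_cost

big_rabbit   = gain_rate(energy=1000, catch_prob=0.05, hunt_cost=120)
small_rabbit = gain_rate(energy=200,  catch_prob=0.60, hunt_cost=40)

# With these numbers a big-rabbit hunt is a net loss per attempt,
# so the optimal fox lets big rabbits pass and eats only small ones.
print(big_rabbit, small_rabbit)  # -70.0 80.0
```

The same shape of calculation underlies the web analogy that follows: users chase the content that promises the best payoff for the effort of catching it.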

      The big difference between websites and rabbits is that websites want to be caught. So how can you design a site to make your content attractive to ravenous beasts?

      The two main strategies are to make your content look like a nutritious meal and signal that it’s an easy catch. These strategies must be used in combination: users will leave if the content is good but hard to find, or if it’s easy to find but offers only empty calories.

      This dual strategy is the reason I recommend that you showcase sample content on the homepage (appear nutritious) and prominently display navigation and search features (demonstrate that users can easily find what they’re looking for). Diet selection also supports the traditional advice against splash screens and vacuous content. These elements convey to users that they’re in for a tedious ordeal that serves up only scrawny rodents as rewards.

      Patch Leaving: When to Hunt Elsewhere

Patchy environments often feature several different areas where game congregate. So where should predators hunt? In whatever patch has the most prey, of course. But after they’ve eaten some of that game, then what? Continue to hunt in the same patch, or move to another one? The answer depends on how far it is to the next patch.

      If getting to the next patch is easy, predators are better off moving on. No need to deplete all the game in the current patch; once their next morsel becomes a bit difficult to find, they can move to richer hunting grounds. On the other hand, if it’s difficult to move (say, if they have to cross a river), they’re likely to hunt each patch more extensively before going to the next one.
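This patch-leaving logic is essentially the marginal value theorem from foraging theory: stay until the patch’s marginal yield drops below the average rate achievable elsewhere. A rough numerical sketch (the gain curve and all parameters are illustrative assumptions, not from the original research) shows that harder travel implies longer stays:

```python
# Rough sketch of the patch-leaving tradeoff: gain in a patch shows
# diminishing returns, and the forager picks the stay time that maximizes
# average gain per unit time, travel included. Parameters are assumptions.
import math

def best_stay(travel_time, total=100.0, rate=0.5, step=0.01):
    """Stay time that maximizes average gain per unit time, incl. travel."""
    def avg_rate(t):
        gain = total * (1 - math.exp(-rate * t))  # diminishing returns
        return gain / (t + travel_time)
    times = [i * step for i in range(1, 5000)]
    return max(times, key=avg_rate)

# The harder it is to reach the next patch, the longer the optimal stay:
print(best_stay(travel_time=1.0) < best_stay(travel_time=10.0))  # True
```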

      On the web, each site is a patch, and each site’s information is its tasty venison.

      Moving between sites has always been easy. But, from an information foraging perspective, it used to be best if users stayed put because the vast majority of websites were horrible and the probability that the next site would be any good was extremely low. I thus advised early website designers to follow two design strategies:

      • First, convince users that the site is worthy of their attention. As I described above, this means having good information and making it easy to find.
      • Second, once they arrive, make it easy for users to find even more good stuff so that they stay rather than go elsewhere. An entire movement was devoted to the idea of sticky sites and extended visits.

      In the last few years, Google has reversed this equation by emphasizing quality in its sorting of search results. It is now extremely easy for users to find other good sites.

      Information foraging predicts that the easier it is to find good patches, the quicker users will leave a patch. Thus, the better search engines get at highlighting quality sites, the less time users will spend on any one site.

      The growth of always-on broadband connections also encourages this trend toward shorter visits. With dial-up, connecting to the internet is somewhat difficult, and users mainly do it in big time chunks. In contrast, always-on connections encourage information snacking, where users go online briefly, looking for quick answers. The upside is that users will visit more frequently, since they have more sessions, will find you more often, and will leave other sites faster.

The patch-leaving model thus predicts that visits will become ever shorter; Google and always-on connections have changed which design strategies are most fruitful.

      Better intra-site navigation and better site maps may tip the balance slightly back in favor of longer stays, but it’s safest to assume that users’ visits to any individual website will become ever shorter.

      Informavore Navigation Behavior

      Information foraging presents many interesting metaphors and mathematical models for analyzing user behavior. The most important concept is simply that of cost-benefit analysis for navigation, where users have to make tradeoffs based on two questions:

      • What gain can I expect from a specific information nugget (such as a web page)?
      • What is the likely cost to discover and consume that information? (Cost is typically measured in time and effort, though it could include a monetary component in a micropayment system.)
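The two bullets above can be turned into a toy scoring rule: pick the navigation option with the best expected gain per unit of expected cost. The labels and numbers below are invented purely for illustration:

```python
# Toy sketch of the informavore's cost-benefit analysis: score each
# navigation option by expected gain per unit of expected cost.
# All links and numbers are made up for illustration.

links = [
    {"label": "Pricing",        "expected_gain": 8, "expected_cost": 2},
    {"label": "About us",       "expected_gain": 1, "expected_cost": 1},
    {"label": "Full catalogue", "expected_gain": 9, "expected_cost": 6},
]

def value(link):
    return link["expected_gain"] / link["expected_cost"]

best = max(links, key=value)
print(best["label"])  # Pricing (8/2 = 4.0 beats 1.0 and 1.5)
```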

      Both questions involve estimates, which users can make either from experience or from design cues. Website designers can thus influence the analysis by designing to enhance user expectations of gains and reduce their expectations of costs. Ultimately, of course, what the site actually delivers is more important, but you’ll never get experienced repeat visitors unless their first encounter is fruitful.

      As the patch-leaving model demonstrates, users optimize cost-benefit relative to personal criteria and within a system that’s larger than any single website. In addition to the detailed insights offered by individual models, it’s healthy to remember that users are selfish, lazy, and ruthless in applying their cost-benefit analyses.


      Like most 80s kids, my first gaming system was an NES. But I became quite the rebel when I purchased my second system, a Sega Genesis. The console wars were in full force, and I was one of only a handful at my school to join the dark side of Sega, as opposed to the much more popular Super Nintendo. The Sega Genesis was my first big purchase as a kid, and I mowed yards all summer long (along with selling my NES in a garage sale) to place my $189.99 order from the Sears Catalog. I’m not much of a gamer nowadays (4 kids will do that to you), but I still have my old Sega Genesis stored away, just waiting to be re-discovered.

      This list covers the second (1976) through eighth (present) generation consoles. According to Wikipedia, there were 687 first-generation consoles produced, so I decided that was a rabbit hole I didn’t want to enter. I had fun designing the page to look like an old video game ad or one of those posters that came in Nintendo Power. The TV screen borders even made me nostalgic for playing games on an old crappy 19-inch TV. As always, let me know if I missed something.

      • 3DO Interactive Multiplayer


      • Amstrad GX4000


      • APF Imagination Machine


      • APF-MP1000


      • Apple Bandai Pippin


      • Atari 2600


      • Atari 5200


      • Atari 7800


      • Atari XEGS


      • Atari Jaguar


      • Atari Jaguar CD


      • Bandai Super Vision 8000


      • Bandai Playdia


      • Capcom CPS Changer


      • Casio PV-1000


      • Casio Loopy


      • ColecoVision


      • Commodore 64


      • Commodore CDTV


      • Commodore Amiga CD32


      • Daewoo Electronics Zemmix


      • Emerson Arcadia 2001


      • Emerson Leisure Vision


      • Epoch Cassette Vision


      • Epoch Super Cassette Vision


      • Fairchild Channel F


      • Fujitsu FM Towns Marty


      • Funtech Super A’Can


      • Gakken Compact Vision TV Boy


      • Interton VC 4000


      • Konami Picno


      • Konix Multisystem


      • LJN Video Art


      • Magnavox Odyssey2


      • Mattel Intellivision


      • Mattel HyperScan


      • Memorex VIS


      • Midway Bally Astrocade


      • Microsoft Xbox


      • Microsoft Xbox 360


      • Microsoft Xbox One


      • Milton Bradley Vectrex


      • NEC PC Engine


      • NEC TurboGrafx-16


      • NEC Super CD-ROM2


      • NEC SuperGrafx2


      • NEC PC-FX


      • Nichibutsu My Vision


      • Nintendo Entertainment System


      • Nintendo Family Computer (Famicom)


      • Nintendo Super Nintendo


      • Nintendo Super Famicom


      • Nintendo Satellaview


      • Nintendo 64


      • Nintendo 64DD


      • Nintendo GameCube


      • Nintendo iQue Player


      • Nintendo Wii


      • Nintendo Wii U


      • Nintendo Switch


      • Philips Videopac G7400


      • Philips CD-i


      • Pioneer LaserActive


      • Pioneer LaserActive Mega LD


      • RCA Studio II


      • RDI Halcyon


      • Sega SG-1000


      • Sega Mark III


      • Sega Master System


      • Sega Tyco Video Driver


      • Sega Genesis


      • Sega Mega Drive


      • Sega CD


      • Sega Mega CD


      • Sega Mega CD 2


      • Sega Pico


      • Sega 32x


      • Sega Saturn


      • Sega Dreamcast


      • Sega Advanced Pico Beena


      • SNK Neo-Geo AES


      • SNK Neo-Geo CD


      • Sony PlayStation


      • Sony PlayStation 2


      • Sony PlayStation 3


      • Sony PlayStation 4


      • SSD Xavix PORT


      • Takara Video Challenger


      • Tomy Tutor


      • Ultravision Video Arcade System


      • View-Master Interactive Vision


      • VM Labs Nuon


      • VTech CreatiVision


      • VTech Socrates


      • VTech V.Smile


      • VTech V.Flash


      • Worlds of Wonder Action Max


      • ZAPiT Game Wave


      • Zeebo



      Measure and optimize performance and user experience


      User-centric performance metrics are a critical tool in
      understanding and improving the experience of your site in a way that
      benefits real users.

      What you’ll learn

      • Introduction
      • Important metrics to measure
      • Define your own metrics


Like most of us, I have my fair share of ideas I want to pursue. For many years there were many books, courses, projects, etc. that I wanted to get to but never really got done. Recently, talking to a friend about a few of my projects that were finished or very close to finished, he mentioned that he has too many ongoing projects to finish anything. This really resonated with me, as I had once been the same but have slowly improved over the last few years. So I was inspired to write this post about how I learnt to finish projects.

I struggled a lot with getting things done: I would have too many projects ongoing at the same time, all with differing levels of completion and progress. Over the course of many iterations, I managed to come up with the following process.

      1. Write down your project ideas

Grab a piece of paper or a blank page in your notebook, grab a cup of coffee (or your beverage of choice), and grab your pen or pencil. The next step is to write down any projects that you are working on or ideas that you are thinking about. No project is too small, no project is too unreasonable. Put everything on the page. I find this helps get the vague cloud of ideas out of my head.

      2. Transfer to a Kanban board

From here I like to take those projects and put them onto a Kanban board; my go-to service is Trello. I sort them into different columns representing their state, as below:

      1. Concept – just an idea at this stage
      2. Promising – an idea that I would like to work on sometime in the future
      3. In Progress – projects that are in progress
      4. Complete – the idea has been executed to a point where it is considered done
      5. Rejected – the idea has been rejected as not feasible, lost interest, etc.

I find this helps me get a view of where my ideas are, as well as giving them a central place to sit, so I never lose an idea.

      Kanban Board

      3. Define

To me, this is the hardest step. Each idea needs to be defined. The goal is to execute on the idea, but what is it exactly? I like to set SMART Goals to help define what the end looks like. If a project’s goal is too open-ended, it is easy to just keep puttering along and never really feel like you finish anything. This list of goals can be as long as you want, but temper it by asking whether you actually want each goal or think it will be useful. I find these goals often help with the project-management side of setting up tasks.

      4. Prioritize

Now that you have everything down on your Kanban board, it is time to prioritize. Within each column, put the more important/urgent tasks towards the top. I’ve found this helps with figuring out what to do first.

      5. Constrain

Those familiar with Kanban will know about WIP limits (Work in Progress limits), where each column has a cap on how many items it can hold. For my board I use a limit of 2 on In Progress and 5 on Promising, which I have found works for me. If you have too many tasks in these columns, move the excess back to the previous column. I would warn against having too liberal a limit on In Progress; it is very important to limit the number of concurrent tasks.

I have found this to be the most important step, though granted it takes a lot of self-control. For me, 2 ongoing tasks is enough to keep me focused while leaving enough flexibility to do something else if I get bored.

      WIP Limit on Kanban Board
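The WIP-limit rule from this step can be sketched in a few lines, as a toy model rather than an actual Trello integration; the project names are made up, and the limits match the ones above (2 on In Progress, 5 on Promising):

```python
# Minimal sketch of WIP-limit enforcement on a Kanban board:
# a move into a column is refused when that column is at its limit.

LIMITS = {"In Progress": 2, "Promising": 5}

board = {
    "Concept": ["app idea"],
    "Promising": ["blog redesign"],
    "In Progress": ["CMD Resume", "Fit on a Floppy"],
    "Complete": [],
    "Rejected": [],
}

def move(board, item, src, dst):
    limit = LIMITS.get(dst)
    if limit is not None and len(board[dst]) >= limit:
        return False  # over the WIP limit: finish something first
    board[src].remove(item)
    board[dst].append(item)
    return True

# "In Progress" is already at its limit of 2, so this move is refused:
print(move(board, "blog redesign", "Promising", "In Progress"))  # False
```

Only after something moves to Complete does a slot free up, which is exactly the self-control the step asks for.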

      6. Get working

      This is the fun part! Well, depending on how you view it. Grind towards the goals.

      7. Fall off the wagon

This is going to happen: you are going to fall off the wagon. It happens to me once in a while; don’t worry about it. Maybe your prioritization is wrong. All I can really say is, re-evaluate your ideas’ prioritization and the ideas in the In Progress column.

      8. Finish

      Once I move an idea into Complete I like to have a ceremony. For me it usually ends with a blog post (for example CMD Resume, Fit on a Floppy, Multiline JavaScript Converter, Commit Comp) and a nice meal. Choose another task and go back to Step 6!

      ?. Clean up

Once every few weeks I like to do a clean-up of the board: adding more ideas, rejecting ideas I’ve changed my mind on, moving finished ideas to Complete, moving items to In Progress, reprioritizing, etc. A Kanban board is pointless if it isn’t up to date.

      I hope you are able to find some use in my process. Like I said, it has been an evolving process which I’ve moulded for my purpose, but your mileage might vary and customization is required.

      Hacker News Discussion


      People who have been following my blog for ages, know that I have a soft spot for these kind of creations as I’ve written this tutorial before on my previous blog. Today, I’m reviving it with a bit of new material…

      For this tutorial I’ll show you how you can create a full spectrum spirograph in only a few simple steps (the 1st one in the row in the image below). Make sure to download the Illustrator .ase file which includes all the color swatches for this tutorial.

      Examples of spirographs explained in this tutorial. We'll be creating the full spectrum one (1st one in the row).


      1. Draw a Circle and Duplicate it

First make sure you have View > Snap to Point and View > Smart Guides enabled. Select the Ellipse tool (located under the Rectangle tool) and draw a perfect circle; the size doesn’t really matter. Apply the first yellow swatch color as fill and no border color. Now select the Group Selection tool, hold down the Option/Alt and Shift keys, and click & drag the circle to the right until its left anchor point reaches the center point of the original circle.

      Draw a circle and duplicate it


      2. Intersect

Select both circles and choose the Intersect option from the Pathfinder panel. We end up with a simple leaf shape that we will rotate around in the next step, but first we’ll adjust the fill appearance a little. Choose Multiply mode and 25% opacity in the Transparency panel. This way we’ll achieve a nice overlapping color effect.

Go to Pathfinder and choose the Intersect option


3. Duplicate Rotate at 10°

Now comes the very simple math part. We have 36 colors in total in our swatches panel, which is 1 color per shape. So we’ll repeat this leaf shape 36 times around the circle, which means we’ll rotate it at 10° (360/36).

      With the object still selected, choose the Rotate tool and Option/Alt click into the bottom anchor point. In the window that appears enter 10° as Angle value, and click the Copy button. Apply the 2nd yellow swatch to the copied leaf.

Duplicate Rotate at 10°

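For anyone curious about the underlying arithmetic, the repeated copy-rotate can be sketched in a few lines. Illustrator does all of this for you; the coordinates below are hypothetical stand-ins for the leaf shape and its bottom anchor point:

```python
# Sketch of the rotation math behind the copy-rotate step: 36 copies spaced
# 360/36 = 10 degrees apart, each rotated about the shared anchor point.
# Uses complex numbers for the 2-D rotation; coordinates are made up.
import cmath, math

pivot = complex(0, 0)          # the bottom anchor point
leaf_tip = complex(0, 5)       # some point on the leaf shape

copies = []
for i in range(36):
    angle = math.radians(10 * i)          # 0, 10, 20, ... 350 degrees
    rotated = pivot + (leaf_tip - pivot) * cmath.exp(1j * angle)
    copies.append(rotated)

print(len(copies))  # 36 copies; a 37th rotation would land back on the 1st
```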

      4. Transform Again & Apply Next Swatch

      Now we need to duplicate this rotation another 34 times in a row. Luckily we can do this pretty fast using the Transform Again shortcut. Hit Cmd/Ctrl D to apply Object > Transform Again. Apply the 3rd color swatch.

      Transform Again & Apply Next Swatch


      5. Repeat step 4

      Hit Cmd/Ctrl D again, and apply the 4th color…

      Transform again...


      6. Keep Repeating

      Keep repeating these steps again, and again…

      Keep on repeating the steps...


      7. Until Full Spectrum

      Keep repeating until you end up with a full spectrum spirograph.

      Keep on repeating until you end up with the full spectrum


      8. Experiment with other shapes, angles & colors

      Now that you understand the basic technique, it’s easy to experiment to achieve different effects and results. When you combine Transform Again with Transform Each (Object > Transform > Transform Each…), you can achieve a very interesting repetitive effect. With Transform Each you can rotate, scale and move an object in one single action. Here are a couple of experiments I tried out:

      Spirograph example 1

      Squared by -10°

Here’s a very easy one for starters: draw a big square with a 10% opacity fill color in Multiply mode. Go to Object > Transform > Transform Each… Enter 85% for Horizontal and Vertical Scale, Rotate at -10°, and click Copy. Now hit Cmd/Ctrl D 8 times in a row.
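Mathematically, each Cmd/Ctrl D repeats the same scale-and-rotate, so copy n is at 0.85ⁿ of the original size and rotated a total of -10°·n. A quick sketch, using the values from this example:

```python
# Each Transform Again copy is the previous shape scaled to 85% and rotated
# a further -10 degrees, so the cumulative transform of copy n is simply
# 0.85**n scale and -10*n degrees of rotation.

scale_step, angle_step = 0.85, -10.0
copies = [(scale_step ** n, angle_step * n) for n in range(9)]  # square + 8 copies

for n, (s, a) in enumerate(copies):
    print(f"copy {n}: scale {s:.3f}, rotation {a:.0f} deg")
# the last copy ends up at roughly 27% of the original size, turned -80 deg
```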


Here I started from a triangular shape that I repeatedly rotated at 18°. I used Transform Each, entered a rotation value of 18°, and hit the Copy button. Next, I hit Cmd/Ctrl D 5 times in a row to duplicate this rotation effect 5 more times. Lastly, I applied a different color to each shape, using 100% opacity in Screen mode.

      spirograph example 2
      spirograph example 3


This is a more complex one that I achieved purely through experimenting, starting from the same triangular shape that I repeatedly rotated, scaled, and moved using the Transform Each method. The different colors were also added afterwards, at 40% opacity in Multiply mode. Try it out with another shape, and try another reference point instead of the centered one to achieve surprising results.


This is maybe the most complex one of all, and it took me a lot of experimenting to reach this end result. I started from a circle that I first copy rotated at 120° to end up with 3 circles, using the bottom center area of the circle as the rotation reference point. If you look at the white dotted lines you can see the composition of the three circles. Then I copy rotated these 3 circles together at 48°.

      spirograph example 4

      Below you’ll find a visual of the different steps to take to complete this spirograph. First create a circle and apply a radial gradient going from white on the inside to aqua blue on the outside and apply 50% opacity. Then copy rotate the circle 2x at 120°. Group the 3 circles, and apply Color Burn as layer mode effect. Next, copy rotate this group at 48°. Apply a pink radial gradient and change the color mode of the group to Multiply. Then copy rotate this group again at 48°, and apply a yellow radial gradient.

      Different steps to take to create the 48° spirograph

These examples are just the tip of the iceberg of what’s possible. The key is experimenting, and you might be surprised by the results you achieve. I usually don’t have an image in my head of what I want as the end result, and if I do, it’s vague at best. The rest I leave to experimenting until I achieve something interesting to build upon; then I tweak by trying another color, layer mode, transparency value, rotation angle, reference point, etc.

Hope you like this tutorial and get the hang of experimenting in this area of spirographs. In the next tutorial we’ll keep exploring spirographs, but we’ll use yet another powerful tool to achieve similarly complex results. So stay tuned!


      It can be difficult and frustrating digging around the web to find the perfect font for a project — only to realize it comes with a formidable price tag or isn’t applicable for commercial use. To help save you some time down the line, we’re sharing our five favorite font sites you can comfortably use without having to worry about cost or usage limitations. In our humble opinion, these are the most reliable places to find free fonts. There are lots of really cool smaller sites that offer amazing and unique fonts, but these sites are the places that serve up lots of downloads — maybe think of them as commodity font sites. They work fast and don’t require you to jump through a lot of hoops to download fonts.

      Once you download a few or a few dozen free fonts, our free online Pixlr Editor app for graphic designers lets you use fonts you’ve installed on your computer. Our Pixlr for Mac/PC app, which some people use to add text to photos, also lets you use fonts you’ve installed on your computer.

      Font Squirrel

Font Squirrel is the best place to find quality, free, commercial-use fonts. Unlike some of the other websites on this list, all of the fonts provided on this website are available for commercial use, so you don’t have to worry about adjusting your search options on their site in any shape or form. But if you do want to expand your options, they offer a small but quality list of “Almost Free” fonts. Font Squirrel likely has the smallest library of fonts on this list, but the quality of the chosen fonts is undeniable, and they are unique. If you have a specific category you’d like to search, choose your options on the right-hand side. The download process is also the most straightforward: simply click the download button next to the font you’ve chosen, unzip it, and double-click the .otf or .ttf file to install. You can probably see why this is our go-to favorite font site; they make it easy for you to find and download fonts, no strings attached. And as if that weren’t already great enough, this website also has a font identifier where you can upload an image of a font you’re trying to match. You can create an account and save favorites if you want.

      What we like best: Well-designed site. The most useful features of any site or service. They even have full lists of glyphs and a way for you to enter text to see how it will look in a particular font face.

      1001 fonts

1001 fonts has a very large collection of fonts. They also have a specific section of fonts that are for commercial use, which you can access directly from this link. Don’t be fooled by the direct-marketing-style name of the site or its appearance. It’s definitely not as pretty as Font Squirrel, but rest assured the quality is still in there: a very large and impressive collection of commercial-free fonts that are high quality and easy to install. Because of the site’s popularity and light community features, there are comments on a majority of the fonts, so you can easily glance at what people are saying prior to downloading. If you want to browse fonts by category, remember to click “Only view fonts that are free for commercial use” before starting to look through the lists. The download process is also quite simple: simply click download next to the font you like, unzip it, and double-click the .otf or .ttf file to install. One thing to note is that the character map on the font pages does not always accurately represent the characters that come with the font; some show a font as having very few characters, but the full set is there once installed. This site also has an account option if you plan on becoming a frequent user.

      What we like best: Quantity. There are a lot of fonts on this site.


The free section of TheHungryJPEG offers a lot of variety in styles and themes to suit a range of designs. Oftentimes these free fonts come with clip art or graphics included in the downloadable packs. Most of these free fonts come with a commercial license, so rest assured that your designs will have the best typefaces, completely free of the legal hassles that come with font usage. Aside from fonts, you can download lots of graphic art, creative clip art, and templates for free, which saves time whenever necessary. Check out their range of add-ons like Lightroom presets, brushes and textures, and even original color palettes to add a splash of creativity to your projects. They release a new freebie weekly and update their free font bundles regularly. TheHungryJPEG also has an assortment of affordable font prices that you probably won’t want to miss out on, especially during seasonal sales like Halloween or Christmas.

      What we like best: Plenty of variety. The fonts available on TheHungryJPEG, whether purchasable or free, are always handpicked curations from independent designers and store-owners. You can’t go wrong here; there’s bound to be a typeface that appeals to you.


DaFont is a great website for free commercial-use fonts, but you need to know how to navigate it properly. The site has been around for a long time, and its age and dated site design really show. Further, there are a lot of possible details to include with fonts, but this site doesn’t always include them, and the font you download might not have a comprehensive collection of glyphs. Then again, you’ll find all kinds of crazy dingbat and symbol font collections on here. If you’re going to search for free commercial-use fonts on this site, it’s probably best to start with a category. Here are three steps to get you on the right path:

      • Search a category
      • Select more options (Note “More options” drop-down)
      • Check off both “free” and “public domain”

      It’s very easy to download fonts from this site once you get the search functions right. Simply click download next to the font you’d like to use. Unzip and double click the .otf or .ttf file to install.

      What we like best: Quirky. This is *the place* for oddball, niche fonts. Need some Elvish runes for your Middle Earth inspired party flyer? They’ve got you covered.

      Font Space

      Font Space is quite similar to DaFont. It has a great selection with tons of categories to choose from, and there are tons of user-submitted fonts, but its design is a little friendlier. Be aware that although Font Space has a big collection of unique fonts, the website does have a bit of a cluttered look with a few too many ads. If that’s just not your cup of tea, the other sites on the list should provide more than enough options. To ensure you’re only looking through free commercial use fonts, follow the steps below:

      • Pick a category
      • Check off “only show commercial use fonts”
      • Click apply

      Simply select the font you’d like to use and click download. Same as the other sites: unzip and double click the .otf or .ttf file to install.

      What we like best: Independent. People who submit fonts on this site seem to be talented solo designers — the kind of people you see on Creative Market. If you are particular about licensing, this site is incredibly explicit about your many options.

The five websites are listed in order from best to least, based on ease of use and the quality of the fonts they’ve curated. The first three on the list are the best of the best when it comes to free commercial-use fonts, but it never hurts to have more options!


      Every interface has a subset of users that make up the majority and minority. The majority of users usually have normal vision, while the minority have some form of visual impairment.

      There’s a big difference between what normal visioned users see versus what color blind and low vision users see. These users tend to experience blurry text and faint elements when text sizes and color contrasts are too low.

      The goal of accessibility is to meet the needs of the minority because they’re often forgotten. But what happens when meeting the needs of the minority ends up failing the needs of the majority? This issue occurs when the interface is made too accessible and isn’t balanced with aesthetics.

      Aesthetic Vs. Accessible

      In general, the more accessible an interface is, the less aesthetic appeal it has. Highly accessible interfaces are easier on the eyes of the visually impaired, but harsher on the eyes of the normal visioned. On the flip side, highly aesthetic interfaces are easier on the eyes of the normal visioned, but harsher on the eyes of the visually impaired.

      This aesthetic-accessibility paradox is what designers struggle with when they design interfaces. The challenge is to meet the needs of both the majority and the minority. However, if you veer too far into one extreme, you’ll alienate a subset of your users. Most people don’t want to alienate the minority. But alienating the majority of your users is just as bad as alienating the minority.

      Below are two forms that illustrate this concept. One form is AAA compliant and accessible to all visually impaired users. The other is not accessible at all but appeals to normal visioned users.


      For the normal visioned, the aesthetic form is easy on the eyes, while the accessible form is harsh. However, for the visually impaired, the accessible form is easier on the eyes, while the aesthetic form is harsher. Which form should you use?

      The correct answer is neither because neither form respects the aesthetic-accessibility paradox. They are designed toward opposite ends of the spectrum, which will either alienate the majority or minority.

      A truly accessible and aesthetic interface falls somewhere in the middle. Below is the form that respects the aesthetic-accessibility paradox. The color hues, contrasts, font sizes, and weights are AA compliant and balanced to meet the needs of both user groups. The result is an interface that’s easy on the eyes for nearly everyone.
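The AA and AAA compliance mentioned above rests on WCAG’s contrast-ratio formula, which can be checked programmatically. Below is a minimal sketch of that math; the colors passed in at the end are illustrative examples, not the actual palette of the forms discussed.

```python
def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color, per the WCAG 2.x definition."""
    channels = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [
        c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        for c in channels
    ]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio between two colors, ranging from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)


# WCAG thresholds for normal-size text: AA requires 4.5:1, AAA requires 7:1.
ratio = contrast_ratio("#000000", "#FFFFFF")  # black text on a white background
print(round(ratio, 1))  # 21.0, the maximum possible contrast
```

A mid-gray like #777777 on white comes out just under 4.5:1, which is exactly the kind of borderline case a balanced, AA-compliant design has to catch.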


      The Majority of the Minority

      Why isn’t an interface that’s balanced with aesthetics and accessibility easy on the eyes for everyone? Within the subset of the minority, there’s another majority and minority. The majority of the minority are users who don’t have extreme visual impairments and will be able to use a balanced design. However, the minority of the minority have extreme visual impairments that will still cause them issues.


      Designing for the smallest minority will make your design accessible to users with extreme visual impairments. However, your design will alienate normal visioned users who make up the majority of your base. For this reason, the best design is a balanced one that satisfies the largest minority.

What about the needs of the smallest minority? Most users with extreme visual impairments rely on assistive technology, such as screen readers or system-level high contrast modes. These high contrast modes allow them to use interfaces that otherwise have low contrast. It’s not necessary to design for the minority of the minority, but rather for the majority of the minority. Designing for the largest minority means making your interface AA compliant.

      Local High Contrast Mode

      Sometimes a highly aesthetic or highly accessible interface is required based on the nature of a project. There’s a way you can provide users with these presentations without alienating any of your audience.

      If you want to maintain a highly aesthetic design, you should provide a local high contrast mode on your interface. A local high contrast mode is a toggle button on the page that allows users to enhance the contrast of text and elements. On the other hand, if you want to provide users with a highly accessible design, make your high contrast mode AAA compliant.
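Conceptually, such a toggle is just a piece of UI state that swaps between two palettes. The sketch below is framework-agnostic Python with hypothetical palette values; in a real page this state would flip a CSS class or theme variable rather than return a dict.

```python
# Hypothetical palettes: a softer "aesthetic" default and a boosted
# high-contrast alternative. Real values would come from your design system.
DEFAULT = {"text": "#555555", "background": "#FAFAFA"}
HIGH_CONTRAST = {"text": "#000000", "background": "#FFFFFF"}


class ContrastToggle:
    """State behind a local high contrast mode button."""

    def __init__(self) -> None:
        self.high_contrast = False

    def toggle(self) -> dict:
        # Flip the mode and return the palette now in effect.
        self.high_contrast = not self.high_contrast
        return HIGH_CONTRAST if self.high_contrast else DEFAULT


button = ContrastToggle()
print(button.toggle())  # first press switches to the high contrast palette
```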

However, the challenge is getting users to notice and use the toggle. Make sure it’s visually prominent, or they’ll overlook it. The example below shows a button for high contrast mode, but it’s in an obscure form and location. If you decide to implement a local high contrast mode, follow these guidelines.


      The Importance of Aesthetics

      Accessibility extremists tend to discount aesthetics. They believe an interface should be as accessible as possible for the minority without considering how it affects the average user. These extremists need to understand and respect the aesthetic-accessibility paradox before demanding the highest degree of accessibility.

      Aesthetics isn’t a subjective and trivial attribute used for ornamentation. It serves an important purpose in the user experience. It determines whether users trust your app, perceive it as valuable, or are satisfied using it. In other words, aesthetics affects user engagement and conversion rate. Discounting it is not only bad for users, but bad for business.

      Striking a Balance

Balancing aesthetics and accessibility isn’t easy, but it’s necessary for a great user experience. The midpoint of the aesthetic-accessibility spectrum is the balance point for designing interfaces that satisfy the most users. Avoid designing at the extreme ends of the spectrum and respect the aesthetic-accessibility paradox.


      Being mindful of this paradox will help you make design choices that include the visually impaired, without excluding the normal visioned. When you’re designing for a wide range of people, extremism toward either an aesthetic or accessible direction is not the best approach. Finding the middle ground is the best way to reach and satisfy as many users as possible.





Hoefler&Co is a New York-based foundry that creates original fonts for individual clients but also sells retail typefaces. Founded in 1989 by Jonathan Hoefler, the company draws inspiration from historical models and was among the first to stylistically reinterpret vernacular motifs such as Soviet house numbers, metal lettering on bus terminals, engraved maps and old gas stations, transforming them into typefaces with a fresh, unique look. Although best known for the Gotham font, its library of more than 1,500 typefaces has spread to everyday objects: today you can find the company’s fonts on every iPhone and can of Coke, and across Twitter, Wired and The Wall Street Journal. Furthermore, its typefaces have been included in the permanent collections of MoMA and the Smithsonian Institution.

      Top WordPress Themes for All Creatives

      Manon WordPress Theme


      Portfolio & Agency Theme

      Cinerama WordPress Theme


      A Theme for Movie Studios and Filmmakers

      Amedeo WordPress Theme


      Multi-concept Artist and Creative Agency Theme


      Facebook is bringing more machine learning-driven capabilities to its ads platform to improve personalization, the company announced this week.

      Dynamic formats and ad creative for Dynamic Ads. Advertisers will now be able to automatically serve different ad formats to audiences based on the machine learning model’s prediction of a user’s format preference. Campaign managers can access this capability from the Facebook Dynamic Ad unit when creating ads for the catalog sales, traffic and conversions objectives in Ads Manager or the API.

      “The dynamic formats and ad creative solution aims to meet people where they are in the customer journey by delivering a personalized version of the ad to everyone who sees it,” Facebook said.

      Multiple text optimization in single-media ads. As we reported last month, Facebook has also rolled out a responsive ad feature that allows advertisers to input multiple text options for the primary text, headline and description fields when building single-media ads for traffic, app installs, and conversions objectives. Facebook said it uses this data to optimize for delivery and performance using variations of the text options provided, based on individual preferences identified by its machine learning models.

      Auto-translated languages for single-media ads. In Ads Manager, advertisers can now add different languages to be auto-translated for international audiences. By automatically producing translations for key languages, this feature can help speed up the campaign setup process while still giving advertisers control over the review process.

Why we should care. For smaller organizations, creating highly personalized content can be challenging when time and resources are limited. By leveraging machine learning to dynamically select ad formats, creative and text at the impression level, advertisers can more efficiently deliver campaigns that reach the right audience at the right time, with less effort.

      More on the news. Facebook machine learning blends data from its platform with target audience insights in order to predict which people might be most receptive to a brand’s message. As people take different actions on and off Facebook, the company explained, intent signals are created to help achieve a more one-to-one experience for consumers served with both organic and paid content. Facebook said it’s committed to its investment in prediction models that can help build stronger business outcomes and better digital experiences for customers.

In a test of 12 e-commerce and retail advertisers, Facebook reported that dynamic formats and dynamic ad creative outperformed carousel-only ads in driving lift across content views, add-to-cart, purchases, and sales. Facebook said the ads showed an average 34% improvement in incremental ROAS, a 10% improvement in lift and a 6% lower cost per incremental purchase compared to carousel-only ads.

      About The Author

      Taylor Peterson is Third Door Media’s Deputy Editor, managing industry-leading coverage that informs and inspires marketers. Based in New York, Taylor brings marketing expertise grounded in creative production and agency advertising for global brands. Taylor’s editorial focus blends digital marketing and creative strategy with topics like campaign management, emerging formats, and display advertising.


      TikTok, the short-form video app owned by ByteDance, is testing shoppable video posts, making it possible for influencers on the platform to place social commerce URLs within their posts. The test was shared by Fabien Bern, owner of a Chinese influencer agency, who posted an example of a shoppable video post via his Twitter account. Bern’s example showed an influencer video that included a URL within the video leading to the influencer’s Amazon product page.

      TikTok later confirmed to Adweek that it was testing the feature, but did not disclose details on whether or not it would receive a wider roll-out. Marketing Land has reached out to TikTok for comment.

      Why we should care

      Social commerce is gaining traction as platforms add new capabilities to support direct-to-consumer and big brands’ e-commerce goals. Platforms are racing to give advertisers tools to make the journey from product discovery to purchase as seamless as possible — Instagram’s in-app check-out feature and Pinterest’s move to expand Shoppable Pins to visual searches are among the recent examples.

      TikTok entering the social commerce fray demonstrates the platform is experimenting with ways to appeal to the commercial interests of creators and potential advertisers. With more than 500 million global users, brands are eyeing opportunities to reach its Gen Z audience.

      More on the news

      About The Author

      Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


      Finally, it’s time to discuss the people you’re trying to reach with your email messages – your Audience. Here we shift to the Optimization section of the Periodic Table of Email Optimization and Deliverability, which we’ve been exploring in recent weeks. 

      The most important element of all is the Email Address (At), the unique identifier for your subscriber. This is the most valuable piece of information that we can collect. 

Email addresses are typically assembled into Lists (Ls), groups of addresses to which email messages are distributed. Each List, however, can be divided into various cohorts — depending on the amount of data you have on them — via Segmentation (Sg). Segmentation allows marketers to choose smaller groups of customers, or potential customers, and communicate with them in a manner that is specifically tailored to their particular demographics, locations or behaviors. Personalization (Me) refers to the practice of using subscriber data to tailor content for individuals based upon the information you have about them.
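Mechanically, segmentation is just slicing one list into cohorts by a profile attribute. The sketch below illustrates the idea; the subscriber records and the "country" field are invented for illustration, and a real ESP would expose this through its own list-management tools.

```python
from collections import defaultdict


def segment(subscribers: list[dict], key: str) -> dict:
    """Group subscriber records into cohorts by the given attribute."""
    cohorts = defaultdict(list)
    for sub in subscribers:
        cohorts[sub[key]].append(sub["email"])
    return dict(cohorts)


# Hypothetical list with one demographic attribute per subscriber.
subscribers = [
    {"email": "a@example.com", "country": "US"},
    {"email": "b@example.com", "country": "DE"},
    {"email": "c@example.com", "country": "US"},
]
print(segment(subscribers, "country"))
# {'US': ['a@example.com', 'c@example.com'], 'DE': ['b@example.com']}
```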

      This information can be collected and augmented through the use of an email Preference Center (Pc), which is an interface for subscribers that allows them to manage their subscription preferences. Preference Centers can be used to allow recipients to tell senders how best to serve them by expressing their interests or indicate how often they’d like to receive communications. 

      Beyond Preference Centers, marketers can learn a lot about subscribers’ likes and dislikes by observing how they interact with messages that are sent. Recipients use an email Client (Cl) to either download their email or access their email through a web interface. The first metric marketers look at is the Open (Op), which occurs when a recipient actually opens an email. Next, marketers can look at whether users Click (Ck), or interact with a link within the email, which demonstrates their Engagement (Eg) with the content. 

      When marketers are analyzing their Audience’s interaction with the emails they’ve sent, two of the metrics that often have an impact on engagement are Send Time (St), the time the email is sent (typically looked at in terms of the recipient’s time zone) and Send Frequency (Sf), how often a given email address or list receives emails from the brand.
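The metrics above reduce to simple ratios over delivered messages. The sketch below shows the usual arithmetic; the function name and the example counts are hypothetical, and real platforms report these figures through their own dashboards.

```python
def engagement_metrics(delivered: int, opens: int, clicks: int) -> dict:
    """Open rate and click rate over delivered messages, plus
    click-to-open rate as a measure of content engagement."""
    return {
        "open_rate": opens / delivered,
        "click_rate": clicks / delivered,
        # Guard against division by zero when nothing was opened.
        "click_to_open_rate": clicks / opens if opens else 0.0,
    }


m = engagement_metrics(delivered=10_000, opens=2_500, clicks=500)
print(m)  # open_rate 0.25, click_rate 0.05, click_to_open_rate 0.2
```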

      More about the Managed Inbox

      About The Author

      Pamela Parker is Senior Editor and Projects Manager at Third Door Media’s Content Studio, where she produces Martech Intelligence Reports and other in-depth content for digital marketers in conjunction with Search Engine Land, Marketing Land, MarTech Today and Digital Marketing Depot. Prior to taking on this role at TDM, she served as Content Manager and Executive Features Editor. Parker is a well-respected authority on digital marketing, having reported and written on the subject since its beginning. She’s a former managing editor of ClickZ and has also worked on the business side helping independent publishers monetize their sites at Federated Media Publishing. Parker earned a masters degree in journalism from Columbia University.


      One would think that with more than 7,000 martech tools available to create exceptional digital customer experiences, businesses would have little trouble building a marketing stack that delivers a seamless customer journey — but that doesn’t appear to be the case, according to Simpler Media’s latest State of Digital Experience report.

      Of the 325 digital customer experience executives surveyed by Simpler Media, 79% said digital customer experience (DCX) was an extremely or very high priority for their organization. Yet, nearly half (46%) reported they had yet to invest in personalization tools or had no budget in place for such technology. Just 9% said they are using personalization tools, while 27% said they are testing them.

      How would you describe your organization’s use of digital customer experience personalization tools?

      “We can’t take personalization out of the broader digital experience,” said Siobhan Fagan, managing editor of CMSWire. Fagan presented the report findings during her keynote at the DX Summit in Chicago. “You need, of course, the technology that makes it all possible.”

      The 9% figure is actually a drop from last year when 14% said they were seeing results with personalization — not a strong sign in terms of how personalization tools or their utilization have evolved over the past year.

      Businesses want more customer insights, but fail to utilize metrics

      Prioritizing DCX, but not personalization tools, wasn’t the only disconnect revealed by the report.

      Ninety percent of the DCX executives surveyed said they lacked insights that explain why their customers are doing what they’re doing, with only 11% reporting that they understood their customers’ behavior well — 33% rated their understanding of their customer behavior as poor. Fifty-two percent said they “moderately” understood their customer behavior (which, honestly, sounds only slightly better than “meh”).

      At the same time, the majority of the survey respondents (57%) reported they were using just three or fewer metrics. Of the companies using one to two metrics, 54% claimed the “effectiveness” of their measurement tools needed work. For the companies using three to four metrics, 48% said the “effectiveness” of the tools needed work.

      This may be changing, however. When asked to name their organization’s DCX investment priorities, 59% — the largest share — listed analytics, insights and dashboarding. Also, nearly a third of the survey respondents said improved analytics was a top customer data management priority for their organization.

      Silos are a top challenge for DCX teams

      When listing their top challenges in terms of creating and implementing effective DCX efforts, 47% said siloed systems and fragmented customer data and 42% reported limited cross-department alignment and collaboration. Arguably, these two challenges — siloed systems and limited cross-department alignment — could fit under the same umbrella. In other words, a large majority of DCX professionals want more connectivity across organization departments.

      This finding backs up customer experience expert Kerry Bodine’s premise that silos are the biggest obstacle in the customer journey. While silos may enable businesses to make things happen at scale that would otherwise be impossible, they are terrible for customers who are trying to accomplish different tasks, says Bodine.

      The DCX industry’s take on martech

      Looking at the nuts and bolts of the systems used to create digital customer experiences, 46% said their current platforms and tools need work. Forty-one percent said the tools they used were satisfactory and 13% reported they worked well. At the same time, 60% reported they were currently evaluating, upgrading or shopping for new CMS platforms or digital experience platforms. Translation: There are more DCX executives who are either not happy — or would rate their tools “satisfactory” — than the number of executives actually looking to invest in new platforms.

      Interestingly, on the CDP front, 23% reported having already purchased a CDP, and more than a third (34%) are evaluating options. Meanwhile, 13% have decided against a CDP and 28% are undecided. It’s a close split, but the fact that more DCX executives are either already committed to using — or are evaluating — a CDP shows the industry is open to adopting systems that can help build a unified view of the customer.

      “Nobody wants to go back to the days when customers are viewed and treated the same,” said Fagan from the DX Summit stage, “We need to look at our digital experience efforts as part of a bigger whole, not piecemealed together.”

      About The Author

      Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


      Smart marketers know that continued training is the key to staying ahead of the competition and achieving professional goals. That’s why they’re attending a completely revamped and redesigned Search Marketing Expo – SMX West.

      Join us in San Jose, February 19-20, 2020, to cross-train on the most important elements of successful search marketing campaigns… you know, the traffic-driving, lead-generating, revenue-boosting ones.

The experts at Search Engine Land — the industry publication of record — have reorganized the agenda into three search marketing lanes with no learning limits: You’ll get double the amount of tactic-rich SEO and SEM sessions SMX is known for, together with brand new programming on digital commerce marketing.

      Here’s a taste of the 60 sessions in store:


SEO

• Site Speed Is A Crucial Ranking Factor. Here’s What You Need To Know.
• Future-Proof Your Sites For Google Core Updates
• The New NoFollow And Snippet Rules


SEM

• Tactical Tips For Creating More Persuasive Ads
• Understanding And Analyzing The Customer Journey
• Automation: How To Push Forward; When To Push Back

      Digital Commerce Marketing (new)

      • Driving Profitable Sales With Amazon Search Ads
      • Optimize Your Holiday Plans To Beat The Competition
      • SEO For E-commerce Category, Product Detail Pages

      Bonus! The Search Engine Land experts have carved out room in the agenda to tackle late-breaking topics (BERT, anyone?), so you’re sure to get the most up-to-date advice and insights possible.

      Register with the confidence of the SMX guarantee — and do it now to lock in our best rates: up to $700 off on-site prices!

      • All Access: The complete SMX experience — all sessions, keynotes, clinics, networking events, and amenities, including hot meals, refreshments, free WiFi, speaker presentation downloads, the SMX mobile app, and more. Book by December 21 and save $450 off on-site rates.
      • All Access Workshop (best value): Maximize your learning with a full-day post-conference workshop. Book by December 21 and save $700 off on-site rates.
• Expo: Interested in growing your network and meeting vendors? This free pass gets you the entire Expo Hall, sponsored sessions, Q&A clinics, refreshments, select networking, WiFi, the SMX mobile app, and more. Book now and it’s… free!

      For over a decade, more than 70,000 marketers from around the planet have trusted SMX with their professional training needs. Make this the year you take your company (and your career) to a new level of success.

      Register now and I’ll see you in San Jose!

      Psst… Need approval from your manager? We’ve created these helpful Get Your Boss On Board resources including talking points, pricing breakdown, and templated letter.

      About The Author

      Lauren Donovan has worked in online marketing since 2006, specializing in content generation, organic social media, community management, real-time journalism, and holistic social befriending. She currently serves as the Content Marketing Manager at Third Door Media, parent company to Search Engine Land, Marketing Land, MarTech Today, SMX, and The MarTech Conference.



      Nearly 2,000 senior marketers like you will attend MarTech® next April 15-17 in San Jose, CA. Join them to learn actionable tactics, best practices, and proven solutions that can help you overcome your toughest marketing challenges.

      The site is up and registration is open! Book your pass now to secure the lowest rates available — up to $900 off on-site prices. (Pro-tip: Register before year end to use up any unallocated 2019 budget.)

      I’ll reach out soon with more details. In the meantime, catch up with coverage from last September’s MarTech Conference in Boston!

      Psst… Planning your company’s 2020 professional development activities? MarTech is a great value for teams. Register now!

      About The Author

      Lauren Donovan has worked in online marketing since 2006, specializing in content generation, organic social media, community management, real-time journalism, and holistic social befriending. She currently serves as the Content Marketing Manager at Third Door Media, parent company to Search Engine Land, Marketing Land, MarTech Today, SMX, and The MarTech Conference.


      As the privacy landscape continues to shift in favor of user privacy, inbox providers are updating their policies and terms of service agreements in efforts to provide more security and transparency to users. But email service platforms and deliverability providers are feeling the effects — and they are trickling down to email marketers who rely on third-party providers that provide inbox data.

By enforcing its terms of service and data use policies, Verizon Media will no longer allow bot-controlled inboxes to report data to providers.

      Why we should care

      Earlier this year, Google tightened its grip on third-party developers that were in violation of its policies and scraping data from Gmail inboxes. Traditional methods of aggregating panel data on message characteristics and campaign performance over time have changed; providers including ReturnPath and eDataSource developed AI-driven panel replicators in response to the changes and to continue delivering panel data to their customers. The trouble with this new panel data is that it mimics human interactions, rather than informing email marketers with real user data.

“If Verizon is making this move, it will lead to systemic changes for marketers, but it is unclear ultimately where this will land,” said Chris Adams, chief technology officer of eDataSource and architect of IntelliSeeds®. “Verizon Media could choose to allow access for deliverability companies to have insights into inbox placement from traditional seed and smart seed solutions. I do believe that inbox placement insights are crucial for marketers. They care about sending email that is valued by their recipients, and they need insights to serve them well.”

      Inbox providers (like Verizon) that are cracking down on bots and AI-driven solutions will need to collaborate with deliverability companies and email service providers to provide inbox insights. Without these insights, email marketers could be left in the dark.

“Companies like eDataSource have played a critical part in helping marketers follow best practices, and send mail that recipients want to read, even when it means sending less email to their subscribers,” said Adams. “If Verizon Media puts a blanket ban on inbox placement monitoring, which will impact all the players in the space, I suspect that Verizon Media may make this type of anonymized data available to deliverability companies and senders.”

      More on the news

      • Google announced in mid-2017 that it would no longer scan Gmail users’ inboxes for ad targeting purposes.
      • Google announced plans to strengthen its security on third-party developers starting in early 2019.
• It is unclear how other major inbox providers, such as Microsoft, will respond to the changes from Google and Verizon Media.

      More about the Managed Inbox

      About The Author

      Jennifer Videtta Cannon serves as Third Door Media’s Senior Editor, covering topics from email marketing and analytics to CRM and project management. With over a decade of organizational digital marketing experience, she has overseen digital marketing operations for NHL franchises and held roles at tech companies including Salesforce, advising enterprise marketers on maximizing their martech capabilities. Jennifer formerly organized the Inbound Marketing Summit and holds a certificate in Digital Marketing Analytics from MIT Sloan School of Management.


      Did you know ESPN the Magazine ceased publication recently? No? Well, you’re not alone. A Washington Post article about the magazine’s impending demise elicited just 15 comments. (A recent WaPo story on Brexit, by contrast, generated almost 1,500 comments.) And one of those 15 comments was “(Yawn) … Nobody cares!!!”

      It’s hardly breaking news that print is no longer a practical platform for delivering news. As ESPN said in a statement explaining its decision to euthanize the magazine: “Consumer habits are evolving rapidly, and this requires ESPN to evolve as well. The only change here is that we are moving away from printing [the magazine] on paper and sending it in the mail.”

      The only change? Maybe for consumers, it’s not a big deal. But for advertisers, the change from print to various other ephemeral platforms is one of the most consequential shifts in the industry in the last 50 years. And the brands that figure out how to adapt to this new marketing landscape will be the ones that dominate.

      Hint: Just sinking your ad budget into mobile devices is not the answer. In fact, it’s a dangerous delusion.

      The cash is always greener on the other side

      Simply put, the Great Mobile Migration largely disregards the role that the consumer’s whereabouts play in purchase decisions.

      Location, location, location — it matters almost as much in advertising as in real estate. Serve an ad in a situation in which the consumer is likely to act, and you will get a good return (assuming a product/market fit, of course). Serve an ad in a situation in which the consumer is unlikely to act, and you are wasting money. Lots of it.

      History shows that the most successful advertising channels have been the ones that have reached consumers in their homes, where the vast majority of purchase decisions are made. Start with newspapers and progress through mail-order catalogs to glossy magazines, radio and finally television, and that’s been the common denominator.

      And yet ad spend has declined across the board on those channels, with TV down 2.2% this year, and print editions of newspapers and magazines down 18%. Meanwhile, led by the dynamic duopoly of Facebook and Google, digital will become the new leader in ad spend for the first time ever in 2019.

      The simple explanation actually isn’t

      The reason for the decline in traditional in-home advertising channels is complicated. The obvious explanation is that magazine ads and TV commercials piggyback on content. And as people have changed the way they consume that content – reading articles and watching video and live-streamed events on personal devices – traditional delivery systems like magazines have fallen out of favor. And advertisers have fled in droves.

      But marketers have struggled to adapt advertising to the new delivery systems. (There’s no way to replicate the old double-truck magazine spread on a small screen, for example.) The advent of mobile has further complicated the process. Now even when reformatted ads reach consumers, those consumers are often outside the home — meaning the likelihood of activation is often remote. In many cases, the ads still don’t work, at least not as effectively as ads in the time-honored in-home channels used to (and still do in some cases).

      Considered purchases reconsidered

To make things even more complicated, the internet has enabled a whole generation of digitally native vertical brands, resulting in a vastly expanded consumer ecosystem. Niche markets abound. Search has made information readily available in such vast quantities that it has changed people’s spending habits. Almost every transaction has become a considered purchase.

      Consider the humble granola bar.

      Not so long ago, a box of granola bars was unequivocally an on-location buy. You made a decision on the spot, in the aisle at the grocery store. There weren’t many brands (or flavors) available, so your decision was based on either a Pavlovian response to a brand name you recognized from a TV or magazine ad or a pre-existing preference (nuts versus no nuts). You might spend a moment reading the labels, but that would be for research. Where else would you get information about granola bars?

      Now, of course, the answer to that question is: from the internet. Google “granola bars” and you’ll get almost 6 million results, including starred reviews. (Just a hunch, but you might want to avoid the bar that one reviewer said “smelled like chemicals or some sort of laundry detergent.”)

      The impact hits home

The psychological impact of even small, inexpensive items becoming considered purchases is significant. Money isn’t the point. The risk isn’t economic, it’s emotional. The more choices that are available to you, and the more information that’s available about those choices, the greater your fear of making a mistake — especially if you’re buying the product for someone else. After weeding through that endless array of granola bars before finally making a choice, the last thing you want is to get that look from your spouse when you get home. What did you buy these for? You know I hate cranberries! (And you don’t want to make things worse by admitting that, um, actually you didn’t know that.)

      Once again, you’ve learned that age-old lesson: The most successful purchase decisions are the ones you make at home, in collaboration with your family.

      Marketers who figure out how best to enable that process with their advertising are the ones who will survive and prosper in a rapidly evolving digital landscape. And the answer isn’t simply to abandon traditional in-home advertising channels. It’s to make them work effectively again.

Opinions expressed in this article are those of the guest author and not necessarily those of Marketing Land.

      About The Author

      Lewis Gersh is founder and CEO of PebblePost, guiding corporate strategy and company vision with over 20 years of board and executive management experience.

      Prior to PebblePost, Lewis founded Metamorphic Ventures, one of the first seed-stage funds, and built one of the largest portfolios of companies specializing in data-driven marketing and payments/transaction processing. Portfolio companies include leading innovators such as FetchBack, Chango, Tapad, Sailthru, Movable Ink, Mass Relevance, iSocket, Nearbuy Systems, Thinknear, IndustryBrains, Madison Logic, Bombora, Tranvia, Transactis and more.

Lewis received a B.A. from San Diego State University and a J.D. and Masters in Intellectual Property from UNH School of Law. Lewis is an accomplished endurance athlete, having competed in many Ironman triathlons, ultra-marathons and parenting.


      Color blind friendly palette

      Colors play a central role in data visualization. But what if your readers are color blind?

      According to Color Blind Awareness, color blindness affects 1 in 12 men (8%) and 1 in 200 women (0.5%). There are an estimated 300 million color blind people worldwide, including Mark Zuckerberg, Bill Clinton and Prince William!

      Color blind friendly palette - famous color blind people

      Optimizing your graphics can help make them more accessible—and that doesn’t mean banishing color from your charts and graphs either. Contrary to popular belief, more than 99% of color blind people can, in fact, see color—just not in the same way as someone who isn’t impacted by color blindness.

      In this guide, we’ll break down the different types of color blindness and their special considerations with respect to data visualization. We’ll also show you how to effectively use color blind friendly palettes to make your graphics available to a wider range of people.

We’ve put together some ready-made color blind friendly palettes to inspire you. Scroll down to section 4 for those.

      Table of Contents

      1. What is color blindness?
      2. What are the 3 types of color blindness?
      3. Why color is important to data visualization
      4. How to design charts with a color blind friendly palette
      5. Key takeaways

First things first…

      What is color blindness?

Color blindness—also known as color vision deficiency (CVD)—is a reduced ability to distinguish between different colors. It occurs when light-sensitive tissue at the back of the eye—the retina—fails to respond properly to the variations in wavelengths of light that enable people to see different colors.

      Color blind friendly color palette - retina


Although color blindness is primarily an inherited condition, it can also result from cataracts or trauma to the eye, as well as a number of diseases, including Parkinson’s, Kallmann syndrome and diabetes. Age-related damage to the retina can cause color blindness as well.

      What are the 3 types of color blindness?

      Color spectrums of the 3 types of color blindness


      There are three distinct types of color blindness:

      Red/Green Color Blindness

      According to Colblindor, 99% of all color blind people suffer from red/green color blindness. This is further broken down into two distinct categories:

      • Protanopia / Protanomaly: Also referred to as “red weakness”, this type of red/green color blindness leaves people with reduced sensitivity to red light (protanomaly) or unable to perceive red light at all (protanopia).

      Normal vision compared with protanopia

      Left: Normal vision, Right: Protanopia


      The protanopia color spectrum looks like this:

      Protanopia color spectrum


      • Deuteranopia / Deuteranomaly: Also known as “green weakness”, this type of red/green color blindness renders people insensitive to green light (deuteranomaly) or unable to perceive any green light at all (deuteranopia).

      Normal color vision compared to color blindness

      Left: Normal vision, Right: Deuteranopia


      The deuteranopia color spectrum looks like this:

      Color blind spectrum - Deuteranopia


Red/green color blindness is inherited and affects males more than females. This is because the genes responsible for the condition are carried on the X chromosome.

      Color blind friendly palette - statistic

Because women have two X chromosomes, a deficiency in one will typically be compensated for by the other. Men, however, have only one X chromosome, and don’t have a “spare” to fall back on.

      Color blind friendly palette - statistic

      Blue/Yellow Color Blindness

      People who suffer from blue/yellow color blindness have difficulty distinguishing between blue and yellow colors.

This particular form of color blindness—also known as Tritanopia—is far less common than its red/green counterpart. In fact, according to the U.S. National Library of Medicine, blue/yellow color blindness affects fewer than 1 in 10,000 people worldwide.

      Because Tritanopia isn’t caused by a genetic trait linked to the X chromosome, it is equally present in both men and women.

      Color blind friendly palette - Tritanopia

      Left: Normal vision, Right: Tritanopia


      The Tritanopia color spectrum looks like this:

      Tritanopia color spectrum


      Complete Color Blindness

      People who suffer from complete color blindness—also known as Monochromacy—are unable to distinguish between any colors at all. Monochromacy is extremely rare, occurring in 1 in 33,000 people, according to Colour Blind Awareness.

      Complete color blindness


      Why is color so important in data visualization?

      Color plays a significant role in data visualization. For starters, colors are used to highlight important information, as well as illustrate relationships between various types of data. Color also plays an instrumental role in guiding the viewer’s eye. It can even be used to stimulate emotion through color psychology.

      Disney color wheel

      You can even argue that the colors used in charts and infographics actually tell the story of the data they represent. As Eva Murray writes in Forbes:

      In data visualization, color sets the tone and enforces a message for the underlying visual display. It creates a certain atmosphere and can turn an unassuming visualization into an emotion-filled data story.

      WATCH: Everything you need to know about COLOR… in under 3 minutes

      Knowing how to pick colors is important, but when you’re optimizing for accessibility, there’s more to consider than color theory and composition.

      HOT TIP: Venngage’s Brand Kit feature makes it easy to automatically import your brand colors and brand fonts into your designs.

      You can also use Venngage’s color picker tool to easily switch out different colors, a particularly useful tool for working with color blind friendly palettes.

      Color blind friendly palette - editor

      How to design charts with a color blind friendly palette

      1.) Plan out your color scheme beforehand

Using a color blind friendly palette doesn’t mean you need to compromise on aesthetics or strip all the color out of your charts. By planning ahead, you can ensure a color blind friendly palette complements your design rather than clashing with it.

      Color blind friendly color palette


      Even with a tool like Venngage, which can drastically scale back the number of hours required to create an infographic, it’s still ideal to flesh out your color scheme before you dive in and start a new project.

      After all, if it turns out your colors aren’t particularly suited to color blind people—and accessibility is important to you—it’s better to find this out before you’ve invested considerable time and effort in the design.

      If you’re not sure where to start, these color blind color wheels can be a useful reference to quickly determine how your colors will translate across different forms of color blindness:

      Color blind friendly color palette


      Better yet, we’ve put together some versatile color blind friendly palettes:

      Color blind friendly color palette

      HOT TIP: If you’re stuck, try using a tool like Coblis, a free color blind simulator which will give you a better sense of how your images will appear to people with color blindness.
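If you’d rather check colors programmatically, the kind of transform a simulator like Coblis performs can be sketched in a few lines of Python. This is an assumption on our part, not Coblis’s actual code: it applies the widely cited Viénot et al. (1999) protanopia matrix in linear RGB, and shows how pure red and pure green collapse toward similar hues.

```python
# Rough protanopia simulation: apply a 3x3 matrix to each pixel's
# linear-RGB value. The matrix is the Vienot et al. (1999) approximation.
PROTANOPIA = [
    (0.56667, 0.43333, 0.00000),
    (0.55833, 0.44167, 0.00000),
    (0.00000, 0.24167, 0.75833),
]

def srgb_to_linear(c):
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return round(max(0.0, min(1.0, c)) * 255)

def simulate(rgb, matrix=PROTANOPIA):
    """Return the color roughly as a protanope would perceive it."""
    lin = [srgb_to_linear(c) for c in rgb]
    out = [sum(m * c for m, c in zip(row, lin)) for row in matrix]
    return tuple(linear_to_srgb(c) for c in out)

# Pure red and pure green both collapse toward similar dark yellows,
# which is exactly why red/green pairings are so risky.
print(simulate((255, 0, 0)))   # red, as seen with protanopia
print(simulate((0, 255, 0)))   # green, as seen with protanopia
```

Run your palette’s hex values through a function like this and you can spot colliding pairs before the design ever ships.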

      2.) Avoid problematic color combinations

      Another reason you should plan out your color scheme beforehand is so you can avoid using problematic color combinations, some of which can make your charts or infographics completely inaccessible to color blind people.

      Color combinations to avoid for people with color blindness include:

      • Red & green
      • Green & brown
      • Green & blue
      • Blue & gray
      • Blue & purple
      • Green & gray
      • Green & black

If you’re stuck and absolutely must use one of these combinations, try adjusting the shades so that one is extremely dark and the other extremely light—most color blind people can detect contrast, so this will likely make things easier.

      Color blind friendly palette - Problematic colors

      3.) Use highly contrasting colors

As previously mentioned, contrast isn’t an issue for most people who are color blind. Darken some of your colors and lighten others to make the contrast more pronounced. You can also play with different hues, as well as levels of brightness and saturation.

      Color blind friendly palette - template
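One way to quantify “highly contrasting” is the WCAG 2.1 contrast ratio, which recommends at least 4.5:1 for normal text. The guide above doesn’t prescribe WCAG specifically; this is simply one common yardstick, sketched here in Python.

```python
# WCAG 2.1 contrast ratio between two sRGB colors, from 1.0 (identical)
# to 21.0 (black on white). Aim for at least 4.5:1 for normal text.

def relative_luminance(rgb):
    def chan(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (chan(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0, the maximum
```

Checking every foreground/background pair in a palette this way is a quick, colorblindness-agnostic sanity test, since contrast survives even when hue does not.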

      4.) Use patterns and textures

Patterns and textures are a great way to incorporate visual elements that don’t rely on color into your design.

      Color blind friendly palette - bar chart


      Venngage’s drag-and-drop design editor features a library of patterns you can use to enhance your design and make it more accessible.

      Venngage editor - patterns

      5.) Use symbols

      Symbols and icons are a great way to make your designs more accessible because they can visually punctuate a message, without relying on color.

      Color blind friendly palette - icons

      Hot Tip: Venngage’s library includes over 4,000 icons you can use to make your infographics and charts more accessible.

      Color blind friendly palette - icons

      6.) Make your chart monochromatic

      Because color blindness doesn’t impact the ability to distinguish between different shades, consider making your charts monochromatic. Using a restricted color palette will naturally mean there are fewer opportunities to use problematic colors or combinations.

      Color blind friendly palette - template

      Color blind friendly palette - template
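A simple way to build such a restricted palette is to hold hue and saturation fixed and vary only lightness. Here is a minimal sketch using Python’s standard-library colorsys module; the base color is just an example, so swap in your own brand color.

```python
# Monochromatic palette: one hue, several lightness levels.
import colorsys

def monochrome_palette(base_rgb, steps=5):
    """Return `steps` shades of one hue, light to dark, as hex strings."""
    r, g, b = (c / 255.0 for c in base_rgb)
    h, _, s = colorsys.rgb_to_hls(r, g, b)  # keep hue and saturation
    palette = []
    for i in range(steps):
        # Spread lightness evenly from 0.85 (light) down to 0.25 (dark).
        lightness = 0.85 - i * (0.60 / (steps - 1))
        ri, gi, bi = colorsys.hls_to_rgb(h, lightness, s)
        palette.append("#{:02X}{:02X}{:02X}".format(
            round(ri * 255), round(gi * 255), round(bi * 255)))
    return palette

print(monochrome_palette((0, 114, 178)))  # five shades of a single blue
```

Because the shades differ only in lightness, the palette reads as an ordered light-to-dark scale for every type of color vision.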

      7.) Keep your design clean and simple

      When you’re designing for accessibility, it’s important to keep your infographics as simple and streamlined as possible. Adopting a minimalistic approach will help ensure your presentations are more easily understood, whether or not your audience is color blind.

      Key takeaways for using color blind friendly palettes

      Optimizing your infographics and charts for people affected by color blindness is important for both accessibility and inclusivity. It’s also possible to do without compromising the aesthetic quality in the process.

      The following measures can help ensure color blind people are actually able to engage with your designs:

      • Selecting color blind friendly palettes
      • Avoiding problematic color combinations
      • Using different textures and patterns to highlight important information–not just color
      • Using symbols and icons to supplement color-coded messages, warnings and alerts
      • Using highly contrasted color combinations
      • Adopting minimalistic design to help avoid unnecessary confusion.

      Combining all these elements will help make your designs more accessible to color blind people. After all, the more people you can engage with, the more impactful your work will be.

      Related blog posts:

      6 Ways to Use Infographic Icons Like a Pro

      The Do’s And Don’ts of Infographic Color Selection

      How to Pick Colors to Captivate Readers and Communicate Effectively


Tesla CYBRTRK logo

      (Image credit: Tesla)

      Weren’t the ’90s great? Britpop! Girl power! Net-surfing in cyber-space over a dial-up connection! What a time to be alive, and we’re getting huge ’90s vibes off a new US trademark filing from Tesla and its accompanying logo design.

      Elon Musk tweeted last week that Tesla would be revealing its latest electric vehicle on 21 November, near the SpaceX rocket factory in Los Angeles, and that it would be the much rumoured Tesla pickup truck. 

      Musk also revealed its name: Cybertruck. Possibly the most ’90s name that’s ever been given to any sort of vehicle, it harks back to a time when adding ‘cyber’ to the beginning of any word instantly bestowed it with ultimate hi-tech cool (funnily enough, we didn’t mention adding ‘cyber’ to your logo in our guide to logo design).

Cybertruck unveil on Nov 21 in LA near SpaceX rocket factory (November 6, 2019)

      It gets better, though. The same day as Musk tweeted his announcement, Tesla filed a pair of trademark applications at the US Patents and Trademarks Office, which were uncovered this week by an online sleuth at the Tesla Motors Club.

      Not only has Tesla filed a trademark application for the name ‘Cybertruck’, it’s also put in another for the name ‘CYBRTRK’, complete with a logo that could have come straight out of The Designers Republic during its imperial ’90s phase. That was back when it was responsible for the look of one of the coolest videogames on the planet, WipEout on the PlayStation. Seriously, get a load of that wordmark:

Tesla CYBRTRK logo

      You remember CYBRTRK. Came third in the first series of Robot Wars

      (Image credit: Tesla)

      It’s not an entirely authentic take on that ’90s TDR look; back then we were still using entire words rather than removing half the letters for stylistic effect, but we’ll let that pass. What gives it that distinctive retro flavour is those totally pared-back letterforms, removing every single extraneous detail while still just about communicating each letter’s identity.

      We also love the JPG artefacts on the version of the logo posted in the trademark filing, just because it gives the logo the appearance of having been sent by fax, back when sending images by email was still considered poor netiquette.

We should also mention the wordmark filed along with the ‘Cybertruck’ application (below), just because it’s also really ’90s in its own way: ‘CYBERTRUCK’ in block capitals that look an awful lot like Times New Roman. Hey, everyone loved serifs in the ’90s, and Times New Roman was honestly quite cool back then; we also wouldn’t be surprised if Musk had deliberately chosen it to annoy the sort of non-pickup-driving people (i.e. us) who think Times New Roman is past it.

      Tesla CYBERTRUCK logo

      Now that’s just straight-up typographic trolling

      (Image credit: Tesla)

As for the CYBRTRK itself, no one knows what it looks like yet; you’ll have to wait until 21 November for the reveal. Could it possibly be as awesome as Simone Giertz’s Truckla, made from an actual chopped-up Tesla?

      Somehow we doubt it. But we’re pleased to note that Musk has invited Giertz and Truckla to the big reveal next week.



      A movie’s logo is often one of the first things that fans see, whether it’s on posters or in a trailer, so it’s not uncommon for studios to spend large amounts of money creating them to build excitement. There’s a lot at stake, and there are entire agencies that specialize in logo creation, with logos passing through the hands of multiple stakeholders and going through several revisions before the public sees them.

      That’s why I was so surprised when I discovered that the logo for the DC film Joker was created by one designer, Chad Danieley, with wood type letterpress. In a Reddit post, Danieley explained how going analog captured the film’s tone and feel better than a digitally created logo would have. I reached out to Danieley to learn more about the logo creation process, his influences, and personal projects.

      This interview has been lightly edited for clarity.

      Chad Danieley in his garage.

      To kick things off, want to tell us a little bit about yourself and your background?

      I studied graphic design / motion graphics at ArtCenter [College of Design] in Pasadena. They had a great letterpress studio that was like my second home. In my final two terms at ArtCenter, I interned with [graphic designer] Pablo Ferro (The Thomas Crown Affair, Dr. Strangelove). I would scan his work into the computer for his archives, which would always include hours of stories about his process.

      For his signature style in Dr. Strangelove, he would draw over fonts for the main title, which is a unique approach. I would try the same sample sheets, and they would look nothing like his. Same with letterpress. If another person used the same wood type, it would have a different feeling for someone else. The personality comes through in the beat-up / dented faces of the type as well as the inking style and how much packing the artist adds or takes away that makes it interesting. Ferro had a strong impact on my career and my process early on.

      After graduation, I freelanced at a few motion studios for five years, then went to staff at NBC’s on-air department for eight years. For the last three years, I have returned to freelancing.

      How did you join this project?

I was freelancing at Elastic on logos for the show See when the Joker project came in, and they needed help pitching logos for it. I spent two days on the computer making a bunch of logos in Illustrator, but in the end, it was the first one I did that became the “JOKER” logo as you see it today, “as is” with no changes asked. Definitely a first for me.

      Can you talk us through the creative process for this logo and letterpress in general?

Very little information was given for me to go on. All that was said was that it’s much darker and more visceral than the usual DC Jokers. “Think Taxi Driver.”

I have a hard drive full of letterpress-printed sample sheets, and within each print run, I’ll print several states. For this project, I did aggressive / transparency / scraping / noise versions of that one typeface, then scanned and cataloged each of them. I am not a “purist,” but I have always needed to bring my work outside of the computer by printing it out and adding some of myself into it. Physical objects, like wood type or Letraset rub-down type, help me perceive space in a different way. I don’t always need to work this way, but it does help.

It’s great to see that this was made using a letterpress. It might be all in my head, but something about it just feels tactile and real as opposed to, say, a digitally created design. Was there any post-work done, or is this pretty accurate to the original?

      Yes, even after the logo was selected, we tried to make a version that was done in Illustrator, but it just didn’t feel right. The wood type is structurally warped, and I think that’s what makes it what it is. Regarding post-work, not much was done to the original presented logo. WB added slight color to the logo for each poster. The original scan of the print I used was a little transparent, so I just bumped up the levels in Photoshop. But other than that, it’s very accurate to the original print.

      What font did you use? I actually can’t make it out!

      It’s an unnamed Gothic. I tend to gravitate toward Gothic digital fonts as well as Gothic wood type as workhorses. They don’t have a lot of style baggage attached to them, which allows me to prod and scrape at them to really turn them into what I’m looking for. When I start adding a lot of texture, I need the typeface to not have a loud voice. Gothic gives me that.

      Are there any alts out there? Did it take some experimenting to land on one that felt right?

      No wood type alternates, just the one. I did submit 16 logos that were done in Illustrator, though.

      I really enjoy the collages you have on your Instagram. Are they primarily personal?

      Thank you! Yes, the collages and letterpress are both me needing to do something for my own creative output, not just for jobs. Of course, I always bring those explorations into my day work, though.

      What else are you working on right now? Anything coming up that you’re excited about?

      I’m starting to digitize some of my wood type to make some fonts. I’m also doing some Joker-themed letterpress prints, and at Elastic, I’m still working on Joker graphics for WB!

      Chrome logo

A Google Chrome experiment went horribly wrong this week, crashing browsers on thousands of enterprise networks, if not more, for nearly two days.

The issue first appeared on Wednesday, November 13. It didn’t impact all Chrome users, only Chrome browsers running on Windows Server “terminal server” setups — a very common configuration in enterprise networks.

      Complaints flooded Google

      According to hundreds of reports, users said that Chrome tabs were going blank, all of a sudden, in what’s called a “White Screen of Death” (WSOD) error.

The issue was no joke. System administrators at many companies reported that hundreds or even thousands of employees couldn’t use Chrome to access the internet, as the active browser tab kept going blank while they worked.

      In tightly controlled enterprise environments, many employees didn’t have the option to change browsers and were left unable to do their jobs. Similarly, system administrators couldn’t just replace Chrome with another browser right away.

      “This has had a huge impact for all our Call Center agents and not being able to chat with our members,” someone with a Costco email address said in a bug report. “We spent the last day and a half trying to figure this out.”

      “Our organization with multiple large retail brands had 1000 call center agents and many IT people affected for 2 days. This had a very large financial impact,” said another user.

      “Like many others, this has had significant impact on our organization with our entire Operations (over 500 employees) working in a RDS environment with Google Chrome as the primary browser,” said another system administrator.

      “4000 impacted in my environment. Working on trying to fix it for 12 hours,” said another.

      “Medium sized call center for a local medical office lost a day and a half of work for 40-60 employees,” added another.

      “Same issue experienced, hundreds of users impacted – hours spent attempting to isolate the cause,” said another user.

      Hundreds of complaints poured in via Google’s support forum, Chrome bug tracker, and Reddit [1, 2]. One impacted sysadmin told ZDNet that they initially mistook the Chrome blank tabs as a sign of malware and reacted accordingly, starting network-wide security audits.

      Google ships a fix

In time, though, the root cause of the bug was found and traced back to a feature called “WebContents Occlusion.”

According to a Google Chrome design document, this is an experimental feature that suspends Chrome tabs when users move other app windows on top of Chrome, treating the active Chrome tab as a background tab.


      Image: Google

      The feature, meant to improve Chrome’s resource usage when not in active use, had been under testing in Chrome Canary and Chrome Beta releases all year.

      However, this week, Google decided to test it in the main Stable release, so it could get more feedback on how it behaved.

To say it behaved badly would be an understatement.

      “The experiment/flag has been on in beta for ~5 months,” said David Bienvenu, a Google Chrome engineer. “It was turned on for stable (e.g., M77, M78) via an experiment that was pushed to released Chrome Tuesday morning.”

      “Prior to that, it had been on for about 1% of M77 and M78 users for a month with no reports of issues, unfortunately,” he added.

However, when rolled out to a broader audience — including Windows users on terminal server setups — an unexpected bug surfaced: instead of suspending Chrome tabs when users switched to another app, the feature unloaded the tab entirely, leaving a blank page behind.

      Users could refresh the Chrome tab to access their sites again, but in some cases, this also meant they lost previous work.

      The Chrome team said they pushed a new Chrome configuration file to all Chrome users and disabled the experiment.

      Chrome engineers operate a system called Finch that lets them push updated Chrome settings to active installs, such as enabling or disabling experimental flags.

      If the fix has not reached all impacted users, and they still have problems, they can disable the following two experimental flags by hand:




An alternative fix is to start Google Chrome with the following command-line argument: --disable-backgrounding-occluded-windows
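On Windows, for example, the browser can be launched with that switch like so. This is a sketch, not an official Google workaround; the install path below is a typical default and may differ on your machine.

```shell
# Launch Chrome with the occluded-window backgrounding behavior disabled.
# Adjust the path to match your Chrome installation.
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-backgrounding-occluded-windows
```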

      Fix prompts more criticism

      However, fixing the problem actually made system administrators even angrier. Many didn’t know that Chrome engineers could run experiments on their tightly-controlled Chrome installations, let alone that Google engineers could just ship changes to everyone’s browsers without any prior approval.

      “Do you see the impact you created for thousands of us without any warning or explanation? We are not your test subjects,” said an angry sysadmin. “We are running professional services for multi million dollar programs. Do you understand how many hours of resources were wasted by your ‘experiment’?”

      “How many tens of thousands of dollars has this oops cost everyone? This is starting to look like a pretty massive mistake on Googles part,” added another disgruntled sysadmin.

      “We take great care in rolling our changes out in a very controlled manner to avoid this type of scenario and we spent the better part of yesterday trying to determine if an internal change had occurred in our environment without our knowledge. We did not realize this type of event could occur on Chrome unbeknownst to us. We are already discussing alternative options, none of them are great, but this is untenable,” said another, hinting at a browser change across their organization.

      Although it lasted just two days, this entire incident is panning out to be one of the Chrome team’s biggest bungles. Many impacted users demanded an official apology from Google, and by the looks of the financial impact it may have caused some companies, they are entitled to it.


      Instagram today announced it’s expanding its test of hidden likes to users around the world. This means that a significantly higher number of people will be logging into Instagram today and seeing their friends’ posts sans numbers.

      The test was run in Australia, Brazil, Canada, Ireland, Italy, Japan, and New Zealand earlier this year. Now it’s been expanded to the rest of the world. So if, all of a sudden, you can’t see the number of likes on another user’s post, it’s because you’ve been included in this test. The layout is otherwise unchanged.

      Starting today, we’re expanding our test of private like counts globally. If you’re in the test, you’ll no longer see the total number of likes and views on photos and videos posted to Feed unless they’re your own.

      — Instagram (@instagram) November 14, 2019

      The social media platform’s pivot away from the instant gratification of likes and views is an interesting experiment — how much do our fellow users respond to the number below our pictures more so than to the pictures themselves? Likes are intended to be a sign of approval from one user to another, not a reactive desire to join a crowd. Adam Mosseri, the head of Instagram, said during an interview at Wired25 that this was intended to make Instagram less of a cutthroat place: “The idea is to try to ‘depressurize’ Instagram, make it less of a competition and give people more space to focus on connecting with people that they love, things that inspire them.”

      Still, this won’t abate the sense of validation that comes with the likes on your own posts — and it’d be well nigh impossible for Instagram to ever eliminate that. I’d be curious to see if any popular influencers notice a significant change in their like counts one way or another. Instagram has also stated it’s trying to find other ways for content creators to keep their sponsorships that can sometimes be dependent on like counts.

      In addition, we understand that like counts are important for many creators, and we are actively thinking through ways for creators to communicate value to their partners.

      — Instagram (@instagram) November 14, 2019

      Some users aren’t exactly supportive of the idea: Nicki Minaj has already said she won’t post to Instagram because they’re “removing the likes” and said in a now-deleted tweetstorm that Instagram like counts help empower independent artists. Cardi B has also weighed in, saying comment likes are more harmful than likes themselves, as they allow people to reward harmful backchat to otherwise positive posts.

Still, others have come out in support of the decision. Several influencers told BuzzFeed the potential improvement to users’ mental health is what matters most. Kim Kardashian West, consistently one of Instagram’s ten most-followed users, has said the change could be “beneficial.” And when TNW ran its own poll, the majority of respondents said hiding likes was a good thing.

      Instagram is hiding likes in 6 more countries, do you think that’s a good idea?

      Full story:

      — TNW (@thenextweb) July 18, 2019



      Hot Email Design Trends to Consider for 2020

      Email marketing is one of the most effective forms of web promotion, and definitely the most cost-effective form of digital marketing, with measurable and high ROI.

The reason is that a well-crafted email can convince a person to do almost anything, with less effort than other marketing channels, and it can be used for any product, brand or service. This is why there are plenty of email marketing and marketing automation platforms, like Moosend, offering marketing automation for agencies, SaaS platforms, high-end brands, retailers, real estate, and any other type of product or service one could imagine.

      By sending an email, you’re essentially knocking on someone’s door and asking them to hear you out. It can capture customers’ attention like nothing else, but only if your email is well-crafted and on top of current design trends. So, let’s go see what those are going to be for 2020.

Illustration by Justas Galaburda


It may seem unrelated at first, but hyper-personalization (not the simple personalization we’re used to) touches everything about email, and that includes email design.


You see, we’re no longer talking about email segmentation as we know it. Instead, we’re talking about emails that appeal to each and every person in your audience and serve a purpose that is tailor-made and feels personal. Hyper-personalization is achievable through machine learning and AI technology that trains itself by running through your data and detecting the patterns in it.

After that, the machine learns by applying the patterns it can distinguish in the data. By learning, the machine becomes able to predict and suggest accordingly, making hyper-personalization possible. So, if responsive design is what got you the most email opens in 2019, you’ll be able to recreate it; and if a specific user tends not to click through emails dominated by pastel colors, you’ll be able to adjust accordingly. This will help you deliver campaigns that get clicked and appeal to your audience with one-on-one content, and ultimately help you reach your KPIs.


Interactivity

Interactivity is one of the finest ways to draw attention, and the more attention your email gets, the more fruitful your CRO efforts. Your email needs to be something that gets you noticed, helps the user engage with your brand, and encourages them to click around.


      You can use plenty of interactive elements, and there’s bound to be one that’s best for your niche and the reason why you’re sending your email campaign.

      For example, sliders will work best for those times when you want to display more than three products, whereas the scratch effect and flip cards will boost engagement when you want to go through with something like recommendations of similar products or discount codes and offers.

      There are a whole lot of other options as well, but the point is that interactive emails help engagement rates and “discovering” the content is way more fun than reading about it in the main body of an email.

      Just don’t overdo it, as it could get confusing and distracting in the end. You’ll see what I mean below.

      Thinking Small

      You don’t need too much content. Just the right amount can make the user be convinced and happy with what they’ve found and what they’re willing to do with your product or service. You need small pieces of content that will be memorable, like a punchline, something shareable.

      Ultimately, you need something your prospect will remember and that they won’t forget to interact with. Small videos, tidbits of information, bite-sized infographics, anything that will be compact enough, small enough, and also memorable enough is what you need.

      Why do you think flip cards are one of the interactive features mentioned above? They’re ideal for breaking content in two, making it appear smaller and helping out with making it memorable.

      Social Media Shares and Feedback

      In terms of engagement again, social media platforms are what can help you win this game. But not without leveraging email marketing.

      You see, you can have the best Twitter header or the most wonderful idea that will help you reach out to social media influencers, and it still won’t matter that much if your email marketing game is not strong. Easily digestible pieces of content can be used to your absolute advantage.

Include a button that encourages social sharing. It can accompany one of the small quotes I mentioned before, or be one of the large share buttons we’ve come to know and love.


      You can create unique buttons to share content, purchases, and so on and so forth. Just make sure to make your button discreet enough for your users not to feel intimidated by it, and you’re good.

And since email marketing is about way more than promotions and payment receipts, you can design a beautiful email campaign that has customer feedback as its main goal.

Feedback is a win-win for both your brand and your customers. On the one hand, you’ll know what your prospects and customers need better than any AI platform can show you; on the other, you’ll build the kind of brand that actually cares. For example, one email I received gave me the option of tracking my order, a hashtag to use, and the customer service address right there in case I needed it.


However, there’s a fancier way to ask for feedback, and it will rule email design in 2020 (at least). Think about incorporating interactive elements here as well, like checkboxes or quizzes. Whether it concerns your Net Promoter Score or general feedback on what your users would like to see on your platform, interactive quizzes are a great way to craft a memorable email with ease.

      Bold Fonts, Emoji-ed Subject Lines and Muted Colors

These elements have been going around in 2019, but we will see a great rise in all three in 2020. Let’s begin with emojis in email subject lines.


A recent example from my personal inbox, I dare say, made me click. Emojis carry the “feeling” of a message; they are the subject-line equivalent of color in the main body of an email, since they help convey tone. Not to mention that an emoji in the subject line increases open rates by 56%, according to a report by Experian.

But let’s move on to fonts. Minimal is making a comeback, and fonts are expected to be bold, simple, and no-fuss, allowing the message to shine through without being too much.


Netflix has used this very well, along with the GIF trend and small, digestible bits of information: the font is bold, it’s not too much, and the message is clear. The series premiere is on September 12th; go check the trailer.

      This is all the information one may need when it comes to a series anyway, and Netflix didn’t bother with odd details like a summary. That’s all included in the trailer anyway.

As far as colors are concerned, there will be a rise in the use of muted tones. Intense colors are now a thing of the past, and softer, darker colors that are easier on the eyes will see increased use.

      Two Trends That Are Here To Stay

      Of course, I’m talking about User Generated Content (UGC) and video email marketing. Those two trends are guaranteed to stay beyond 2020.

Let’s begin with video email marketing and an example from my own inbox.


A Halloween makeup tutorial landed in my inbox a couple of weeks ago. Clicking on it took me to a dedicated page showing the tutorial and the products used, which gave the brand a better chance at conversion.


      I saw the products in action, and this was all that I needed in order to be convinced to buy something off an email.

      And as far as UGC is concerned, this is a very underrated (so far) but highly effective technique that can drive more engagement.

      After all, who doesn’t want to be a marketer, and who doesn’t want to see their content featured in a newsletter? Especially now, with the rapid rise of social media platforms that are centered around UGC, like Instagram or VSCO.

UGC practically leverages the good old word-of-mouth marketing that consumers trust to this day (hence the rise of influencers) and can work much like referral marketing does. Users create content for a brand. The brand trusts users enough to share it without damaging its own image. The users become ambassadors, engage with the brand, and convince their friends to buy as well, and prospects turn into customers easily, seamlessly, and, most importantly, at a low cost. In fact, 92% of buyers trust recommendations from people they know.

By incorporating that fact into the design of an email, you’re shooting for maximum engagement, conversions, and shares. Who would miss out on a chance to show that their content was actually picked up by their favorite brand?


      All in all, email design will appear to be simpler and no-fuss in 2020, but with a lot more complex, interactive elements that will aim for engagement.

      This means that we’ll see a huge rise in personalization that will take over email design as well.

Author: This guest post is by Téa, a content writer for the email marketing software company Moosend and an obsessive writer in general.




      As you can see, each option contains parts from all functionalities. There is no clear separation between them and that makes the code hard to read, especially if you are not the person who wrote it and you are looking at it for the first time. It is very hard to find which method is used by which functionality.

      Let’s look at it again but identify the logical concerns as comments. Those would be:

      • Activity tracker
      • Reload blocker
      • Authentication check
      • Locale
      • Event Bus registration

      See how hard it is to untangle all of those? 🙂

      Now imagine you need to make a change in one functionality (e.g. activity tracking logic). Not only do you need to know which elements are related to that logic, but even when you know, you still need to jump up and down between different component options.

Let’s use the Composition API to separate the code by logical concerns. To do that, we create a single function for the logic related to each specific functionality. This is what we call a composition function.

// Activity tracking logic
function useActivityTracker() {
  const userActivityTimeout = ref(null)
  const lastUserActivityAt = ref(null)

  function activateActivityTracker() {...}
  function deactivateActivityTracker() {...}
  function resetActivityTimeout() {...}
  function userActivityThrottler() {...}

  onBeforeMount(() => {...})
  onUnmounted(() => {...})
}

// Reload blocking logic
function useReloadBlocker(context) {
  const reloadCount = ref(null)

  function blockReload() {...}
  function setReloadCount() {...}

  onMounted(() => {...})
}

// Locale logic
function useLocale(context) {
  async function loadLocaleAsync(selectedLocale) {...}
  function setI18nLocale(locale) {...}

  watch(() => {
    const locale = ...
  })

  // No need for a 'created' hook; all logic that runs in the setup function is placed between the beforeCreate and created hooks
  const initialLocale = localStorage.getItem('locale')
  loadLocaleAsync(initialLocale)
}

// Event bus listener registration
import EventBus from '@/event-bus'

function useEventBusListener(eventName, handler) {
  onMounted(() => EventBus.$on(eventName, handler))
  onUnmounted(() => EventBus.$off(eventName, handler))
}

      As you can see, we can declare reactive data (ref / reactive), computed props, methods (plain functions), watchers (watch) and lifecycle hooks (onMounted / onUnmounted). Basically everything you normally use in a component.

      We have two options when it comes to where to keep the code. We can leave it inside the component or extract it into a separate file. Since the Composition API is not officially there yet, there are no best practices or rules on how to deal with it. The way I see it, if the logic is tightly coupled to a specific component (i.e. it won’t be reused anywhere else), and it can’t live without the component itself, I suggest leaving it within the component. On the flip side, if it is general functionality that will likely be reused, I suggest extracting it to a separate file. However, if we want to keep it in a separate file, we need to remember to export the function from the file and import it in our component.

This is how our component will look using the newly created composition functions:


      This gives us a single function for each logical concern. If we want to use any specific concern, we need to call the related composition function in the new setup function.

      Imagine again that you need to make some change in activity tracking logic. Everything related to that functionality lives in the useActivityTracker function. Now you instantly know where to look and jump to the right place to see all the related pieces of code. Beautiful!

      Extracting reusable pieces of code

      In our case, the Event Bus listener registration looks like a piece of code we can use in any component that listens to events on Event Bus.

      As mentioned before, we can keep the logic related to a specific functionality in a separate file. Let’s move our Event Bus listener setup into a separate file.

// composables/useEventBusListener.js
import { onMounted, onUnmounted } from '@vue/composition-api'
import EventBus from '@/event-bus'

export function useEventBusListener(eventName, handler) {
  onMounted(() => EventBus.$on(eventName, handler))
  onUnmounted(() => EventBus.$off(eventName, handler))
}

      To use it in a component, we need to make sure we export our function (named or default) and import it in a component.


      That’s it! We can now use that in any component we need.

      Wrapping up

      There is an ongoing discussion about the Composition API. This post has no intention to promote any side of the discussion. It is more about showing when it might be useful and in what cases it brings additional value.

I think it is always easier to understand a concept through a real-life example like the one above. There are more use cases, and the more you use the new API, the more patterns you will see. This post covers merely a few basic patterns to get you started.

      Let’s go again through the presented use cases and see where the Composition API can be useful:

General features that can live on their own without tight coupling to any specific component

      • All logic related to a specific feature in one file
      • Keep it in @/composables/*.js and import it in components
      • Examples: Activity Tracker, Reload Blocker, and Locale

      Reusable features that are used in multiple components

      • All logic related to a specific feature in one file
      • Keep it in @/composables/*.js and import in components
      • Examples: Event Bus listener registration, window event registration, common animation logic, common library usage

      Code organization within component

      • All logic related to a specific feature in one function
      • Keep the code in a composition function within the component
      • The code related to the same logical concern is in the same place (i.e. there’s no need to jump between data, computed, methods, lifecycle hooks, etc.)

      Remember: This is all a work-in-progress!

The Vue Composition API is currently at the work-in-progress stage and is subject to future changes. Nothing mentioned in the examples above is certain, and both the syntax and the use cases may change. It is intended to ship with Vue 3.0. In the meantime, you can check out vue-use-web for a collection of composition functions that are expected to be included in Vue 3 but can already be used with the Composition API in Vue 2.

If you want to experiment with the new API, you can use the @vue/composition-api library.


      Quotient announced last week that it’s buying Ubimo, an Israeli location intelligence company. Ubimo translates location data and history into audience segmentation, activation and campaign attribution, connecting digital campaigns to in-store results.

DSP was the attraction. Quotient and Ubimo had worked together for several years, utilizing the latter’s data for Quotient customer campaigns. But the primary rationale behind the acquisition was Ubimo’s DSP, according to Jason Young, Quotient’s Chief Marketing & Media Officer.

Quotient intends to offer its CPG brand and agency clients a self-service DSP. “The acquisition will accelerate Quotient’s product development of a self-service platform, where marketers can plan, buy, and optimize media campaigns directly from an automated platform,” according to Quotient’s press materials.

How Quotient evolved. Quotient began life in 1998 as Coupons.com, which it still owns and operates. In 2017, the company acquired mobile marketing company Crisp Media for roughly $53 million.

Quotient distributes digital coupons through its network, which includes Coupons.com and a wide range of retailer and grocery store properties. The company also makes programmatic media buys on behalf of customers.

      Quotient influencer marketing campaigns (2019)

      Through loyalty cards and point-of-sale (POS) redemption data, Quotient is able to deliver closed loop reporting as well. It also uses POS data for campaign targeting.

      For retailers, Quotient offers a range of ad and media solutions. For example, it enables retailers to sell ad space on their sites and apps and distribute digital circulars on social media. It also operates an influencer marketing platform.

Utility of location data. Brands that don’t have access to coupon or loyalty card POS data have increasingly used store visitation as an attribution metric. That’s one of the primary capabilities Ubimo offers its customers.

      But Ubimo also uses location data, which can be combined with other data sets, to enable highly specific audience segmentation and targeting. Quotient will bring Ubimo’s technology into its platform and combine its own shopper data with Ubimo’s location data and analytics, which Quotient “expects to meaningfully improve campaign performance for customers.”

      With Ubimo’s assets and customer relationships, Quotient intends to expand beyond its traditional CPG and retail customer/partner base and move into “adjacent markets, such as Out-of-Home.”

      Why we should care. Quotient’s main reason for buying Ubimo was the company’s DSP. But from a larger market perspective, the deal shows the increasingly mainstream use of location intelligence, both for targeting and attribution. It also shows a growing recognition of the power of location data for merchants and brands — unless they sell exclusively online — to maximize targeting effectiveness and to demonstrate the real-world impact of digital campaigns.

      About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.



      CockroachDB is an OLTP database, specialized for serving high-throughput queries that read or write a small number of rows. As we gained more usage, we found that customers weren’t getting the performance they expected from analytic queries that read a lot of rows, like large scans, joins, or aggregations. In April 2018, we started to seriously investigate how to improve the performance of these types of queries in CockroachDB, and began working on a new SQL execution engine. In this blog post, we use example code to discuss how we built the new engine and why it results in up to a 4x speed improvement on an industry-standard benchmark.

      OLTP databases, including CockroachDB, store data in contiguous rows on disk and process queries a row of data at a time. This pattern is optimal for serving small queries with high throughput and low latency, since the data in the rows are stored contiguously, making it more efficient to access multiple columns from the same row. Modern OLAP databases, on the other hand, typically are better at serving large queries, and tend to store data in contiguous columns and operate on these columns using a concept called vectorized execution. Using vectorized processing in an execution engine makes more efficient use of modern CPUs by changing the data orientation (from rows to columns) to get more out of the CPU cache and deep instruction pipelines by operating on batches of data at a time. 
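To make the orientation difference concrete, here is a minimal, hypothetical Go sketch (not CockroachDB code; all names are invented for illustration) contrasting row-at-a-time processing with a tight loop over a contiguous column batch:

```go
package main

import "fmt"

// discountRows applies the discount one row at a time: each row is a
// slice of column values, mirroring row-oriented storage.
func discountRows(rows [][]float64, priceCol int) {
	for _, row := range rows {
		row[priceCol] *= 0.8
	}
}

// discountColumn applies the discount to a contiguous column batch in
// one tight loop, mirroring vectorized, column-oriented execution.
func discountColumn(prices []float64) {
	for i := range prices {
		prices[i] *= 0.8
	}
}

func main() {
	rows := [][]float64{{1, 100.0}, {2, 50.0}}
	discountRows(rows, 1)

	prices := []float64{100.0, 50.0}
	discountColumn(prices)

	fmt.Println(rows[0][1], prices[0]) // 80 80
}
```

Both loops do the same arithmetic; the difference is that the columnar version iterates over one flat slice, which is friendlier to the CPU cache and instruction pipeline.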

      In our research into vectorized execution, we came across MonetDB/X100: Hyper-Pipelining Query Execution, a paper that outlines the performance deficiencies of the row-at-a-time Volcano execution model that CockroachDB’s original execution engine was built on. When executing queries on a large number of rows, the row-oriented execution engine pays a high cost in interpretation and evaluation overhead per tuple and doesn’t take full advantage of the efficiencies of modern CPUs. Given the key-value storage architecture of CockroachDB, we knew we couldn’t store data in columnar format, but we wondered if converting rows to batches of columnar data after reading them from disk, and then feeding those batches into a vectorized execution engine, would improve performance enough to justify building and maintaining a new execution engine.

To quantify the performance improvements, and to test the ideas laid out in the paper, we built a vectorized execution engine prototype, which yielded some impressive results. In this tutorial-style blog post, we take a closer look at what these performance improvements look like in practice. We also demonstrate why and how we use code generation to ease the maintenance burden of the vectorized execution engine. We take an example query, analyze its performance in a toy, row-at-a-time execution engine, and then explore and implement improvements inspired by the ideas proposed in the MonetDB/X100 paper. The code referenced in this post resides in an accompanying repository, so feel free to look at, modify, and/or run the code and benchmarks while you follow along.

      What’s in a SQL operator?

To provide some context, let’s look at how CockroachDB executes a simple query, SELECT price * 0.8 FROM inventory, issued by a fictional retail customer who wants to compute a discounted price for each item in her inventory. Regardless of which execution engine is used, this query is parsed, converted into an abstract syntax tree (AST), optimized, and then executed. The execution, whether distributed amongst all nodes in a cluster or executed locally, can be thought of as a chain of data manipulations that each have a specific role, which we call operators.

You can generate a diagram of the physical plan by executing EXPLAIN (DISTSQL) on the query. The execution flow for this query is relatively simple: the TableReader operator reads rows from the inventory table and then executes a post-processing render expression, in this case the multiplication by a constant float. Let’s focus on the render expression, since it’s the part of the flow doing the most work.

      Here’s the code that executes this render expression in the original, row-oriented execution engine used in CockroachDB (some code is omitted here for simplicity):

func (expr *BinaryExpr) Eval(ctx *EvalContext) (Datum, error) {
	left, err := expr.Left.(TypedExpr).Eval(ctx)
	if err != nil {
		return nil, err
	}
	right, err := expr.Right.(TypedExpr).Eval(ctx)
	if err != nil {
		return nil, err
	}
	return expr.fn.Fn(ctx, left, right)
}

The left and right sides of the binary expression (BinaryExpr) are both values wrapped in a Datum interface. The BinaryExpr calls expr.fn.Fn with both of these as arguments. In our example, the inventory table has a FLOAT price column, so the Fn is:

Fn: func(_ *EvalContext, left Datum, right Datum) (Datum, error) {
	return NewDFloat(*left.(*DFloat) * *right.(*DFloat)), nil
},

      In order to perform the multiplication, the Datum values need to be converted to the expected type. If, instead, we created a price column of type DECIMAL, we would cast 0.8 to a DECIMAL and then construct a BinaryExpr with a different Fn specialized for multiplying DECIMALs.

      We now have specialized code for multiplying each type, but the TableReader doesn’t need to worry about it. Before executing the query, the database creates a query plan that specifies the correct Fn for the type that we are working with. This simplifies the code, since we only need to write specialized code as an implementation of an interface. It also makes the code less efficient, as each time we multiply two values together, we need to dynamically resolve which Fn to call, cast the interface values to concrete type values that we can work with, and then convert the result back to an interface value.

      Benchmarking a simple operator

      How expensive is this casting, really? To find the answer to this question, let’s take a similar but simpler toy example:

type Datum interface{}

// Int implements the Datum interface.
type Int struct {
	int64
}

func mulIntDatums(a Datum, b Datum) Datum {
	aInt := a.(Int).int64
	bInt := b.(Int).int64
	return Int{int64: aInt * bInt}
}

// ...

func (m mulOperator) next() []Datum {
	row := m.input.next()
	if row == nil {
		return nil
	}
	for _, c := range m.columnsToMultiply {
		row[c] = m.fn(row[c], m.arg)
	}
	return row
}

This is a single, type-agnostic operator that can handle multiplication of an arbitrary number of columns by a constant argument. Think of the input as returning rows from the table. To add support for DECIMALs, we can simply add another function with the mulFn signature that multiplies DECIMALs.
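As a sketch of what such a variant could look like, assuming a Float wrapper analogous to the toy Int type (this is my illustration, not code from the post’s repo):

```go
package main

import "fmt"

// Datum is the toy engine's type-erased value.
type Datum interface{}

// Float wraps a float64, mirroring the toy Int wrapper.
type Float struct {
	float64
}

// mulFloatDatums satisfies the same mulFn signature as mulIntDatums,
// so a mulOperator could be constructed with it unchanged.
func mulFloatDatums(a Datum, b Datum) Datum {
	aFloat := a.(Float).float64
	bFloat := b.(Float).float64
	return Float{float64: aFloat * bFloat}
}

func main() {
	res := mulFloatDatums(Float{2.5}, Float{4.0})
	fmt.Println(res.(Float).float64) // 10
}
```

The operator never needs to know which concrete type it is multiplying; it only holds a function value with the right signature.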

      We can measure the performance of this code by writing a benchmark (see it in our repo). This will give us an idea of how fast we can multiply a large number of Int rows by a constant argument. The benchstat tool tells us that it takes around 760 microseconds to do this:

$ go test -bench "BenchmarkRowBasedInterface$" -count 10 > tmp && benchstat tmp && rm tmp
      name               	time/op
      RowBasedInterface-12	760µs ±15%

      Because we have nothing to compare the performance against at this point, we don’t know if this is slow or not.

      We’ll use a “speed of light” benchmark to get a better relative sense of this program’s speed. A “speed of light” benchmark measures the performance of the minimum necessary work to perform an operation. In this case, what we really are doing is multiplying 65,536 int64s by 2. The result of running this benchmark is:

      $ go test -bench "SpeedOfLight" -count 10 > tmp && benchstat tmp && rm tmp
      name               	time/op
      SpeedOfLight-12    	19.0µs ± 6%

      This simple implementation is about 40x faster than our earlier operator!
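The core of that “speed of light” measurement can be sketched like this (a simplified stand-in for the benchmark in the repo, with invented names):

```go
package main

import "fmt"

const batchSize = 65536

// speedOfLight multiplies every int64 in the batch by arg in a tight
// loop: no interfaces, no dynamic dispatch, no conversions. This is
// the minimum work the operation fundamentally requires.
func speedOfLight(batch []int64, arg int64) {
	for i := range batch {
		batch[i] *= arg
	}
}

func main() {
	batch := make([]int64, batchSize)
	for i := range batch {
		batch[i] = int64(i)
	}
	speedOfLight(batch, 2)
	fmt.Println(batch[1], batch[100]) // 2 200
}
```

Wrapping this loop in a Go testing.B benchmark is what produces the 19µs figure above; everything beyond this loop in the interface-based operator is overhead.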

      To try to figure out what’s going on, let’s run a CPU profile on BenchmarkRowBasedInterface and focus on the mulOperator. We can use the -o option to obtain an executable, which will let us disassemble the function with the disasm command in pprof. As we will see below, this command will give us the assembly that our Go source code compiles into, along with approximate CPU times for each instruction. First, let’s use the top and list commands to find the slow parts of the code.

      $ go test -bench "BenchmarkRowBasedInterface$" -cpuprofile cpu.out -o row_based_interface
      $ go tool pprof ./row_based_interface cpu.out
      (pprof) focus=mulOperator
      (pprof) top
      Active filters:
      Showing nodes accounting for 1.99s, 88.05% of 2.26s total
      Dropped 15 nodes (cum <= 0.01s)
      Showing top 10 nodes out of 12
        	flat  flat%   sum%    	cum   cum%
       	0.93s 41.15% 41.15%  	2.03s 89.82%  _~/scratch/
       	0.47s 20.80% 61.95%  	0.73s 32.30%  _~/scratch/vecdeepdive.mulIntDatums
       	0.36s 15.93% 77.88%  	0.36s 15.93%  _~/scratch/vecdeepdive.(*tableReader).next
       	0.16s  7.08% 84.96%  	0.26s 11.50%  runtime.convT64
       	0.07s  3.10% 88.05%  	0.10s  4.42%  runtime.mallocgc
           	0 	0% 88.05%  	2.03s 89.82%  _~/scratch/vecdeepdive.BenchmarkRowBasedInterface
           	0 	0% 88.05%  	0.03s  1.33%  runtime.(*mcache).nextFree
           	0 	0% 88.05%  	0.02s  0.88%  runtime.(*mcache).refill
           	0 	0% 88.05%  	0.02s  0.88%  runtime.(*mcentral).cacheSpan
           	0 	0% 88.05%  	0.02s  0.88%  runtime.(*mcentral).grow
      (pprof) list next
      ROUTINE ======================== _~/scratch/ in ~/scratch/vecdeepdive/row_based_interface.go
       	930ms  	2.03s (flat, cum) 89.82% of Total
           	.      	. 	39:
        	60ms   	60ms 	40:func (m mulOperator) next() []Datum {
       	120ms  	480ms 	41:    row :=
        	50ms   	50ms 	42:    if row == nil {
           	.      	. 	43:   	 return nil
           	.      	. 	44:    }
       	250ms  	250ms 	45:    for _, c := range m.columnsToMultiply {
       	420ms  	1.16s 	46:   	 row[c] = m.fn(row[c], m.arg)
           	.      	. 	47:    }
        	30ms   	30ms 	48:    return row
           	.      	. 	49:}
           	.      	. 	50:

      We can see that out of 2030ms, the mulOperator spends 480ms getting rows from the input, and 1160ms performing the multiplication. 420ms of those are spent in next before even calling m.fn (the left column is the flat time, i.e., time spent on that line, while the right column is the cumulative time, which also includes the time spent in the function called on that line). Since it seems like the majority of time is spent multiplying arguments, let’s take a closer look at mulIntDatums:

      (pprof) list mulIntDatums
      Total: 2.26s
      ROUTINE ======================== _~/scratch/vecdeepdive.mulIntDatums in ~/scratch/vecdeepdive/row_based_interface.go
       	470ms  	730ms (flat, cum) 32.30% of Total
           	.      	. 	10:
        	70ms   	70ms 	11:func mulIntDatums(a Datum, b Datum) Datum {
        	20ms   	20ms 	12:    aInt := a.(Int).int64
        	90ms   	90ms 	13:    bInt := b.(Int).int64
       	290ms  	550ms 	14:    return Int{int64: aInt * bInt}
           	.      	. 	15:}

      As expected, the majority of the time spent in mulIntDatums is on the multiplication line. Let’s take a closer look at what’s going on under the hood here by using the disasm (disassemble) command (some instructions are omitted):

      (pprof) disasm mulIntDatums
           	.      	.	1173491: MOVQ 0x28(SP), AX                   	;row_based_interface.go:12
        	20ms   	20ms	1173496: LEAQ type.* 228800(SB), CX          	;_~/scratch/vecdeepdive.mulIntDatums row_based_interface.go:12
           	.      	.	117349d: CMPQ CX, AX                         	;row_based_interface.go:12
           	.      	.	11734a0: JNE 0x1173505
           	.      	.	11734a2: MOVQ 0x30(SP), AX
           	.      	.	11734a7: MOVQ 0(AX), AX
        	90ms   	90ms	11734aa: MOVQ 0x38(SP), DX                   	;_~/scratch/vecdeepdive.mulIntDatums row_based_interface.go:13
           	.      	.	11734af: CMPQ CX, DX                         	;row_based_interface.go:13
           	.      	.	11734b2: JNE 0x11734e9
           	.      	.	11734b4: MOVQ 0x40(SP), CX
           	.      	.	11734b9: MOVQ 0(CX), CX
        	70ms   	70ms	11734bc: IMULQ CX, AX                        	;_~/scratch/vecdeepdive.mulIntDatums row_based_interface.go:14
        	60ms   	60ms	11734c0: MOVQ AX, 0(SP)
        	90ms  	350ms	11734c4: CALL runtime.convT64(SB)

      Surprisingly, only 70ms is spent executing the IMULQ instruction, which is the instruction that ultimately performs the multiplication. The majority of the time is spent calling convT64, which is a Go runtime package function that is used (in this case) to convert the Int type to the Datum interface.

      The disassembled view of the function suggests that most of the time spent multiplying values actually goes to converting the arguments from Datums to Ints, and the result from an Int back to a Datum.

      Using concrete types

      To avoid the overhead of these conversions, we would need to work with concrete types. This is a tough spot to be in, since the execution engine we’ve been discussing uses interfaces to be type-agnostic. Without using interfaces, each operator would need to have knowledge about the type it is working with. In other words, we would need to implement an operator for each type.

      Luckily, we have the prior research of the MonetDB team to guide us. Given their work, we knew that the pain caused by removing the interfaces would be justified by huge potential performance improvements.

      Later, we will take a look at how we got away with using concretely-typed operators to avoid typecasts for performance reasons, without sacrificing all of the maintainability that comes from using Go’s type-agnostic interfaces. First, let’s look at what will replace the Datum interface:

      type T int

      const (
          // Int64Type is a value of type int64
          Int64Type T = iota
          // Float64Type is a value of type float64
          Float64Type
      )

      type TypedDatum struct {
          t       T
          int64   int64
          float64 float64
      }

      type TypedOperator interface {
          next() []TypedDatum
      }

      A Datum now has a field for each possible type it may contain, rather than having a separate interface implementation for each type. There is an additional enum field that serves as a type marker, so that when we do need to, we can inspect the type of a Datum without doing any expensive type assertions. This type uses extra memory, since it has a field for each type even though only one of them will be used at a time. That could lead to CPU cache inefficiencies, but for this section we will skip over those concerns and focus on dealing with the interface interpretation overhead. In a later section, we’ll discuss this inefficiency further and address it.
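For illustration, here is a minimal, self-contained sketch (not code from the post) of how the type marker lets callers branch on a cheap enum comparison instead of a type assertion; the String method is a hypothetical helper:

```go
package main

import "fmt"

// Sketch (not the post's code): a tagged-union datum with an enum type
// marker, so callers branch on a cheap comparison instead of a type
// assertion.
type T int

const (
	// Int64Type is a value of type int64
	Int64Type T = iota
	// Float64Type is a value of type float64
	Float64Type
)

type TypedDatum struct {
	t       T
	int64   int64
	float64 float64
}

// String is a hypothetical helper that inspects the marker with a switch --
// no interface conversion or type assertion involved.
func (d TypedDatum) String() string {
	switch d.t {
	case Int64Type:
		return fmt.Sprintf("Int64(%d)", d.int64)
	case Float64Type:
		return fmt.Sprintf("Float64(%g)", d.float64)
	}
	return "unknown"
}

func main() {
	fmt.Println(TypedDatum{t: Int64Type, int64: 42}) // prints Int64(42)
}
```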

      The mulInt64Operator will now look like this:

      func (m mulInt64Operator) next() []TypedDatum {
          row := m.input.next()
          if row == nil {
              return nil
          }
          for _, c := range m.columnsToMultiply {
              row[c].int64 *= m.arg
          }
          return row
      }

      Note that the multiplication now happens in place. Running the benchmark against this new version shows almost a 2x speed-up.

      $ go test -bench "BenchmarkRowBasedTyped$" -count 10 > tmp && benchstat tmp && rm tmp
      name               	time/op
      RowBasedTyped-12    	390µs ± 8%

      However, now that we are writing specialized operators for each type, the amount of code we have to write has nearly doubled, and even worse, the code violates the maintainability principle of staying DRY (Don’t Repeat Yourself). The situation seems even worse if we consider that in a real database engine, there would be far more than two types to support. If someone were to slightly change the multiplication functionality (for example, adding overflow handling), they would have to rewrite every single operator, which is tedious and error-prone. The more types, the more work one has to do to update code.
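To make the duplication concrete, here is a self-contained sketch of the int64 operator next to a hypothetical float64 variant (the oneRowReader input is also hypothetical, added only so the example runs): the two operators are identical except for the field they touch.

```go
package main

import "fmt"

// Sketch of the duplication problem: each concrete type needs its own
// operator, identical except for the TypedDatum field it reads and writes.
type T int

const (
	Int64Type T = iota
	Float64Type
)

type TypedDatum struct {
	t       T
	int64   int64
	float64 float64
}

type TypedOperator interface {
	next() []TypedDatum
}

type mulInt64Operator struct {
	input             TypedOperator
	arg               int64
	columnsToMultiply []int
}

func (m mulInt64Operator) next() []TypedDatum {
	row := m.input.next()
	if row == nil {
		return nil
	}
	for _, c := range m.columnsToMultiply {
		row[c].int64 *= m.arg
	}
	return row
}

// mulFloat64Operator repeats the same logic verbatim for float64. A change
// such as overflow handling now has to be made once per operator.
type mulFloat64Operator struct {
	input             TypedOperator
	arg               float64
	columnsToMultiply []int
}

func (m mulFloat64Operator) next() []TypedDatum {
	row := m.input.next()
	if row == nil {
		return nil
	}
	for _, c := range m.columnsToMultiply {
		row[c].float64 *= m.arg
	}
	return row
}

// oneRowReader is a hypothetical input that yields a single row, so the
// example runs end to end.
type oneRowReader struct {
	row  []TypedDatum
	done bool
}

func (r *oneRowReader) next() []TypedDatum {
	if r.done {
		return nil
	}
	r.done = true
	return r.row
}

func main() {
	in := &oneRowReader{row: []TypedDatum{{t: Int64Type, int64: 21}}}
	op := mulInt64Operator{input: in, arg: 2, columnsToMultiply: []int{0}}
	fmt.Println(op.next()[0].int64) // prints 42
}
```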

      Generating code with templates

      Thankfully, there is a tool we can use to reduce this burden and keep the good performance characteristics of working with concrete types. The Go templating engine allows us to write a code template that, with a bit of work, we can trick our editor into treating as a regular Go file. We have to use the templating engine because the version of Go we are currently using does not have support for generic types. Templating the multiplication operators would look like this (full template code is in row_based_typed_tmpl.go):

      // {{/*
      type _GOTYPE interface{}

      // _MULFN assigns the result of the multiplication of the first and second
      // operand to the first operand.
      func _MULFN(_ TypedDatum, _ interface{}) {
          panic("do not call from non-templated code")
      }
      // */}}

      // {{ range .}}
      type mul_TYPEOperator struct {
          input             TypedOperator
          arg               _GOTYPE
          columnsToMultiply []int
      }

      func (m mul_TYPEOperator) next() []TypedDatum {
          row := m.input.next()
          if row == nil {
              return nil
          }
          for _, c := range m.columnsToMultiply {
              _MULFN(row[c], m.arg)
          }
          return row
      }
      // {{ end }}

      The accompanying code to generate the full row_based_typed.gen.go file is located in row_based_type_gen.go. This code is executed by running go run . to run the main() function in generate.go (omitted here for conciseness). The generator will iterate over a slice and fill the template in with specific information for each type. Note that there is a prior step that is necessary in order to consider the row_based_typed_tmpl.go file valid Go. In the template, we use tokens that are valid Go (e.g. _GOTYPE and _MULFN). These tokens’ declarations are wrapped in template comments and removed in the final generated file.
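As a rough, self-contained sketch of the approach (the opTmpl template, typeInfo struct and render helper here are hypothetical, not the project's actual generator code), driving a text/template once per supported type looks like this:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// A rough sketch of the generation approach: one operator template,
// executed once per supported type.
const opTmpl = `
type mul{{.Name}}Operator struct {
	input             TypedOperator
	arg               {{.GoType}}
	columnsToMultiply []int
}

func (m mul{{.Name}}Operator) next() []TypedDatum {
	row := m.input.next()
	if row == nil {
		return nil
	}
	for _, c := range m.columnsToMultiply {
		row[c].{{.Field}} *= m.arg
	}
	return row
}
`

// typeInfo carries the per-type details substituted into the template.
type typeInfo struct {
	Name   string // operator name suffix, e.g. "Int64"
	GoType string // concrete Go type, e.g. "int64"
	Field  string // TypedDatum field the operator touches
}

// render executes the template for one type and returns the generated code.
func render(ti typeInfo) string {
	t := template.Must(template.New("op").Parse(opTmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, ti); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// A real generator would write the output to a .gen.go file instead.
	for _, ti := range []typeInfo{
		{Name: "Int64", GoType: "int64", Field: "int64"},
		{Name: "Float64", GoType: "float64", Field: "float64"},
	} {
		fmt.Print(render(ti))
	}
}
```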

      For example, the multiplication function (_MULFN) is converted to a method call with the same arguments:

      // Replace all functions.
      mulFnRe := regexp.MustCompile(`_MULFN\((.*),(.*)\)`)
      s = mulFnRe.ReplaceAllString(s, `{{ .MulFn "$1" "$2" }}`)

      MulFn is called when executing the template, and then returns the Go code to perform the multiplication according to type-specific information. Take a look at the final generated code in row_based_typed.gen.go.

      The templating approach we took has some rough edges, and certainly is not a very flexible implementation. Nonetheless, it is a critical part of the real vectorized execution engine that we built in CockroachDB, and it was simple enough to build without getting sidetracked by creating a robust domain-specific language. Now, if we want to add functionality or fix a bug, we can modify the template once and regenerate the code for changes to all operators. Now that the code is a little more manageable and extensible, let’s try to improve the performance further.

      NOTE: To make the code in the rest of this blog post easier to read, we won’t use code generation for the following operator rewrites.

      Batching expensive calls

      Repeating our benchmarking process from before shows us some useful next steps.

      $ go test -bench "BenchmarkRowBasedTyped$" -cpuprofile cpu.out -o row_typed_bench
      $ go tool pprof ./row_typed_bench cpu.out
      (pprof) list next
      ROUTINE ======================== _~/scratch/ in ~/scratch/vecdeepdive/row_based_typed.gen.go
       	1.26s  	1.92s (flat, cum) 85.71% of Total
           	.      	.  	8:    input         	TypedOperator
           	.      	.  	9:    arg           	int64
           	.      	. 	10:    columnsToMultiply []int
           	.      	. 	11:}
           	.      	. 	12:
       	180ms  	180ms 	13:func (m mulInt64Operator) next() []TypedDatum {
        	170ms  	830ms 	14:    row := m.input.next()
           	.      	. 	15:    if row == nil {
           	.      	. 	16:   	 return nil
           	.      	. 	17:    }
       	330ms  	330ms 	18:    for _, c := range m.columnsToMultiply {
       	500ms  	500ms 	19:   	 row[c].int64*= m.arg
           	.      	. 	20:    }
        	80ms   	80ms 	21:    return row
           	.      	. 	22:}

      This part of the profile shows that approximately half of the time spent in the function is spent calling (*typedTableReader).next (see line 14 above). This isn’t surprising if we look at our implementation of (*typedTableReader).next(); it’s a lot of code just for advancing to the next element in a slice. We can’t optimize the typedTableReader too much, since we need to preserve its ability to be chained to any other SQL operator that we may implement. But there is another important optimization we can make: instead of calling the next function once for each row, we can get back a batch of rows and operate on all of them at once, without changing too much about (*typedTableReader).next. We can’t just get all the rows at once, because some queries might produce a huge dataset that won’t fit in memory, but we can pick a reasonably large batch size.

      With this optimization, we have operators like the ones below. Once again, the full code for this new version is omitted, since there’s a lot of boilerplate changes. Full code examples can be found in row_based_typed_batch.go.

      type mulInt64BatchOperator struct {
          input             TypedBatchOperator
          arg               int64
          columnsToMultiply []int
      }

      func (m mulInt64BatchOperator) next() [][]TypedDatum {
          rows := m.input.next()
          if rows == nil {
              return nil
          }
          for _, row := range rows {
              for _, c := range m.columnsToMultiply {
                  row[c] = TypedDatum{t: Int64Type, int64: row[c].int64 * m.arg}
              }
          }
          return rows
      }

      type typedBatchTableReader struct {
          curIdx int
          rows   [][]TypedDatum
      }

      func (t *typedBatchTableReader) next() [][]TypedDatum {
          if t.curIdx >= len(t.rows) {
              return nil
          }
          endIdx := t.curIdx + batchSize
          if endIdx > len(t.rows) {
              endIdx = len(t.rows)
          }
          retRows := t.rows[t.curIdx:endIdx]
          t.curIdx = endIdx
          return retRows
      }

      With this batching change, the benchmarks run nearly 3x faster (and 5.5x faster than the original implementation):

      $ go test -bench "BenchmarkRowBasedTypedBatch$" -count 10 > tmp && benchstat tmp && rm tmp
      name               	time/op
      RowBasedTypedBatch-12   137µs ±77%

      Column-oriented Data

      But we are still a long way from our “speed of light” performance of 19 microseconds per operation. Does the new profile give us more clues?

      $ go test -bench "BenchmarkRowBasedTypedBatch" -cpuprofile cpu.out -o row_typed_batch_bench
      $ go tool pprof ./row_typed_batch_bench cpu.out
      (pprof) list next
      Total: 990ms
      ROUTINE ======================== _~/scratch/ in ~/scratch/vecdeepdive/row_based_typed_batch.go
       	950ms  	950ms (flat, cum) 95.96% of Total
           	.      	. 	15:func (m mulInt64BatchOperator) next() [][]TypedDatum {
           	.      	. 	16:    rows := m.input.next()
           	.      	. 	17:    if rows == nil {
           	.      	. 	18:   	 return nil
           	.      	. 	19:    }
       	210ms  	210ms 	20:    for _, row := range rows {
       	300ms  	300ms 	21:   	 for _, c := range m.columnsToMultiply {
       	440ms  	440ms 	22:   		 row[c] = TypedDatum{t: Int64Type, int64: row[c].int64 * m.arg}
           	.      	. 	23:   	 }
           	.      	. 	24:    }
           	.      	. 	25:    return rows
           	.      	. 	26:}

      Now the time spent calling (*typedBatchTableReader).next barely registers in the profile! That is much better. The profile shows that lines 20-22 are probably the best place to focus our efforts next; these lines are where well above 95% of the time is spent. That is partially a good sign, because these lines implement the core logic of our operator.

      However, there certainly is still room for improvement. Approximately half of the time spent in these three lines goes to iterating through the loops, not to the loop body itself. If we think about the sizes of the loops, this becomes clearer. The length of the rows batch is 1,024, but the length of columnsToMultiply is just 1. Since the rows loop is the outer loop, this means that we are setting up this tiny inner loop — initializing a counter, incrementing it, and checking the boundary condition — 1,024 times! We could avoid all that repeated work simply by changing the order of the two loops.
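Sketched on plain int64 slices (hypothetical helper functions, not the post's operators), the interchange looks like this:

```go
package main

import "fmt"

// Sketch of the loop interchange. With 1,024 rows and one affected column,
// the row-outer version sets up a one-iteration inner loop 1,024 times; the
// column-outer version sets up the column loop once and then runs one long,
// predictable inner loop.
const batchSize = 1024

func mulRowsOuter(rows [][]int64, cols []int, arg int64) {
	for _, row := range rows { // 1,024 iterations...
		for _, c := range cols { // ...each paying the setup cost of a tiny loop
			row[c] *= arg
		}
	}
}

func mulColsOuter(rows [][]int64, cols []int, arg int64) {
	for _, c := range cols { // loop setup happens once per affected column
		for i := range rows { // the hot loop runs straight through the rows
			rows[i][c] *= arg
		}
	}
}

func main() {
	rows := make([][]int64, batchSize)
	for i := range rows {
		rows[i] = []int64{1}
	}
	mulColsOuter(rows, []int{0}, 3)
	fmt.Println(rows[0][0]) // prints 3
}
```

Both variants compute the same result; only the amount of loop bookkeeping differs.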

      Although we won’t go into a full exploration of CPU architecture in this post, there are two important concepts that come into play when changing the loop order: branch prediction and pipelining. In order to speed up execution, CPUs use a technique called pipelining to begin executing the next instruction before the preceding one is completed. This works well in the case of sequential code, but whenever there are conditional branches, the CPU cannot identify with certainty what the next instruction after the branch will be. However, it can make a guess as to which branch will be followed. If the CPU guesses incorrectly, the work that the CPU has already performed to begin evaluating the next instruction will go to waste. Modern CPUs are able to make predictions based on static code analysis, and even the results of previous evaluations of the same branch.

      Changing the order of the loops comes with another benefit. Since the outer loop will now tell us which column to operate on, we can load all the data for that column at once and store it in memory in one contiguous slice. A critical component of modern CPU architecture is the cache subsystem. In order to avoid loading data from main memory too often, which is a relatively slow operation, CPUs have layers of caches that provide fast access to frequently used pieces of data, and they can also prefetch data into these caches if the access pattern is predictable. In the row-based example, we would load all the data for each row, which would include columns not at all affected by the operator, so not as much relevant data would fit into the CPU cache. Orienting the data we are going to operate on by column provides a CPU with exactly the predictability and dense memory-packing that it needs to make ideal use of its caches.

      For a fuller treatment of pipelining, branch prediction, and CPU caches see Dan Luu’s branch prediction talk notes, his CPU cache blog post, or Dave Cheney’s notes from his High Performance Go Workshop.

      The code below shows how we could make the loop and data orientation changes described above, and also define a few new types at the same time to make the code easier to work with.

      type vector interface {
          // Type returns the type of data stored in this vector.
          Type() T
          // Int64 returns an int64 slice.
          Int64() []int64
          // Float64 returns a float64 slice.
          Float64() []float64
      }

      type colBatch struct {
          size int
          vecs []vector
      }

      func (m mulInt64ColOperator) next() colBatch {
          batch := m.input.next()
          if batch.size == 0 {
              return batch
          }
          for _, c := range m.columnsToMultiply {
              vec := batch.vecs[c].Int64()
              for i := range vec {
                  vec[i] = vec[i] * m.arg
              }
          }
          return batch
      }
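For illustration, a minimal concrete implementation of the vector interface might look like the following sketch (the memColumn type is hypothetical; the post doesn't show an implementation):

```go
package main

import "fmt"

type T int

const (
	Int64Type T = iota
	Float64Type
)

type vector interface {
	Type() T
	Int64() []int64
	Float64() []float64
}

// memColumn is a hypothetical in-memory implementation of vector: one slice
// field per supported type, only one of which is ever non-nil.
type memColumn struct {
	t        T
	int64s   []int64
	float64s []float64
}

func (c *memColumn) Type() T            { return c.t }
func (c *memColumn) Int64() []int64     { return c.int64s }
func (c *memColumn) Float64() []float64 { return c.float64s }

func main() {
	var v vector = &memColumn{t: Int64Type, int64s: []int64{1, 2, 3}}
	// One interface conversion yields a typed slice for the whole batch;
	// the hot loop below then works on concrete int64s.
	vec := v.Int64()
	for i := range vec {
		vec[i] *= 2
	}
	fmt.Println(vec) // prints [2 4 6]
}
```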

      The reason we introduced the new vector type is so that we could have one struct that could represent a batch of data of any type. The struct has a slice field for each type, but only one of these slices will ever be non-nil. You may have noticed that we have now re-introduced some interface conversion, but the performance price we pay for it is now amortized thanks to batching. Let’s take a look at the benchmark now.

      $ go test -bench "BenchmarkColBasedTyped" -count 10 > tmp && benchstat tmp && rm tmp
      name               	time/op
      ColBasedTyped-12   	38.2µs ±24%

      This is another ~3.5x improvement, and a ~20x improvement over the original row-at-a-time version! Our speed of light benchmark is still about 2x faster than this latest version, since there is overhead in reading each batch and navigating to the columns on which to operate. For the purposes of this post, we will stop our optimization efforts here, but we are always looking for ways to make our real vectorized engine faster.


      By analyzing the profiles of our toy execution engine’s code and employing the ideas proposed in the MonetDB/X100 paper, we were able to identify performance problems and implement solutions that improved the performance of multiplying 65,536 rows by a factor of 20. We also used code generation to write templated code that is then generated into specific implementations for each concrete type.

      In CockroachDB, we incorporated all of the changes presented in this blog post into our vectorized execution engine. This resulted in improving the CPU time of our own microbenchmarks by up to 70x, and the end-to-end latency of some queries in the industry-standard TPC-H benchmark by as much as 4x. The end-to-end latency improvement we achieved is a lot smaller than the improvement achieved in our toy example, but note that we only focused on improving the in-memory execution of a query in this blog post. When running TPC-H queries on CockroachDB, data needs to be read from disk in its original row-oriented format before processing, which will account for the lion’s share of the query’s execution latency. Nevertheless, this is a great improvement.

      In CockroachDB 19.2, you will be able to enjoy these performance benefits on many common scan, join and aggregation queries. Here’s a demonstration of the original sample query from this blog post, which runs nearly 2 times as fast with our new vectorized engine:

      root@> CREATE TABLE inventory (id INT PRIMARY KEY, price FLOAT);
      Time: 2.78ms
      root@> INSERT INTO inventory SELECT id, random()*10 FROM generate_series(1,10000000) g(id);
      INSERT 100000
      Time: 521.757ms
      root@> EXPLAIN SELECT count(*) FROM inventory WHERE price * 0.8 > 3;
             tree      |    field    |     description
       ---------------- ------------- --------------------- 
                       | distributed | true
                       | vectorized  | true
        group          |             |
         │             | aggregate 0 | count_rows()
         │             | scalar      |
         └── render    |             |
              └── scan |             |
                       | table       | inventory@primary
                       | spans       | ALL
                       | filter      | (price * 0.8) > 3.0
      (10 rows)
      Time: 3.076ms

      The EXPLAIN plan for this query shows that the vectorized field is true, which means that the query will be run with the vectorized engine by default. And, sure enough, running this query with the engine on and off shows a clear performance difference:

      root@> SELECT count(*) FROM inventory WHERE price * 0.8 > 3;
      (1 row)
      Time: 3.587261s
      root@> set vectorize=off;
      Time: 283µs
      root@> SELECT count(*) FROM inventory WHERE price * 0.8 > 3;
      (1 row)
      Time: 5.847703s

      In CockroachDB 19.2, the new vectorized engine is automatically enabled for supported queries that are likely to read more rows than the vectorize_row_count_threshold setting (which defaults to 1,024). Queries with buffering operators that could potentially use an unbounded amount of memory (like global sorts, hash joins, and unordered aggregations) are implemented but not yet enabled by default. For full details of what is and isn’t on by default, check out the vectorized execution engine docs. And to learn more about how we built more complicated vectorized operators, check out our blog posts on the vectorized hash joiner and the vectorized merge joiner.

      Please try out the vectorized engine on your queries and let us know how you fare.

      Thanks for reading!


      YouTube is rolling out updated terms of service on December 10th, and a new line acts as a reminder that the company doesn’t have to keep any video up that it doesn’t want to.

      “YouTube is under no obligation to host or serve content,” the new terms of service policy reads.

      It’s another way of saying that just because YouTube is a relatively open platform, it doesn’t mean that the company is required to keep videos up.

      YouTube has faced criticism from all sides over its video removal process. Some critics argue that YouTube could do more to take down videos that butt up against the company’s rules but don’t outright violate them; others argue that YouTube ought to be a fully open platform and shouldn’t control what remains up and what doesn’t. Executives have long defended the platform as a champion of free speech, but have started to clamp down on the type of videos allowed to circulate.

      Companies update their terms of service all the time — this is YouTube’s third change in 2019 alone. These latest updates seemingly coincide with upcoming changes YouTube will make in accordance with new Federal Trade Commission guidelines for YouTube, although a YouTube rep denies these changes were made because of the FTC ruling.

      Specifically, many of the updates center around who’s using the platform and protection for children. In September, the Federal Trade Commission issued a $170 million fine against Google for alleged violations of the Children’s Online Privacy Protection Act (COPPA). YouTube also agreed to make changes to further protect children’s privacy and comply with the law.

      The terms of service now come with an updated section on parental responsibility when children use the platform, and have more clearly stated the age requirements per country for using YouTube. This also includes a notice stating “if you are a minor in your country, you must always have your parent or guardian’s permission before using the service.”

      YouTube said the terms of service are being changed “in order to make them easier to read and to ensure they’re up to date,” a spokesperson told The Verge.

      “We’re not changing the way our products work, how we collect or process data, or any of your settings,” the spokesperson added.

      Other changes just seem to give YouTube more power. One of the most controversial clauses that creators have tweeted about since YouTube began alerting people to the upcoming changes has to do with termination. Various YouTubers have tweeted out a segment from the terms of service that states “YouTube may terminate your access, or your Google account’s access to all or part of the service if YouTube believes, in its sole discretion, that provision of the service to you is no longer commercially viable.” Some creators have read this as saying that YouTube can simply terminate channels and accounts if they’re not earning revenue.

      A version of that line has been in YouTube’s terms of service since early 2018, however. This update just modifies the wording to give YouTube more leeway to make the determination. It now says that YouTube has the “sole discretion” to terminate an account, whereas before it said that YouTube must “reasonably believe” it should do so. A YouTube spokesperson told The Verge that the company is “also not changing how we work with creators, nor their rights over their works, or their right to monetize.”

      Update (November 11th, 10:05am ET): The story has been updated to include additional comment from YouTube about monetization concerns from creators.


      I will try to keep it short, because I have a tendency to make stories longer than necessary. My relatives already mocked my rambling when I was a teenager, complaining that my stories had no beginning and no end.

      I noticed that in my writing too. It’s good practice to make blog posts short. But I like to add details, anecdotes and silliness.

      Luckily, there’s an old typographical solution for that: sidenotes. I can put all the info that’s not critical for the main story there, so you, dear reader, can just skip that kind of frivolity if you’re not interested.

      Handwritten page with wide margins containing notes
      This late-13th century page has wide margins for notes. Although the notes may have been added later in this example, it’s not uncommon for printed texts to have sidenotes from the author.

      On this website, sidenotes appear next to the article when there’s enough space. On small screens, they’re hidden by default and can be made visible with a tap. Here’s an example.

      Actually, here’s the example! This is the actual sidenote.


      The problem is, it’s hard to find a properly semantic HTML solution to do this.


      My sidenotes have two parts:

      1. The sidenote’s content
      2. A word or a span of words that it refers to.

      My additional requirements for them are:

      Elements that don’t work

      Here are some options that I considered, with help from answers on my StackOverflow question.

      Footnotes appear below the text. That requires one click to go there and another to go back to the article. Way too much effort. Moreover, most of my sidenotes make no sense without the context of the sentence they refer to. That also rules out the

       Some sentence with a
       <span class="sidenote sidenote--collapsed"
             title="Here's the content of the sidenote.">
          Here's the content of the sidenote
       </span>
       for the sidenote.

      I wrap the whole thing in a span,

      As a nice side effect, on large screens, where the sidenote appears next to the body text, both asterisks get hover styling when either the content or the inline span has the mouse cursor over it.

      to keep the required JavaScript simple.

      Then there’s the inline phrase that gets the asterisk. The


      Duet provides a set of organized tools, patterns and practices that work as the foundation for LocalTapiola and Turva digital products and experiences.

      Get Started

      Follow these practical guides to get started designing and developing experiences for LocalTapiola and Turva.

      Read the latest updates from Duet’s core team. Each month, we’ll highlight new features and progress from Duet.


      The best way to learn something is by doing. This tutorial will help you learn how to build your own todo list app with React hooks and TypeScript. Try this easy tutorial, build your own todo list app, and get better at JavaScript, React and TypeScript.

      You can find the code on my GitHub.


      The goal of this tutorial is to build your own todo list app. About the app in general: it will have a very simple interface and focus on the most important features, i.e. creating, checking off and deleting todos. About the code: you will use React and React hooks, mostly the useState hook.

      There will be one occasion where you will also use the useRef hook. Since this todo list app will utilize React hooks for managing state, there is no need to use class components. So, you will build this app only with functional components. When it comes to styling your todo list app, you will use external CSS stylesheets.

      One last thing. First, every todo item will have a unique id. These ids will be generated when the todo item is created. You will use this id to mark the todo as complete or to remove it. To make this easier, while following good practices and avoiding the use of indexes, you will use the shortid package.
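To sketch how those ids are used (a self-contained example: the tutorial generates ids with shortid.generate(), replaced here by a stand-in generator so the snippet runs without the package; createTodo and removeTodo are hypothetical helpers):

```typescript
// Sketch of the id scheme: each todo gets a unique id at creation time,
// and that id -- not the array index -- is later used to complete or
// remove the todo.
interface TodoInterface {
  id: string
  text: string
  isCompleted: boolean
}

let counter = 0
function generateId(): string {
  // Stand-in for shortid.generate(): unique per call within this run.
  counter += 1
  return `todo-${Date.now().toString(36)}-${counter}`
}

function createTodo(text: string): TodoInterface {
  return { id: generateId(), text, isCompleted: false }
}

// Removing by id is stable even when the array is reordered or filtered.
function removeTodo(todos: TodoInterface[], id: string): TodoInterface[] {
  return todos.filter((todo) => todo.id !== id)
}

const todos = [createTodo('Write tutorial'), createTodo('Build app')]
const remaining = removeTodo(todos, todos[0].id)
console.log(remaining.length) // prints 1
```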

      Project setup

      First, let’s create the base for your todo list app. We can do this very quickly with the help of create-react-app. You can use this package with npm init react-app react-hooks-todo-list-app-ts --typescript, npx create-react-app react-hooks-todo-list-app-ts --typescript or yarn create react-app react-hooks-todo-list-app-ts --typescript. If you don’t want to use TypeScript, omit the --typescript flag at the end of the command.

      These commands will create a starting template for your todo list app, with the workflow set up and almost all necessary dependencies. There is one dependency you will need to install manually: shortid, and the types for this package. So, use npm i shortid and npm i -D @types/shortid, yarn add shortid and yarn add -D @types/shortid, or pnpm i shortid and pnpm i -D @types/shortid.

      There are some assets, such as the React logo, that came with the app template. You can remove them because you will not need them. A very simple version of your package.json should look similar to this:

      {
        "name": "react-todo-list-hooks-ts",
        "version": "1.0.0",
        "description": "Simple Todo list app built with React hooks and TypeScript.",
        "browserslist": [
          "not dead",
          "not ie <= 11",
          "not op_mini all"
        ],
        "main": "src/index.tsx",
        "scripts": {
          "start": "react-scripts start",
          "build": "react-scripts build",
          "test": "react-scripts test --env=jsdom",
          "eject": "react-scripts eject"
        },
        "dependencies": {
          "react": "16.11.0",
          "react-dom": "16.11.0",
          "shortid": "2.2.15"
        },
        "devDependencies": {
          "@types/react": "16.9.11",
          "@types/react-dom": "16.9.4",
          "@types/shortid": "^0.0.29",
          "react-scripts": "3.2.0",
          "typescript": "3.7.2"
        }
      }

      If you decide to use TypeScript, your tsconfig should look similar to this:

      {
        "include": [
          "src"
        ],
        "compilerOptions": {
          "lib": [
            "dom",
            "dom.iterable",
            "esnext"
          ],
          "jsx": "react",
          "target": "es5",
          "allowJs": true,
          "skipLibCheck": true,
          "esModuleInterop": true,
          "allowSyntheticDefaultImports": true,
          "strict": true,
          "forceConsistentCasingInFileNames": true,
          "module": "esnext",
          "moduleResolution": "node",
          "resolveJsonModule": true,
          "isolatedModules": true,
          "noEmit": true
        }
      }

      As the last thing, below is the final structure of this todo list app project. You can use this as you work on this tutorial to orient yourself. With that, you are ready to start working on your todo list app.

      ├─ public
      │ ├─favicon.ico
      │ ├─index.html
      │ ├─manifest.json
      │ └─robots.txt
      ├─ src
      │ ├─components
      │ │ ├─todo-form.tsx
      │ │ ├─todo-item.tsx
      │ │ └─todo-list.tsx
      │ ├─styles
      │ │ └─styles.css
      │ ├─index.tsx
      │ ├─interfaces.ts
      │ └─react-app-env.d.ts
      ├─ package.json
      └─ tsconfig.json


      The first thing to do is create interfaces for your todo list app. You will use them to define the shape of component props and the todo object, or to type them. If you decided to use pure JavaScript, instead of TypeScript, you can skip this step. You will need to create four interfaces.

      One for the todo (todo object), one for the todo form, one for the todo list and one for the todo item. The todo object will have three properties: id, text and isCompleted. The TodoForm props contain an array of todo objects and the handleTodoCreate method. The TodoList props will contain the handleTodoUpdate, handleTodoRemove, handleTodoComplete and handleTodoBlur methods and an array of todo objects.

      The TodoItem props will contain the handleTodoUpdate, handleTodoRemove, handleTodoComplete and handleTodoBlur methods and a single todo object.

      // Todo interface
      export interface TodoInterface {
        id: string;
        text: string;
        isCompleted: boolean;
      }

      // Todo form interface
      export interface TodoFormInterface {
        todos: TodoInterface[];
        handleTodoCreate: (todo: TodoInterface) => void;
      }

      // Todo list interface
      export interface TodoListInterface {
        handleTodoUpdate: (event: React.ChangeEvent, id: string) => void;
        handleTodoRemove: (id: string) => void;
        handleTodoComplete: (id: string) => void;
        handleTodoBlur: (event: React.ChangeEvent) => void;
        todos: TodoInterface[];
      }

      // Todo item interface
      export interface TodoItemInterface {
        handleTodoUpdate: (event: React.ChangeEvent, id: string) => void;
        handleTodoRemove: (id: string) => void;
        handleTodoComplete: (id: string) => void;
        handleTodoBlur: (event: React.ChangeEvent) => void;
        todo: TodoInterface;
      }

      Todo item component

      The first component you will build will be the todo item. When you add a new todo to your todo list, this item component will represent it. This component will be composed of a couple of elements. First, there will be a div with span elements for checking off the todo. An unchecked item will contain an empty span, styled into a transparent circle with a border.

      A checked-off todo item will contain a span with a check mark HTML entity, inside a green circle. The wrapper div will have an onClick handler to check/uncheck the todo. Next will be another div with an input. You will use this input element to render the title, or the text, of the todo. This is the simplest way to make every todo item editable, through input elements.

      You will pass the title through the value attribute, from the todo object passed through props. In addition, this input will have two handler methods, one for onBlur and one for onChange. The last element will also be a div, now with an “x” entity/icon. You will use this element to remove the todo item.

      This div will have one onClick handler. Like all the previous data and handler methods, this too will be passed through props.

      If you use TypeScript, import the TodoItemInterface interface from interfaces.ts and use it to type the props of this component. After that, type the onChange handler on the input element with React.ChangeEvent<HTMLInputElement>, because you are attaching an onChange handler to an input element.

      // Import dependencies
      import * as React from 'react'
      // Import interfaces
      import { TodoItemInterface } from './../interfaces'
      // TodoItem component
      const TodoItem = (props: TodoItemInterface) => {
        return (
          <div className='todo-item'>
            <div onClick={() => props.handleTodoComplete(props.todo.id)}>
              {props.todo.isCompleted ? (
                <span className='todo-item-checked'>✔</span>
              ) : (
                <span className='todo-item-unchecked' />
              )}
            </div>
            <div className='todo-item-input-wrapper'>
              <input
                value={props.todo.text}
                onBlur={props.handleTodoBlur}
                onChange={(event) => props.handleTodoUpdate(event, props.todo.id)}
              />
            </div>
            <div className='item-remove' onClick={() => props.handleTodoRemove(props.todo.id)}>
              ⨯
            </div>
          </div>
        )
      }

      export default TodoItem

      Todo list component

      The todo list will be the second component you will create. This component will be very simple. It will accept the handler methods for the TodoItem you’ve just created, and an array of todo objects, through props. The component itself will contain one div as a wrapper element.

      Inside this div will be a list, one ul element. Inside this element, you will use map() to iterate over the array of todo objects and create one li element with one TodoItem component for every todo object. You will then pass the individual todo objects to the TodoItem component, along with the handler methods.

      For TypeScript, remember to import TodoListInterface interface and use it to type the props of the TodoList component.

      // Import dependencies
      import * as React from 'react'
      // Import TodoItem
      import TodoItem from './todo-item'
      // Import interfaces
      import { TodoListInterface } from './../interfaces'
      // TodoList component
      const TodoList = (props: TodoListInterface) => {
        return (
          <div className='todo-list'>
            <ul>
              {props.todos.map((todo) => (
                <li key={todo.id}>
                  <TodoItem
                    todo={todo}
                    handleTodoUpdate={props.handleTodoUpdate}
                    handleTodoRemove={props.handleTodoRemove}
                    handleTodoComplete={props.handleTodoComplete}
                    handleTodoBlur={props.handleTodoBlur}
                  />
                </li>
              ))}
            </ul>
          </div>
        )
      }

      export default TodoList

      Todo form component

      The todo “form” is the first component where you will use the useState React hook. It is also here that you will use the useRef React hook. You will use the useState hook to store the text passed to the input element, the text for the todo title, before you create a new todo item.

      You will use the useRef hook to store a reference to this input. The way you create a new todo is by pressing the “Enter” key while you type some text inside that input. So, when you press the “Enter” key, you will use this reference to reset the input, by setting its value to an empty string. This input will also have two handler methods, for onChange and onKeyPress.

      These two handler methods will be handleInputChange and handleInputEnter. The first, for onChange, will update the form state when you write something into the input, some todo title/text. The second, for onKeyPress, will create a new todo object and reset the input field when it detects the “Enter” key.

      Do you remember the shortid package? This is where you are going to use this dependency. Inside the handleInputEnter function, inside the new todo object, you will use shortid to generate a unique id for every new todo. Don’t worry. This is simple. All you need is to call generate() on shortid and your new id is ready.

      Lastly, a few things for TypeScript. First, import the TodoInterface and TodoFormInterface interfaces. Then, use the TodoInterface interface to type the new todo object inside handleInputEnter, and the TodoFormInterface interface to type the props of TodoForm. Then, type the useRef hook with HTMLInputElement and initialize it to null.

      After that, there are also two events. The first one you can type with React.ChangeEvent<HTMLInputElement>, because you are attaching an onChange handler to an input element. The second you can type with React.KeyboardEvent, because you are “listening” for a key press.

      // Import dependencies
      import * as React from 'react'
      import shortid from 'shortid'
      // Import interfaces
      import {TodoInterface, TodoFormInterface} from './../interfaces'
      // Todo form component
      const TodoForm = (props: TodoFormInterface) => {
        // Create ref for form input
        const inputRef = React.useRef<HTMLInputElement>(null)
        // Create form state
        const [formState, setFormState] = React.useState('')
        // Handle todo input change
        function handleInputChange(event: React.ChangeEvent<HTMLInputElement>) {
          // Update form state with the text from input
          setFormState(event.target.value)
        }
        // Handle 'Enter' in todo input
        function handleInputEnter(event: React.KeyboardEvent) {
          // Check for 'Enter' key
          if (event.key === 'Enter') {
            // Prepare new todo object
            const newTodo: TodoInterface = {
              id: shortid.generate(),
              text: formState,
              isCompleted: false
            }
            // Create new todo item
            props.handleTodoCreate(newTodo)
            // Reset the input field
            if (inputRef && inputRef.current) {
              inputRef.current.value = ''
            }
          }
        }
        return (
          <div className='todo-form'>
            <input
              ref={inputRef}
              type='text'
              placeholder='Enter new todo'
              onChange={event => handleInputChange(event)}
              onKeyPress={event => handleInputEnter(event)}
            />
          </div>
        )
      }

      export default TodoForm

      Main (index) component

      You are almost done. There is just one component you need to build. This is the main TodoListApp component. This component will implement methods for creating, updating, removing and completing your todos. This will be done via handleTodoCreate, handleTodoUpdate, handleTodoRemove and handleTodoComplete methods.

      It is also this component where you will store all existing todos, using the useState React hook. So, let’s build this component, step by step.


      First, as usual, you will need to import the dependencies for React. Now, you will also need to import the render method from react-dom. This is because you will render the TodoListApp component, your todo list app, in the DOM.

      You will also import TodoForm and TodoList components so you can later return, and render, them. When you import these components you should also import the main external CSS stylesheet, so you can later style your todo list app.

      For TypeScript, you will need to import the TodoInterface interface. You will use this interface a couple of times, to type todos state and some method parameters.

      // Import dependencies
      import * as React from 'react'
      import { render } from 'react-dom'
      // Import components
      import TodoForm from './components/todo-form'
      import TodoList from './components/todo-list'
      // Import interfaces
      import { TodoInterface } from './interfaces'
      // Import styles
      import './styles/styles.css'

      Creating todo list app state

      The state of your todo list app will be simple. It will be an array of objects. One object will represent one existing todo. In the beginning, you will initialize the todos state as an empty array.

      For TypeScript, make sure to use the TodoInterface interface along with []. This will tell TypeScript you are “talking” about an array of todo objects, not just one todo object.

      // TodoListApp component
      // ....
      const TodoListApp = () => {
        const [todos, setTodos] = React.useState<TodoInterface[]>([])
        // ...

      Creating new todos

      The first method for your todo list app will be the method to create new todos, the handleTodoCreate method. This method will accept one parameter, a todo object. The way it will work is simple. First, it will create a new todo list app state, newTodosState, by copying the current todo list app state.

      Next, it will take the todo object you pass as a parameter when you call this method, and add that todo to the new todo list app state, newTodosState, using the push() method. After that, it will update the todo list app state, using the setTodos() method.

      About TypeScript: you will use the TodoInterface interface to type the todo parameter. You will also use this interface to type the newTodosState variable. In this case, you will again have to specify that you want an array of todo objects, by adding [] after TodoInterface.

        // ....
        // Creating new todo item
        function handleTodoCreate(todo: TodoInterface) {
          // Prepare new todos state
          const newTodosState: TodoInterface[] = [...todos]
          // Update new todos state
          newTodosState.push(todo)
          // Update todos state
          setTodos(newTodosState)
        }
        // ....
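      The copy-and-push pattern can also be checked outside React, with plain arrays. Here is a minimal standalone sketch (the todos are made up for illustration; this is not part of the app):

```typescript
// Standalone sketch of the copy-and-push update used by handleTodoCreate.
interface TodoInterface {
  id: string;
  text: string;
  isCompleted: boolean;
}

const todos: TodoInterface[] = [
  { id: 'a1', text: 'Buy milk', isCompleted: false }
]

// Copy the current state, then push the new todo into the copy.
const newTodo: TodoInterface = { id: 'b2', text: 'Walk the dog', isCompleted: false }
const newTodosState: TodoInterface[] = [...todos]
newTodosState.push(newTodo)

console.log(newTodosState.length) // 2
console.log(todos.length)         // 1 – the original array is untouched
```

      Because the spread operator creates a new array, setTodos() receives a fresh reference, which is what tells React the state actually changed.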

      Updating existing todos

      Next, you will need a method to update existing todos, the handleTodoUpdate method. This method will accept two parameters, event and id. The id is the unique id generated for every todo item/object. Similarly to handleTodoCreate, this method will also start by creating a new todo list app state, newTodosState, by copying the current todo list app state.

      Next, it will use the find() method to iterate over the newTodosState variable and find the correct todo item to update, using the id passed as an argument. When it finds the correct todo item/object, it will change the value of its text key. The new value will come from the value of the input inside the specific todo item.

      The last step is updating the todo list app state, using newTodosState and the setTodos() method.

      For TypeScript, use the TodoInterface interface to type the todo parameter passed to the find() method. Use it also for the newTodosState variable, along with [] after TodoInterface. Lastly, type the id parameter as a string.

        // ....
        // Update existing todo item
        function handleTodoUpdate(event: React.ChangeEvent<HTMLInputElement>, id: string) {
          // Prepare new todos state
          const newTodosState: TodoInterface[] = [...todos]
          // Find correct todo item to update
          newTodosState.find((todo: TodoInterface) => todo.id === id)!.text = event.target.value
          // Update todos state
          setTodos(newTodosState)
        }
        // ....

      Removing existing todos

      Removing todos will be done using the filter() method. First, you will create a new todo list app state, newTodosState, by copying the current todo list app state. During this, you will use the filter() method to remove the todo you want to remove. This will be done by comparing the id of all todos with the id of the todo you want to remove.

      When this is done, you will use this new, filtered, state to update the todos state with the setTodos() method.

      For TypeScript, use the TodoInterface interface to type the todo parameter passed to filter() method. Then, use it also for the newTodosState variable, along with [] after the TodoInterface. Finally, type the id parameter as a string.

        // ....
        // Remove existing todo item
        function handleTodoRemove(id: string) {
          // Prepare new todos state
          const newTodosState: TodoInterface[] = todos.filter((todo: TodoInterface) => todo.id !== id)
          // Update todos state
          setTodos(newTodosState)
        }
        // ....
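      The filter() step can also be verified in isolation. A quick standalone sketch (with hypothetical ids, not part of the app):

```typescript
// Standalone sketch of the filter() removal used by handleTodoRemove.
interface TodoInterface {
  id: string;
  text: string;
  isCompleted: boolean;
}

const todos: TodoInterface[] = [
  { id: 'a1', text: 'Buy milk', isCompleted: false },
  { id: 'b2', text: 'Walk the dog', isCompleted: true }
]

// Keep every todo whose id does NOT match the one being removed.
const newTodosState: TodoInterface[] = todos.filter((todo: TodoInterface) => todo.id !== 'a1')

console.log(newTodosState.length)  // 1
console.log(newTodosState[0].id)   // 'b2'
```

      Note that filter() already returns a new array, so no explicit copy with the spread operator is needed here.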

      Completing todos

      The method for completing todos will look very similar to the handleTodoUpdate method. First, it will copy the current todo list app state and store it in the newTodosState variable. Then, it will use the find() method to find the specific todo item/object in the todos state.

      This time, it will negate the value of the isCompleted key of that specific todo item/object. After this, it will use the setTodos method to update the todos state.

      Now, about TypeScript. First, use the TodoInterface interface to type the todo parameter passed to the find() method. Next, use this interface also for the newTodosState variable, again with [] after TodoInterface. The last type will be for the id. This will be a string.

        // ....
        // Check existing todo item as completed
        function handleTodoComplete(id: string) {
          // Copy current todos state
          const newTodosState: TodoInterface[] = [...todos]
          // Find the correct todo item and toggle its 'isCompleted' key
          const todoToComplete = newTodosState.find((todo: TodoInterface) => todo.id === id)!
          todoToComplete.isCompleted = !todoToComplete.isCompleted
          // Update todos state
          setTodos(newTodosState)
        }
        // ....
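      Here is the find-and-toggle step in isolation, as a standalone sketch with a made-up todo (not part of the app):

```typescript
// Standalone sketch of the find-and-toggle used by handleTodoComplete.
interface TodoInterface {
  id: string;
  text: string;
  isCompleted: boolean;
}

const todos: TodoInterface[] = [
  { id: 'a1', text: 'Buy milk', isCompleted: false }
]

// Copy the state, find the matching todo and negate its isCompleted flag.
const newTodosState: TodoInterface[] = [...todos]
const todoToComplete = newTodosState.find((todo: TodoInterface) => todo.id === 'a1')!
todoToComplete.isCompleted = !todoToComplete.isCompleted

console.log(newTodosState[0].isCompleted) // true
```

      One caveat: the spread operator makes a shallow copy, so the todo objects themselves are shared between todos and newTodosState. For this tutorial that is fine, but a deeper copy would be needed if you wanted to keep the previous state pristine.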

      Ensuring every todo has title

      One last thing. When you edit an existing todo, there should be some warning if you leave the text/title empty. To get this done, you can watch for changes on the input element inside every todo. Then, you can check that its value is not an empty string, i.e. that the length of the value is bigger than 0.

      If there is an empty string, you will add a specific CSS class. When you input some text, you will remove that CSS class. This CSS class will mark the input with a red border. You will define this class in your CSS stylesheet later.

      As usual, the TypeScript. This will be quick. All there is to type is the event passed as a parameter. Since you are attaching an onChange event handler to an input element, you can use React.ChangeEvent<HTMLInputElement>.

        // ....
        // Check if todo item has title
        function handleTodoBlur(event: React.ChangeEvent<HTMLInputElement>) {
          if (event.target.value.length === 0) {
            event.target.classList.add('todo-input-error')
          } else {
            event.target.classList.remove('todo-input-error')
          }
        }
        // ....

      Returning all components

      Your todo list app is almost finished. Now you need to take all the components you’ve built so far, and imported in this component, and return them. Make sure to provide all components with the necessary props. After that, you can use the render() method and render TodoListApp in the DOM.

        // ...
        return (
          <div className='todo-list-app'>
            {/* Todo form component */}
            <TodoForm
              todos={todos}
              handleTodoCreate={handleTodoCreate}
            />
            {/* Todo list component */}
            <TodoList
              todos={todos}
              handleTodoUpdate={handleTodoUpdate}
              handleTodoRemove={handleTodoRemove}
              handleTodoComplete={handleTodoComplete}
              handleTodoBlur={handleTodoBlur}
            />
          </div>
        )
      }

      // Render the App in the DOM
      const rootElement = document.getElementById('root')
      render(<TodoListApp />, rootElement)

      Putting it all together

      You wrote a lot of code in this main component. Let’s put it all together to make it clearer.

      // Import dependencies
      import * as React from 'react'
      import { render } from 'react-dom'
      // Import components
      import TodoForm from './components/todo-form'
      import TodoList from './components/todo-list'
      // Import interfaces
      import { TodoInterface } from './interfaces'
      // Import styles
      import './styles/styles.css'
      // TodoListApp component
      const TodoListApp = () => {
        const [todos, setTodos] = React.useState<TodoInterface[]>([])
        // Creating new todo item
        function handleTodoCreate(todo: TodoInterface) {
          // Prepare new todos state
          const newTodosState: TodoInterface[] = [...todos]
          // Update new todos state
          newTodosState.push(todo)
          // Update todos state
          setTodos(newTodosState)
        }
        // Update existing todo item
        function handleTodoUpdate(event: React.ChangeEvent<HTMLInputElement>, id: string) {
          // Prepare new todos state
          const newTodosState: TodoInterface[] = [...todos]
          // Find correct todo item to update
          newTodosState.find((todo: TodoInterface) => todo.id === id)!.text = event.target.value
          // Update todos state
          setTodos(newTodosState)
        }
        // Remove existing todo item
        function handleTodoRemove(id: string) {
          // Prepare new todos state
          const newTodosState: TodoInterface[] = todos.filter((todo: TodoInterface) => todo.id !== id)
          // Update todos state
          setTodos(newTodosState)
        }
        // Check existing todo item as completed
        function handleTodoComplete(id: string) {
          // Copy current todos state
          const newTodosState: TodoInterface[] = [...todos]
          // Find the correct todo item and toggle its 'isCompleted' key
          const todoToComplete = newTodosState.find((todo: TodoInterface) => todo.id === id)!
          todoToComplete.isCompleted = !todoToComplete.isCompleted
          // Update todos state
          setTodos(newTodosState)
        }
        // Check if todo item has title
        function handleTodoBlur(event: React.ChangeEvent<HTMLInputElement>) {
          if (event.target.value.length === 0) {
            event.target.classList.add('todo-input-error')
          } else {
            event.target.classList.remove('todo-input-error')
          }
        }
        return (
          <div className='todo-list-app'>
            <TodoForm
              todos={todos}
              handleTodoCreate={handleTodoCreate}
            />
            <TodoList
              todos={todos}
              handleTodoUpdate={handleTodoUpdate}
              handleTodoRemove={handleTodoRemove}
              handleTodoComplete={handleTodoComplete}
              handleTodoBlur={handleTodoBlur}
            />
          </div>
        )
      }

      // Render the App in the DOM
      const rootElement = document.getElementById('root')
      render(<TodoListApp />, rootElement)


      Your todo list app is ready to go. Well, almost. There is still a lot of room for styling. Here are some styles you can use to make your todo list app look better.

      /* Default styles */
      html {
        box-sizing: border-box;
      }
      *,
      *::before,
      *::after {
        box-sizing: inherit;
      }
      body {
        margin: 0;
        min-height: 100vh;
      }
      #root,
      .todo-list-app {
        display: flex;
        flex-flow: column nowrap;
      }
      #root {
        align-items: center;
        width: 100%;
      }
      /* Todo list app styles */
      .todo-list-app {
        padding-top: 32px;
        width: 100%;
        max-width: 480px;
      }
      /* Todo form styles */
      .todo-form input,
      .todo-item {
        border: 1px solid #ececec;
      }
      .todo-form input {
        padding: 0 14px;
        width: 100%;
        height: 48px;
        transition: .25s border ease-in-out;
      }
      .todo-form input:focus {
        outline: 0;
        border: 1px solid #3498db;
      }
      /* Todo list styles */
      .todo-list ul {
        padding: 0;
        margin: 0;
      }
      .todo-list li {
        list-style-type: none;
      }
      /* Todo item styles */
      .todo-item {
        display: flex;
        flex-flow: row nowrap;
        align-items: center;
        padding: 8px;
      }
      .todo-form + .todo-list ul .todo-item {
        border-top: 0;
      }
      .todo-item-input-wrapper {
        flex-grow: 1;
        padding: 0 16px;
      }
      .todo-item input {
        width: 100%;
        border: 0;
        border-bottom: 1px solid transparent;
        transition: .25s border-bottom ease-in-out;
      }
      .todo-item input:focus {
        outline: 0;
        border-bottom: 1px solid #3498db;
      }
      .todo-item .todo-input-error {
        border-bottom: 1px solid #e74c3c;
      }
      .todo-item span {
        display: flex;
        align-items: center;
        justify-content: center;
        width: 32px;
        height: 32px;
        border-radius: 50%;
        border: 1px solid #ececec;
        transition: .25s all ease-in-out;
      }
      .todo-item-unchecked:hover {
        background: hsla(168, 76%, 42%, .25);
        border: 1px solid hsla(168, 76%, 42%, .25);
      }
      .todo-item-checked {
        color: #fff;
        background: #1abc9c;
        border: 1px solid #1abc9c;
      }
      .item-remove {
        display: flex;
        padding-left: 8px;
        padding-right: 8px;
        font-size: 28px;
        cursor: pointer;
        line-height: 1;
        color: #ececec;
        transition: .25s color ease-in-out;
      }
      .item-remove:hover {
        color: #111;
      }

      Conclusion: How to Build a Todo List App with React Hooks and TypeScript

      Congratulations, you’ve just built your own todo list app using React hooks and TypeScript! However, you don’t have to stop here. So, go ahead. Take this todo list app and make it better. Think about what features you would like it to have. Then, don’t wait for anything. Try to implement them by yourself. Have fun!

      If you liked this article, then please consider subscribing.


      About The Author

      Suzanne Scacca is a former WordPress implementer, trainer and agency manager who now works as a freelance copywriter. She specializes in crafting marketing, web …

      We’ve all dealt with analysis paralysis before. Do I choose vanilla, strawberry or one of the 30 other flavors? And do I go with the cone or cup? When it comes to putting our visitors on the spot, giving them too many options hurts their decision-making ability along with how they feel about the experience as a whole. While you can’t do anything about how much “stuff” is on your site, you can design it in a way that makes it easier for your visitors to make a choice.

      As a web designer, you do your best to remove friction from the decision-making process. You place only one CTA above the fold. You keep interactive elements to a minimum. You make the menu only as large as it needs to be.

      But what happens when the content itself causes analysis paralysis?

      There’s an overabundance of choice all around the web, from e-commerce stores with thousands of products to content generation machines pushing out new posts every day. While you can’t do anything to stop the flood of information or items going out to your visitors, you can design your interfaces in a way that makes the decision-making process easier to bear. What’s more, you can help them walk away feeling more confident with their choice, too.

      Let’s look at what it is about the psychology of choice that can be detrimental for conversions and what you can do to keep your PWA visitors from succumbing to it.

      Why Analysis Paralysis Is Hurting Your Conversion Rate

      Paralysis by analysis is what happens when someone finds a situation or decision too overwhelming or difficult. Despite thinking over the circumstances or options, they’re unable to make a clear choice, often leading to no action at all.

      It’s the exact opposite of what we want to happen with our visitors and customers. And, yet, we see it all the time.

      Take Amazon, with its hundreds of millions of products. Let’s say a shopper is looking for socks for an upcoming snowboarding trip. So, they search for “snowboard socks”:

      A search for “snowboard socks” on Amazon yields 728 results. (Source: Amazon)

      There are 728 matching results for snowboard socks. While that’s nothing compared to the size of Amazon’s marketplace, it’s still way too many results to sift through on one’s own.

      So, the shopper decides to narrow their search down to “knee-high snowboard socks antimicrobial compression”, which hits all of the key features they’re looking for:

      A search for “knee high snowboard socks antimicrobial compression” on Amazon yields 8 results. (Source: Amazon)

      That takes the list of 728 products down to a more manageable 8. However, there are a number of problems with this.

      For starters, Amazon doesn’t actually present only eight options. The first thing it shows is a banner with sponsored results:

      Amazon displays sponsored products on a search results page before organic products. (Source: Amazon)

      The next two rows do contain organic search results. However, the products either don’t have any reviews or high enough ratings to rave about.

      To make matters worse, Amazon adds three more rows of sponsored products to the end of this page:

      An Amazon search results page promising 8 results is full of sponsored products. (Source: Amazon)

      Essentially, Amazon has tripled the number of products the shopper has to look at now. Not only that, but one could argue that it’s polluted the search process with so many paid products.

      It’s just a poor experience all around.

      It’s a similar story on mobile. The main difference is that, when the narrow search is applied, only five results remain. That said, Amazon continues to muddle search results by prioritizing sponsored posts. This is what the shopper sees above the fold:

      Amazon search results display only sponsored products above the fold on mobile. (Source: Amazon)

      Once the shopper scrolls, they’ll see two organic products (with no ratings, much less) before being shown yet another sponsored product:

      Amazon fills mobile search results with paid product listings. (Source: Amazon)

      What should have been a succinct product search page ends up running on and on with sponsored listings and Amazon recommendations. As you can imagine, this is an even bigger problem for mobile shoppers who don’t have the patience or time to waste.

      How To Simplify Decision-making With Design

      Psychologist Barry Schwartz gave a TED talk on this exact problem and explained how it doesn’t just lead to abandoned purchases, but to less satisfying purchases.

      Essentially, Schwartz argues that too many choices cause consumers to:

      • Have too-high expectations whereby no option ever seems perfect or satisfying.
      • Focus on minute details or features they missed out on by not making a different choice.
      • Regret the option they settled on even if it proved to be the best.
      • Blame themselves for spending too much time analyzing a decision and still making the “wrong” choice.

      In other words, an abundance of choice puts your audience in the wrong state of mind. And if your site or PWA can’t afford to process returns or watch visitors walk away at the rate that a company like Amazon can, then you can’t let analysis paralysis become part of the decision-making process to begin with.

      Here are some things you can do to make decision-making more bearable, even with too many options available.

      Prioritize Your “Big Rock”

      There’s a productivity hack called the “Big Rocks” method. Essentially, it says that if you want to get more stuff done, you have to take on your biggest challenge or priority first thing. All of the smaller tasks and requests nagging at you get slotted in once you’ve tackled the most critical ones.

      And now I’m suggesting you do something similar with your design.

      Basically, what you want to do is this:

      • Look at all of the “stuff” you’re trying to put in front of your visitors.
      • Ask yourself (or your client): “Which one takes top priority right now?”
      • Then, stick it to the top of your home page.

      Why do this? Well, for starters, it’s much less overwhelming to show one thing than to throw everything you have at your visitors right away — or to force them to dig around and figure out where to start. Plus, chances are good that there’s something especially pressing you want every visitor to see.

      BarkShop shows us one way you might do this:

      The home page of the BarkShop website displays a Halloween sale banner. (Source: BarkShop)

      Notice how the top of the page isn’t littered with an inventory. Instead, the primary focus is on the Halloween sale.

      This is clearly BarkShop’s big rock right now. If they don’t get their Halloween items out the door by October 31, they run the risk of losing money on the seasonal inventory. And while they could leave it up to their visitors to assume there are Halloween toys and treats available, why do that?

      It’s hard enough showing up on a site like this and deciding what you’re going to buy for your dog this month. So, let your big rock become their guide.

      You can see that it’s not just about the Halloween line either. Just below the banner, BarkShop has prioritized its top-trending toys. This is another trick you can use when designing product inventories and news sites. Display the most popular or top-rated/shared items first. There’s a greater likelihood they’re going to click on something that others have shown interest in than a bunch of random items they have to explore on their own.

      Another way you might tackle big rocks in design is to go the way of Apple:

      The Apple home page banner advertises only its iPhone 11 Pro. (Source: Apple)

      Again, Apple could’ve shown a bunch of its iPhones and iPads here or pointed visitors to different categories of products and accessories to explore. Instead, it’s put its big rock right up front: the iPhone 11 Pro.

      Sure, there are probably plenty of Apple customers who come here looking for older models. But what makes more sense?

      • Showing a bunch of similar-looking smartphone graphics that visitors are immediately put into analysis paralysis mode with?
      • Or showing them the latest model that everyone wants?

      You can do this with other kinds of websites, too. Blogs, for instance, will use the sticky post feature to show off their “big rocks” to every visitor who stops by. This might be the most popular post of all time or it could be something relevant to something happening at the moment.

      Whatever it is, there’s a conscious decision made to stop visitors in their tracks and give them a moment of calm before they have to enter decision-making mode.

      Gate Off the Choices

      While you want your visitors to know that there’s a plenitude of things available, you don’t need to tell them how much there is. It’s only going to add to the pressure of the decision-making process.

      So, whenever you can, gate off the choices.

      eHealth Insurance is a website where Americans can purchase health insurance for themselves or their companies. With a wide variety of healthcare providers and dozens of plans available for each, a service like this is necessary for insurance-seeking U.S. citizens.

      The eHealth Insurance website gates off its insurance provider options and plans. (Source: eHealth Insurance)

      The only decision it asks visitors to make is what kind of insurance they’re looking for. It then asks them to fill out a simple enough form. It’s what eHealth Insurance uses to pare down the options:

      The eHealth Insurance form asks qualification questions of visitors to help list the right options. (Source: eHealth Insurance)

      Once filled out, eHealth Insurance shows the user a list of providers and plans that are available in their area. This keeps the consumer from having to:

      1. Visit individual health insurance websites to do their own research.
      2. Sift through hundreds of options all at once (some of which they’d probably be ineligible for and others that would just be a bad fit).

      Websites like these typically allow you to compare up to three items at once, making the decision-making process even simpler.

      Another way to gate off your choices is by asking visitors to start narrowing their choices from the moment they arrive as Sotheby’s International Realty does:

      The first thing the Sotheby’s International Realty PWA asks visitors to do is narrow down their search results. (Source: Sotheby’s International Realty)

      This way, consumers aren’t distracted by all the great-looking properties or even the attractive prices. They’re still focused on taking action (e.g. finding a rental), but it’s more about taking baby steps towards the desired result which makes it much less intimidating. It’s also going to lead to a more satisfying result in the end as they won’t spend time looking at rentals they can’t afford, that don’t allow cats or that are too far away from their kids’ schools.

      Sotheby’s helps visitors narrow their searches by location. (Source: Sotheby’s International Realty)

      The next page of the Sotheby’s search process starts to show matching results, but not before letting them know that there are “55 Luxury Homes for Rent in London”. And if that number is just too much to handle, that’s fine. Directly to the right of that note is a filters widget.

      Sotheby’s helps visitors narrow down their choices even further with comprehensive filters. (Source: Sotheby’s International Realty)

      Sotheby’s filters are great. Not only are all the essentials covered, but it’s even divided up the filters by category.

      Let’s recap how smooth this experience is going to be for already-anxious home-buyers or renters:

      • The first thing they see on the Sotheby’s home page is a search bar.
      • If they do the search, the next thing they see is the number of properties in the area.
      • If that number is too intimidating, the filters widget is there to help them narrow the list even more.

      By keeping the rentals off of the home page, and even below the fold on the internal page, Sotheby’s can control how calmly visitors go through the decision-making process.

      Enable Side-by-side Comparison

      Internally, there’s not a lot you can do about choice overload except give your visitors a really great set of sorting and filtering tools. Even then, once they start to see lists of related products, recommendations based on previous purchases and so on, the analysis paralysis is going to creep back in.

      One thing you can do to make this less stressful is to include side-by-side comparisons.

      It’s similar to laying out pricing plans side-by-side. Rather than make prospective customers review their options one-by-one, stack up the top few choices. Then, align the specifications so it’s as easy as looking across a row to weed something out because it doesn’t fit the budget or the product is too large.

      Although Amazon doesn’t handle analysis paralysis flawlessly, I do like what it does with its side-by-side product comparisons:

      Amazon comparison table
      Amazon shows a table of similar products to customers for easy comparison. (Source: Amazon) (Large preview)

      I can’t tell you how many times I’ve struggled to make a decision on Amazon, only to discover one of these comparison tables and immediately make up my mind. It’s just so much easier to see a bunch of lookalike products all at once and to say “This one definitely won’t fit in my kitchen” or “That’s the exact color I’m looking for”.

      You can do this with other kinds of websites, too. Verizon Wireless, for example, uses a side-by-side comparison to make choosing between its plans easier.

      Verizon Wireless 2GB wireless plan details
      Verizon Wireless displays the details for its 2GB wireless plan. (Source: Verizon Wireless) (Large preview)

      There are scroller dots below this block that indicate to customers that there’s more to be found. All they have to do is scroll to reveal more plan options. And because the scroller breadcrumbs are kept to a reasonable number, this doesn’t seem like too burdensome a task.

      The next block over, for instance, contains the information for the 4GB plan:

      Verizon Wireless 4GB wireless plan details
      Verizon Wireless displays the details for its 4GB wireless plan. (Source: Verizon Wireless) (Large preview)

      Even though the specs can’t be seen side-by-side, the details are simply broken down and consistently laid out. So, when someone moves from plan to plan, the same details are in the same place, which makes flipping back and forth quite easy.

      Another thing I really like about this is the summary provided at the top of each card. The 2GB one tells customers right away that it’s the best plan if you mostly talk and text whereas 4GB is better if you stream a lot of content and surf the web. This way, if all the technical details don’t mean much to customers, the layman’s summary will help them more confidently decide.

      While I realize side-by-side comparisons might be something you’d normally try to avoid on mobile screens, these two examples show that it is possible to do so without introducing too much friction to the experience.

      Wrapping Up

      As I said before, you can’t do anything about scaling back the multitude of options your clients want to present to their audiences. If they want to sell thousands of products to customers who are demanding them, then good for them.

      That said, the way you design around these options can have an impact on how well or poorly they’re received. Just remember what Barry Schwartz teaches us about the psychology of choice. If your visitors, users or customers walk away feeling overwhelmed, drained or disappointed with the experience, it’s going to cost you. So, be mindful of how much you present the options to them.




      Styles are applied to a web page using CSS selectors, which make it possible for you to target a specific element or set of elements. Usually, when you are starting out with CSS, you’ll make use of element selectors before pivoting to classes and IDs.

      As powerful as these selectors are, they can be quite limiting, making it impossible to select an element based on its state. If you’ve ever worked with frontend frameworks like React and Vue, you probably understand what state means. I am not referring to the overall state of the application, but the state of elements within the HTML code.

      The humble link is a simple example – it can be created using an anchor tag, <a>. The link can then have different states:

      • before the link is clicked
      • when a user hovers a mouse cursor over the link
      • when the link has been visited
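
      Each of these states has a corresponding pseudo-class. A minimal sketch:

      ```css
      /* An unvisited link */
      a:link {
        color: blue;
      }

      /* While the cursor hovers over the link */
      a:hover {
        color: red;
      }

      /* After the link has been visited */
      a:visited {
        color: purple;
      }
      ```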

      Selectors used to target elements depending on their state are called pseudo-classes. nth-child and nth-of-type are pseudo-classes which select an element based on its position (the position of the element is also considered a state). Let’s take a closer look.

      nth-child and nth-of-type pseudo classes

      How to Work with :nth-child()

      The nth-child pseudo-class is used to select an element based on its position in a list of siblings. There are a couple of things to take note of here:

      • Siblings: elements that share the same parent. In selecting a sibling, you’re trying to target a specific child of the chosen parent. A ul and its bunch of li elements are an example of parent and children elements.
      • The position of the element in the list of its siblings is determined by values that are passed to the pseudo-class.

      We’ll be making use of the HTML code below to learn how nth-child works.

      <ul>
        <li>Ruby</li>
        <li>Python</li>
        <li>JavaScript</li>
        <li>Go</li>
        <li>PHP</li>
        <li>Scala</li>
        <li>Java</li>
        <li>Kotlin</li>
        <li>C#</li>
        <li>C++</li>
      </ul>

      There are two different ways of specifying the position of the element: the use of keywords and functional notations.


      If using keywords you can specify elements whose position is either an even or an odd number, like this:

      ul :nth-child(even) {
        color: red;
      }

      ul :nth-child(odd) {
        color: gray;
      }

      This gives us the following:


      When we specify a particular number (like the 2 in this example), we’re selecting the element in the list of siblings whose position matches the number we’ve passed. Here we’re targeting the second child of the unordered list.

      ul {
        color: gray;
      }

      ul :nth-child(2) {
        color: red;
      }

      You can probably imagine what the result looks like:

      One common pitfall here: after specifying the element, you might add a new element (like a heading) to the parent without realising that the selected element will change. To show this, let’s add a heading to the list like this:

      <ul>
        <h2>Programming Languages</h2>
        <li>Ruby</li>
        <li>Python</li>
        <li>JavaScript</li>
        <li>Go</li>
        <li>PHP</li>
        <li>Scala</li>
        <li>Java</li>
        <li>Kotlin</li>
        <li>C#</li>
        <li>C++</li>
      </ul>

      Even though this is actually invalid use of HTML, browsers will still render it fine, and in this case the first li element will be selected as it is now the second child of the unordered list.


      :nth-child(An)

      Okay, now we’re taking it up a notch. Here the elements (plural) we select will be determined using functional notation: n denotes a counter and A represents the size of the cycle, giving us a sequence. For example, when we pass 2n, it will select elements in the sequence 2, 4, 6, and so on:

      ul :nth-child(2n) {
        color: red;
      }

      To see it in action, let’s go back to our HTML code and add a few items to the list. We’ll also make the list an ordered list so that we can clearly see the items’ numbers:

      <ol>
        <li>Ruby</li>
        <li>Python</li>
        <li>JavaScript</li>
        <li>Go</li>
        <li>PHP</li>
        <li>Scala</li>
        <li>Java</li>
        <li>Kotlin</li>
        <li>C#</li>
        <li>C++</li>
        <li>Cobol</li>
        <li>Fortran</li>
      </ol>

      Our result is as follows:

      :nth-child(An+B)

      Here we’ve added an extra calculation to our cycles: an offset, B. The elements whose position in the list of siblings matches the pattern An+B will get selected. We need to know how the calculation happens, so let’s try a functional notation like this:

      ol {
        color: gray;
      }

      ol :nth-child(3n+1) {
        color: red;
      }

      This will select the items whose position matches 1, 4, 7, 10, and so on:

      The calculation starts counting from 0, which is the initial value of n, and as such the elements to be styled will be calculated like this:

      • First Element: 3(0) + 1 = 1.
      • Second Element: 3(1) + 1 = 4.
      • Third Element: 3(2) + 1 = 7.
      • Fourth Element: 3(3) + 1 = 10.

      Think of it as an algebraic equation where the value of n increases arithmetically, and the element to be styled is the result of the equation. Here is another example, which you can edit yourself to see what happens:

      ol :nth-child(3n+2) {
        color: red;
      }

      You can also use this method to select even numbers, using the formula:

      ol :nth-child(2n+0) {
        color: red;
      }

      And odd numbers can be selected using:

      ol :nth-child(2n+1) {
        color: red;
      }

      How to Work with :nth-of-type()

      In all the examples we have seen for the nth-child pseudo-class, it is important to note that the goal is to select elements in a list of siblings. This does not take into account the element type. To ensure the selection is also scoped to a particular type, we can make use of the nth-of-type pseudo-class.

      To see this at work let’s edit the HTML code to look like this (again, this is technically misuse of HTML, but browsers will interpret it just fine):

      <ol>
        <p>This is a first paragraph</p>
        <li>Ruby</li>
        <li>Python</li>
        <li>JavaScript</li>
        <li>Go</li>
        <li>PHP</li>
        <p>Here is a paragraph</p>
        <li>Java</li>
        <li>Kotlin</li>
        <li>C#</li>
        <li>C++</li>
        <li>Cobol</li>
        <li>Fortran</li>
      </ol>

      To select the li elements whose position is an odd number, we can do this,

      ol li:nth-of-type(odd) {
        color: red;
      }

      which gives us:

      To select the li elements whose position is an even number, we would do

      ol li:nth-of-type(even) {
        color: red;
      }

      You may think that using nth-child would work just as effectively as long as you’re specifying the li, for example:

      ol li:nth-child(odd) {
        color: red;
      }

      but that isn’t the case. Try it yourself!
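
      To see why, compare the two selectors against the mixed list above (a sketch – exactly which items turn red depends on where the p elements sit among the li siblings):

      ```css
      /* nth-child counts ALL siblings, including the <p> elements,
         then checks that the element in that position is an <li>.
         Each <p> occupies a position of its own, shifting every
         following <li> and changing which ones match "odd". */
      ol li:nth-child(odd) {
        color: red;
      }

      /* nth-of-type counts only siblings of the same type (<li>),
         ignoring the <p> elements, so the 1st, 3rd, 5th… list items
         are always the ones selected. */
      ol li:nth-of-type(odd) {
        color: red;
      }
      ```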


      These pseudo-classes come in handy when you have to select specific elements in a list of siblings. To learn more about them, check the MDN Web Docs for nth-child() and nth-of-type().

      More CSS Fundamentals

      Learn more about CSS and CSS selectors with these fundamental guides.

      Collection of free jQuery mega menu examples and plugins.


      1. jQuery Mega Menu Examples
      2. jQuery Mega Menu Plugins


      • Jacob Lett

      Made with

      • HTML / CSS / JS

      About a code

      Bootstrap 4 Mega Dropdown Menu Navbar

      Mega Dropdown Menu Navbar – Bootstrap 4 code snippet example demo.

      Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

      Responsive: yes

      Dependencies: bootstrap.css, jquery.js, bootstrap-4-latest.js


      • Mike Torosian

      Made with

      • HTML / CSS / JS

      About a code

      Flexy Mega Menu

      I saw a gif of this style menu on Scout’s page on Dribbble and decided to take a shot at it. I thought it was pretty cool to add some subtle animation to the active menu, and I also wanted to use flexbox a bit to help lay this out.

      Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

      Responsive: yes

      Dependencies: jquery.js



      Made with

      • HTML / CSS / JS

      About a code

      Responsive Mega menu with Logo

      Responsive mega menu with logo in HTML, CSS and jQuery.

      Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

      Responsive: yes

      Dependencies: jquery.js


      • Martin Stanek

      Made with

      • HTML / CSS (Less) / JS

      About a code

      Responsive Bootstrap Mega Menu

      With very little effort, you can implement your own Bootstrap mega menu.

      Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

      Responsive: yes

      Dependencies: bootstrap.css, bootstrap-dropmenu.css, bootstrap.js, holder.js, jquery.js


      • Arjun Amgain

      Made with

      • HTML / CSS / JS

      About a code

      Responsive Mega Menu

      Responsive mega menu with blog, image and drop down.

      Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

      Responsive: yes

      Dependencies: animate.css, bootstrap.css, jquery.js


      • samir alley

      Made with

      • HTML / CSS / JS

      About a code

      Responsive Mega Menu

      Responsive mega navigation in jQuery.

      Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

      Responsive: yes

      Dependencies: jquery.js

      Made with

      • HTML / CSS / JS

      About a code

      hs Mega Menu – jQuery Plugin

      hs Menu is an easy-to-use mega menu plugin that helps you create a modern navigation system with amazing features.

      Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

      Responsive: yes

      Dependencies: jquery.js

      Made with

      • HTML / CSS / JS

      About a code


      jQuery plugin to fire events when user’s cursor aims at particular dropdown menu items. For making responsive mega dropdowns like Amazon’s.

      Compatible browsers: Chrome, Edge, Firefox, Opera, Safari

      Responsive: yes

      Dependencies: bootstrap.css, jquery.js, bootstrap.js


      Color is one of the most powerful tools in website design, especially when it comes to manipulating reading flow. As a rule, the standard reading path starts from the top left corner, with the exception of some Asian and Arab cultures. People move their eyes from left to right and gradually go down. This principle works everywhere. However, it does not mean that you cannot make some exceptions.

      For instance, you can set a tagline in a big font size, and it will instantly catch the eye. Another proven way to get people to change their normal behavior is to use a color that stands in sharp contrast to the other units of the design. One time-proven option to do the job, regardless of the environment and neighborhood, is neon color.

      Neon colors always command attention. It is in their nature. What’s more, since this year saw numerous millennial-targeted website designs filled with gradients and juvenile details, neon has become quite popular. It perfectly blends into that vibrant, youthful realm – emerging as a small trend of its own. Let us consider some representative examples.


      Greenleaf Logistics

      The homepage of Greenleaf Logistics is spiced up with several modern solutions that instantly catch the eye. For example, all the images are presented as morphing blobs. In such a bizarre environment, the neon coloring fits nicely into the general aesthetic.

      The logotype, call-to-action, and “go up” button are set in an almost screaming green. They are hard-to-miss focal points that force readers to follow the path established by the team.

      Example of Greenleaf Logistics

      Flowhub

      Much like the previous example, here neon effects are used on almost the same range of UI elements. Yet with some tiny differences.

      For example, the team behind Greenleaf Logistics highlighted only one word in the headline, whereas Flowhub has stressed the entire tagline, using it like a shadow that changes its angle depending on the mouse position. In addition, neon green accentuates not only words in the content, but also important details of the accompanying illustrations, images and icons.

      Example of Flowhub

      Herba Mojo

      Neon green is quite a universal tone that plays nicely with both light and dark themes. Consider Herba Mojo for showing it in practice.

      Here, the luminescent green is used for various details: logotype, language switcher, slider controls, CTA, and even product imagery. However, it does not feel overpowering. The dark environment saves the day by striking an excellent balance between the two.

      Example of Herba Mojo

      Danilo Demarco

      All of the previously mentioned projects feature quite a limited amount of fluorescent color. It is mostly scattered throughout the entire page, looking like bright splashes. But what if you want to use it as a big, bold spot? No problem. Consider the personal portfolio of Danilo Demarco.

      Here, kryptonite green occupies the lion’s share of the homepage. However, surprisingly, it does not overwhelm visitors or scare them away. It just works. It can be seen in every corner of the UI. It enriches big and small headlines, hover states, the mouse cursor, content sections, block reveal effects, etc. It rules the roost here, and no one will complain about that.

      Example of Danilo Demarco

      DesignGapp

      DesignGapp goes for a gorgeous pink that instantly reveals its beauty, placed against the bluish gradient-style background. It skillfully highlights the CTAs, playing nicely with the vibrant theme that runs through the entire website.

      Example of DesignGapp

      Kostumn 1

      Hot fuchsia and a fashion-related website are just destined to be together. It gives the design a sense of luxury and sophistication with ease. Even when used in tandem with a large font size, it does not ruin the overall harmony and looks pretty organic. It enriches titles in the slider and hover states of product names.

      Example of Kostumn 1

      Critical Techworks

      Critical Techworks employs one of the most iconic neon tones: rich turquoise. It looks outstanding inside a dark techno environment. All the details of the UI that are enriched with its charisma draw the attention from the outset.

      CTAs, text, headlines, elements of images and parts of the vehicle benefit from it. What’s more, there is an additional dark blue fluorescent tone. It nicely accompanies the turquoise, giving an extra edge to design.

      Example of Critical Techworks

      Active Theory

      Active Theory adopts the same solution as the previous example. The website also has a powerful techno vibe that oozes sophistication. Here, neon blue collaborates with line-style typography. Together these two make a sweet couple.

      That’s not all. There are some other fluorescent tones. They perfectly complete the theme, making the website feel one of a kind.

      Example of Active Theory

      Intelligent Artifacts

      The homepage of Intelligent Artifacts gets the most out of a pair of neon colors: turquoise and purple. Again, they are used against an almost black canvas that reveals their refined, dark nature.

      They almost glow and shine, instantly catching the eye. As befits, they are used to bring extra attention to essential details of the website such as call-to-action buttons, logotype, navigation, and even images.

      Example of Intelligent Artifacts

      Whoamama Design

      The team behind Whoamama Design has built its entire aesthetic around neon coloring. Neon colors are everywhere. The website feels electric, bizarre, and fancy. It is like a blast from the disco past.

      Some may consider it too much, while others may find all those eccentricities quite amusing. All in all, this unique oddness and singularity make the UI stand out from the crowd and bring attention towards the content.

      What else do you need in the harsh reality of fierce web competition? Sometimes you need to be a freak to get yourself heard.

      Example of Whoamama Design

      Neon colors are playful and whimsical. When used with a black canvas and somber tones, they reveal their sophisticated and intricate nature.

      They also perfectly fit different types of websites. Whether it is a serious company that develops products based on AI or a community platform that seeks like-minded users, neon color will find its place under the sun and make its contribution to the user experience.


      Chasing like counts on social media and seeking approval from your friends are known to take a toll on your mental health and self-esteem. Instagram took it upon itself to curb these behavioral ramifications when it began hiding likes from posts for some of its users earlier this year. The change is being tested in seven countries worldwide, and the US will join the list as soon as this week.

      At the WIRED25 conference last week, Instagram CEO Adam Mosseri confirmed the test’s expansion to the US, where just a couple of hundred users may see the updated feed section with hidden like counts on posts. You’ll still get to see the likes on your own posts; it’s only your followers that won’t. We aren’t sure when (or if) the feature will get out of its test phase for general availability.

      WATCH: Instagram CEO Adam Mosseri announces that the platform will start hiding likes for US audiences starting next week. It’s the latest step in Instagram’s quest to become the safest place on the internet. #WIRED25

      — WIRED (@WIRED) November 9, 2019

      The US will follow markets like Australia, Brazil, Canada, Ireland, Italy, Japan, and New Zealand, where the test has been underway since April. Even Instagram’s parent Facebook is experimenting with a similar implementation that could benefit its billions of users.

      I can already hear social media influencers crying.




      Imagine a seesaw with a flamingo on one side and a grizzly bear on another. How would you ever stabilize them? That is how most digital marketers feel when they ask me to help balance out business-first decisions and brand safety. What does that mean? Simply put, it’s the natural and growing conflict between the need to increase profits or market share and ensuring that marketing and sales efforts don’t negatively impact the positive attitudes of prospects and customers toward the organization. Simpler yet, it’s the balancing of opportunity and risk in digital marketing and sales.

      Balancing out these strategic and operational issues can appear complicated at first glance. But the uncomplicated place where I tend to start with anyone who calls me is understanding the specific growth or market challenges facing the organization and defining digital policy and practices around sentiment analysis.

      Brands from any and all verticals use sentiment analysis to understand prospect and customer reactions, opinions and behaviors toward products or services. But while the analysis methodology has long been used to measure the latest social media campaign, it can be used as the foundation for your broader marketing and sales efforts, telling you exactly how far and fast you can push your efforts without damaging your brand. So why isn’t everyone jumping on the bandwagon? Should you take the leap? Let’s examine some of the intricacies of sentiment analysis to ensure you can proceed with eyes wide open.

      The challenge of quantifying reputational risk 

      It is straightforward to tie a one-off large-scale event to brand and reputation impact. Consider a news story about a data breach or an accessibility lawsuit impacting your organization. Obviously we can calculate the loss of revenue, cost of recovery, and potential legal liability. Weighed against the cost of mitigation, we can derive a clear understanding of the risk/benefit scenario and make a business decision on the most logical path forward. What is much harder to measure is how broadly and for how long the news stories will continue to cause trust issues and ill will with prospects and customers.

      What I’ve found to be successful is to gather all (or as many as possible) mentions of the organization across any and all channels (e.g., news, social media, TV, radio, customer service recordings, customer surveys, user purchasing history, etc.) and use a text and data analytics engine to measure sentiment. That means identifying and categorizing opinions expressed in a piece of text in order to determine if the attitude toward the organization is positive, negative, or neutral. By tracking organizational reputation (and brand) in key demographics and markets, we can develop a solid set of sentiments that can help us track risks that impact hard-to-measure things such as influence, trust, and leadership. This approach allows us to quantify a reputational baseline. Against that baseline, we can measure trends over time or at specific events, and leverage an agile methodology to test how aggressively we can market and sell before we start to get close to a decline in that influence, trust, and leadership area. In other words, we can tell how far we can push before we encounter brand risk and start to negatively impact our reputation.

      Getting the full picture

      Creating a picture of your organization’s reputational risk goes beyond understanding how the entity is viewed in the marketplace. It requires the identification and quantification of the reputation of your products as well as those of your suppliers. That means understanding your entire digital ecosystem and measuring its brand risk in the context of your organization, products, and services. For example, I have a client that was involved in the AWS data exposure incident earlier this year. While the AWS relationship with my client wasn’t known well publicly, it still had a (marginal) negative impact on the brand.

      Each vendor, agency or independent consultant is part of your ecosystem. So are boards of directors (past and present), brand ambassadors and influencers, and anyone else who touches your brand. You should map them all out and, based on a matrix of prioritization, determine who should be included in your full-picture analysis. After all, there is risk associated with each entity. Conversely, if any one of those entities is seen favorably, you can also benefit from such awareness and sentiment.

      Managing and capitalizing on event-based risks 

      Let’s continue this discussion with my AWS example. Understanding that there was a small, but real, brand risk, we decided with leadership to proactively reach out to users, so that as news of the AWS breach began to spread, users had already been informed of what the organization knew about the incident and what it was doing to ensure consumer data was protected. The reputational risk measurement indicated that we managed to contain the negative blowback on the organization’s brand. It also indicated to executives the level of effort to put into communicating around AWS and the incident in the future. Lastly, it allowed us to collectively understand what kind of risk we might have with AWS going forward and whether there was a return on investment (ROI) to be gained by moving to a different hosting environment.

      The same approach that we used to determine the AWS incident risk and mitigate against it, devising a good response plan, could be used in a number of other scenarios to understand marketing and sales options for your organization. Consider for a moment the latest YouTube advertising scandal. Your organization could perform the same analysis used for the AWS sentiment analysis to understand impact on competitors and other operating companies advertising on YouTube. Based on the negative brand impact (if any), you could better understand the type of risk your business could incur and proceed to use YouTube advertising or, conversely, stop advertising in that channel.

      Will you keep your finger on the pulse of brand safety?

      By using sentiment analysis, you can keep your finger on the pulse of your brand safety risk and dial your digital marketing and sales activity up or down as appropriate, thereby delivering on the business’ bottom line. You can also minimize your exposure to brand-damaging events. With a measured approach, you can best balance your opportunity and risk and develop a better approach to marketing and sales. Moreover, you can develop the type of digital policies that will unleash creativity and innovation in the organization while keeping the business safe.

      Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

      About The Author

      Kristina is a digital policy innovator. For over two decades, she has worked with some of the most high-profile companies in the world and has helped them see policies as opportunities to free the organization from uncertainty, risk and internal confusion. Kristina has a BA in international studies and an MBA in international business from the Dominican University of California and is certified as both a change management practitioner (APMG International) and a project management professional (Project Management Institute). Her book, The Power of Digital Policy, was published in 2019.


      Instagram plans to start testing the removal of “Like” counts for U.S. users beginning as early as this week, CEO Adam Mosseri announced at a WIRED25 conference in San Francisco last week.

      Like counts will be removed from posts during the test.

      Why we should care

      In the tests that have rolled out so far, Instagram has hidden “Like” counts from public posts in the Feed, on the desktop platform and in profiles. Only the account owner is able to view the number of likes a post gets.

      The possibility that Instagram – a primary platform for influencer marketing – may eliminate “Likes” could disrupt the influencer community, as the social proof metric goes private. Without an outward-facing metric such as “Likes,” influencers and their agencies will need to adapt to demonstrate value to brands. But it could prove to be a positive development, allowing both brands and influencers to shift to deeper engagement and outcome metrics beyond the low-hanging fruit of a “Like.”

      More on the news

      • The Facebook-owned platform first began experimenting with hiding “Like” counts from users earlier this year. In April, an app researcher discovered Instagram was testing removing “Like” counts on posts. At the time, an Instagram spokesperson told TechCrunch it was not a public test, but an internal prototype and that the company was “exploring” new ways to reduce pressure on Instagram.
      • Later in the spring, Instagram formally began testing hidden like counts in Canada, and soon expanded the test to several other countries, including Brazil, Japan and Australia.
      • In September, Facebook confirmed it was experimenting with removing “Like” counts from Facebook posts for users in Australia.



      With browsers cracking down on third-party cookies, how do advertisers deliver personalized digital ad creatives in a privacy- and browser-compliant fashion? Jivox, a company that specializes in dynamic digital advertising solutions, released a new identity technology Monday that it says can do just that.

How it works. The new technology, called Jivox IQiD, takes a page from the hybrid cloud concept to enable brands to collect, control and use data without third-party cookies.

“We have been trying to develop a solution that assumes third-party cookies would go away and that also adheres to the privacy concerns that are the reason that browsers are moving in that direction,” said Diaz Nesamoney, founder and CEO of Jivox, by phone last week. “Our technology allows the brand to collect consented data as the first-party — essentially as the brand’s domain … to collect data to stay in compliance.”

      Hybrid cloud technology separates the code or software from the data, which is owned and safeguarded by the business. Jivox saw that separation as a way to address the third-party cookie problem, said Nesamoney. With Jivox IQiD, collection, storage and use of data resides on the company’s domain, while the personalization software runs in the public cloud, he explained.

      Data is collected as a first-party when a consumer comes to a brand and consents to marketing via their preferences or activity such as searching on the site or buying something. The company explains that the technology “essentially acts like a brand’s CRM system.” The rest of the software that powers real-time personalization runs in the cloud.

      Jivox IQiD is available as part of the Jivox IQ Enterprise version. To use the technology, advertisers allow the data collection piece of the Jivox microservice software to run on their brand domains.

Why we should care. Browsers are increasingly limiting or blocking access to third-party cookies to protect users from having their data collected as they’re followed around the web. Apple Safari’s Intelligent Tracking Prevention (ITP) was first; now Firefox is blocking third-party cookies, and Google Chrome is moving in that direction. Fingerprinting, seen as another kind of silent tracking, is also being blocked. Nesamoney says Jivox IQiD is not a workaround that the browsers will eventually shut down, but a long-term solution that gives brands a privacy-safe way to use their consented data.

Brands can use the data the technology collects in marketing efforts beyond digital display advertising. Jivox supports email and website personalization, and advertisers can use the data to create custom audiences for Facebook campaigns, for example.

      Jivox Senior Vice President of Product and Engineering Shankar Venkataraman will present details about the technology on Tuesday at the IAB Tech Lab’s Innovation Day conference during a session entitled “No Cookies? Use Your First-Party Data to Drive Dynamic creative personalization.”

      About The Author

      Ginny Marvin is Third Door Media’s Editor-in-Chief, running the day to day editorial operations across all publications and overseeing paid media coverage. Ginny Marvin writes about paid digital advertising and analytics news and trends for Search Engine Land, Marketing Land and MarTech Today. With more than 15 years of marketing experience, Ginny has held both in-house and agency management positions. She can be found on Twitter as @ginnymarvin.


Paged.js is an open-source library to paginate content in the browser. Based on the W3C specifications, it’s a sort of polyfill for the Paged Media and Generated Content for Paged Media CSS modules. The project was launched as an open-source, community-driven initiative and is still experimental. The core team behind paged.js includes Adam Hyde, Julie Blanc, Fred Chasen & Julien Taquet.


      Until we have formal accessible documentation for paged.js, here is a list of links for those who would like to start using paged.js:

      You can also find below the features we are supporting right now. This text is an extract from the Editoria book.

      Page rules

The page rules must be set up inside the print media query:

@media print {
  /* all @page rules go here */
}


      The size of the pages in a book can be defined by either width and height (in inches or millimeters) or a paper size such as A5 or Letter. It must be the same for all the pages in the book and will be inferred only from the root @page.

@page {
  size: A5;
}

/* or */

@page {
  size: 140mm 200mm;
}


      The margin command defines the top, bottom, left, and right areas around the page’s content.

@page {
  margin: 1in 2in .5in 2in;
}


      Single pages or groups can be named, for instance as “cover” or “backmatter.” Named pages can have their own, more specific, styles and margins, and even different styles from the main rule.

@page backmatter {
  margin: 20mm 30mm;
  background: yellow;
}

      In HTML, these page groups are defined by adding the page name to a CSS selector.

section.backmatter {
  page: backmatter;
}

      Page selectors

      Blank pages

      The blank selector styles pages that have no content, e.g., pages automatically added to make sure a new chapter begins on the desired left or right page.

@page :blank {
  @top-left { content: none; }
}

      First page and nth page

      There are selectors for styling the first page or a specific page, targeted by its number (named n in the specification).

@page :first {
  background: yellow;
}

@page :nth(5) {
  margin: 2in;
}

      Left and right or recto and verso

      Typically, pages across a spread (a pair of pages) have symmetrical margins and are centered on the gutter. If, however, the inner margin needs to be larger or smaller, the selector to style left and right pages can make that change.

@page :left {
  margin-right: 2in;
}

@page :right {
  margin-left: 2in;
}

      Margin boxes

      The margins of a page are divided into sixteen named boxes, each with its own border, padding, and content area. They’re set within the @page query. A box is named based on its position: for example, @top-left, @bottom-right-corner, or @left-middle (see all rules). By default, the size is determined by the page area. Margin boxes are typically used to display running headers, running footers, page numbers, and other content more likely to be found in a book than on a website. The content of the box is governed by CSS properties.

      To select these margin boxes and add content to them, use the following example:

@page {
  @top-center {
    content: "Moby-Dick";
  }
}

      Generated content

      CSS counters

CSS counters let you count elements within your content. For example, you might want to add a number before each figure caption. To do so, you would reset the counter on the body, increment it any time a caption appears in the content, and display that number in a ::before pseudo-element.

body {
  counter-reset: figureNumber;
}

figcaption {
  counter-increment: figureNumber;
}

figcaption::before {
  content: counter(figureNumber);
}
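If you want the caption to read as a label rather than a bare number, standard CSS generated content lets you concatenate strings around the counter, a small variation on the rule above:

```css
/* Renders captions as "Figure 1: ", "Figure 2: ", and so on */
figcaption::before {
  content: "Figure " counter(figureNumber) ": ";
}
```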

      Page-based counters

      To define page numbers, paged.js uses a CSS counter that gets incremented for each new page.

      To insert a page number on a page or retrieve the total number of pages in a document, the W3C proposes a specific counter named page. The counters declaration must be used within a content property in the margin-boxes declaration. The following example declares the page number in the bottom-left box:

@page {
  @bottom-left {
    content: counter(page);
  }
}

      You can also add a bit of text before the page number:

@page {
  @bottom-left {
    content: "page " counter(page);
  }
}

      To tally the total number of pages in your document, write this:

@page {
  @bottom-left {
    content: counter(pages);
  }
}
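The two counters can also be combined in a single margin box to produce a “page X of Y” footer, using only the standard page and pages counters shown above:

```css
/* Footer such as "page 3 of 120" in the bottom-left margin box */
@page {
  @bottom-left {
    content: "page " counter(page) " of " counter(pages);
  }
}
```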

      Repeated elements on different pages

      Named string

      Named strings are used to create running headers and footers: they copy text for reuse in margin boxes.

First, the text content of the element is cloned into a named string using string-set with a custom identifier (in the code below we call it “title,” but you can name it whatever makes sense as a variable). In the following example, each time a new h1 appears in the HTML, the content of the named string gets updated with the text of that h1:

h1 { string-set: title content(text); }

      Next, the string() function copies the value of a named string to the document, via the content property.

@page {
  @bottom-left {
    content: string(title);
  }
}
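A common pattern combines named strings with the left/right page selectors described earlier, for example to put the book title on verso pages and the current chapter on recto pages. This sketch assumes a second named string, booktitle, has been set from some element in the same way as title:

```css
/* Running headers: book title on left-hand pages,
   current chapter title on right-hand pages */
@page :left {
  @top-left { content: string(booktitle); }
}

@page :right {
  @top-right { content: string(title); }
}
```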

      Running elements

      Running elements are another way to create running headers and footers. Here the content, complete with style and structure, is copied from the text, assigned a custom identifier, and placed inside a margin box. This is useful for formatted text such as a word in italics.

      The element’s position is set:

.title {
  position: running(title);
}

      Then it is placed into a margin box with the element() value via the content property:

@page {
  @top-center {
    content: element(title);
  }
}

      Controlling text fragmentation with page breaks

Sometimes there is a need to define how content gets divided into pages based on markup. To do so, paged media specifications include the break-before, break-inside, and break-after properties.

      break-before adds a page break before the element; break-after adds a page break after the element.

      Here is the list of options:

      • break-before: page pushes the element (and the following content) to the next available page
      • break-before: right pushes the element to the next right page
      • break-before: left pushes the element to the next left page
      • break-before: recto pushes the element to the next recto page
      • break-before: verso pushes the element to the next verso page
      • break-before: avoid ensures that no page break appears between two specified elements

      For example, this sequence will create a page break before each h1 element:

h1 {
  break-before: page;
}

      This code, in contrast, will push the h1 to the next right page, creating a blank page if needed:

h1 {
  break-before: right;
}

      This snippet will keep any HTML element that comes after an h1 on the same page as the h1, moving them both to the next page if necessary.

h1 {
  break-after: avoid;
}

      The last option is the break-inside property, which ensures that the element won’t be separated across multiple pages. If you want to be sure that your block quotes will never be divided, write this:

blockquote {
  break-inside: avoid;
}


Cross-references

To build items such as an index or a table of contents, the export function has to find the pages on which the relevant elements appear inside the book. To do so, paged media specifications include a target-counter property.

For cross-references, links are used that target anchors in the book. In HTML this looks something like the following (the id value here is only an example):

see <a href="#chapter-1">the Title of the chapter</a>

Later in the book, the chapter title will appear with the anchor, set using an id attribute:

<h1 id="chapter-1">Title of the chapter</h1>

      The target-counter property is used in ::before and ::after pseudo-elements and set into the content property. As a page counter, it can include some text:

a::after {
  content: ", page " target-counter(attr(href), page);
}

      In the PDF, this code will be rendered as “see title of the chapter, page 12”.

      There are several ways to extend the rendering of Paged.js. Selecting the best method will depend on how the code will be called and what it needs to access.

      When creating a script or library that is specifically aimed at extending the functionality of paged.js, it is best to use hooks and a handler class.

      Paged.js has various points in the parsing of content, transforming of CSS, rendering, and layout of HTML that you can hook into and make changes to before further code is run.

      A handler is a JavaScript class that defines functions that are called when a hook in Paged.js is ready to defer to your code. All of the core modules for support of paged media specifications and generated content are implemented as handlers. To create your own handler, you extend this same handler class.

class MyHandler extends Paged.Handler {
  constructor(chunker, polisher, caller) {
    super(chunker, polisher, caller);
  }
}

      The handler also exposes the underlying tools for fragmenting text (Chunker) and transforming CSS (Polisher)—see below.

Within this class, you can define methods named after each of the hooks; they will be called when that point in the code is reached. If a hook method returns a Promise, Paged.js will await it before running the code that follows.

class MyHandler extends Paged.Handler {
  constructor(chunker, polisher, caller) {
    super(chunker, polisher, caller);
  }

  afterPageLayout(pageFragment, page, breakToken) {
    console.log(pageFragment, page, breakToken);
  }
}
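To make the flow concrete, here is a minimal, self-contained sketch of the handler/hook pattern in plain JavaScript. It is an illustration of the mechanism only; the class names mirror Paged.js, but the bodies are stand-ins, not the library's actual code:

```javascript
// Minimal sketch of the handler/hook pattern; class names mirror
// Paged.js but the implementations are illustrative stand-ins.
class Handler {
  constructor(chunker) {
    this.chunker = chunker;
  }
}

class Chunker {
  constructor() {
    this.handlers = [];
  }

  register(HandlerClass) {
    this.handlers.push(new HandlerClass(this));
  }

  // Call a named hook on every registered handler. Awaiting the
  // return value means an async hook delays the next step.
  async hook(name, ...args) {
    for (const handler of this.handlers) {
      if (typeof handler[name] === "function") {
        await handler[name](...args);
      }
    }
  }

  async render(content) {
    await this.hook("beforeParsed", content);
    const pages = [content]; // stand-in for real pagination
    await this.hook("afterRendered", pages);
    return pages;
  }
}

class LoggingHandler extends Handler {
  beforeParsed(content) {
    this.parsedLength = content.length;
  }

  afterRendered(pages) {
    this.pageCount = pages.length;
  }
}

const chunker = new Chunker();
chunker.register(LoggingHandler);
chunker.render("Moby-Dick").then((pages) => {
  console.log("rendered", pages.length, "page(s)");
});
```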

      Paged.js contains the following asynchronous hooks:


• beforeParsed(content) runs on content before it is parsed and given IDs
      • afterParsed(parsed) runs after the content has been parsed but before rendering has started
      • beforePageLayout(page) runs when a new page has been created
      • afterPageLayout(pageElement, page, breakToken) runs after a single page has gone through layout, and allows adjusting the breakToken
      • afterRendered(pages) runs after all pages have finished rendering


The following hooks run while the style sheets are parsed and transformed:

• beforeTreeParse(text, sheet) runs on the text of the style sheet
• onUrl(urlNode) runs any time a CSS URL is parsed
      • onAtPage(atPageNode) runs any time a CSS @page is parsed
      • onRule(ruleNode) runs any time a CSS rule is parsed
      • onDeclaration(declarationNode, ruleNode) runs any time a CSS declaration is parsed
      • onContent(contentNode, declarationNode, ruleNode) runs any time a CSS content declaration is parsed

Finally, the new handler needs to be registered in order to be used:

Paged.registerHandlers(MyHandler);

      This can be registered anytime before the preview has started and will persist through any instances of Paged.Previewer that are created.

      If a JavaScript library, such as MathJax, needs access to the content before it is paginated, you can delay pagination until that script has completed its work. This will give the library full access to the content of the book but has the disadvantage of needing to render the entire book before rendering each page, which can cause a significant delay.

Given that the polyfill removes the page contents as soon as it starts, adding a window.PagedConfig object allows you to pass a Promise; rendering is delayed until that Promise is resolved.

let promise = new Promise((resolve, reject) => {
  // call resolve() once the preparatory work is done
});

window.PagedConfig = {
  before: () => {
    return promise;
  }
};

It is also possible to delay rendering until it is explicitly triggered, by passing auto: false.

window.PagedConfig = {
  auto: false
};

      When the Previewer class is used directly, the preview() method can be called at any point that is appropriate.


      Here’s how to host your office’s favorite weekly meeting

      Shirley Xiao

      Every Thursday at lunch in Indeed’s San Francisco office, I bring my design teammates together to let our imaginations run wild. We use this time to teach one another arts and crafts, watch design videos, or partake in some other creative endeavor. I call it Design Lunch Hour. Attendance is optional, but it’s become a quintessential part of our design team’s culture. And it’s played a big part in fostering our creative community.

      Design Lunch Hour is an opportunity for us to get inspired, share our love of creating, and take a break from the daily grind. Read on to learn how it started and get some ideas for launching one in your office.

      There’s so much to love about Design Lunch Hour. For one, it’s a chance for each of us to share our passions and skills with a community of likeminded people. As a self-taught designer who never went to art school, I haven’t always had that. Before joining a team of designers at Indeed, I seldom found people who shared my excitement over a design article, the arrival of SF Design Week, or a talk by one of my design heroes.

But that didn’t stop me from looking for creative things to do. When I worked at startups as a solo designer, I found every opportunity to letter on the job. I would also spend hours after work decorating the office, doodling on everything in my path.

      So boy was I excited when I joined a whole team of designers at Indeed.

My first step was starting a Slack channel called #designreads. Immediately the reactions and comments started pouring in. Seeing people geek out about design articles with me, I knew I’d finally found a community that loved the things I did.

Soon after, I discovered that our Austin office hosted a weekly event where designers watched design videos together. I was determined to replicate it in San Francisco, and on October 25, 2017, Design Lunch Hour was born. We kicked off with the famous Netflix series Abstract: the Art of Design, which I’d never found time to watch. We were floored by the talent of the great illustrator Christoph Niemann.

      Christoph Niemann sits at a drawing desk in a Berlin park with a film crew setting up behind him.


      Christoph Niemann shares his creative process in Abstract: the Art of Design. Courtesy of Kase Film, Netflix

We slowly progressed from design-specific videos to all forms of art. We touched on cooking (Chef’s Table), snowboarding (Yearning for Turning), and even tidying (Marie Kondo, anyone?). Over time, Design Lunch Hour took on the feel of an art party. Organically, people started to take turns suggesting and leading each week’s activity. We tried therapeutic watercoloring, still-life drawing sessions, knot-tying, stamp carving. Thanks to my fellow creatives, my view of the world of art and design continues to expand.

      Over the past two years, Indeed teams from Austin, Seattle, and Tokyo have all asked about our activities. So I’d like to share my ten favorites and hopefully spark some ideas.

      A surrealist favorite, this fun drawing game lets a group collaborate on a drawing without seeing the whole picture until it’s finished. Grab a stack of printer paper and fold each sheet into thirds. Get into teams of three and have each collaborator draw one part of a body — the head, torso, or legs — one after the other. See instructions.

      Pen sketch of a person walking while talking on a cell phone


      This exercise adapted from the IDEO book Thoughtless Acts? asks participants to go out onto the street. In small groups you sit and watch people for 30 minutes. Sketch, photograph, or note interesting behaviors you see people doing. Then regroup and share. You’ll be fascinated by what you find.

      Selection of still-life sketches taped next to each other on an office whiteboard


      Grab any objects available, set them in the middle of the table, and have at it! Who knew we had so many talented artists?

      Closeup of a person’s hands painting bananas and a pear in watercolors


      A perennial favorite, watercoloring is by far the most meditative, relaxing exercise we do. That’s why we keep returning to it. All you need is some paint, brushes, foil, and paper. Remember not to rinse your brushes in your drinking water!

      Gif of coworkers in a conference room practicing hand-lettering by following along with an instructional video


There are so many hand-lettering tutorials out there that it’s easy to get started: print a few worksheets and bring in calligraphy brushes.

      For some advanced lettering, take a look at the weekly creative prompts artist Lauren Hom sends out. We chose week 47, make a card expressing love for yourself, and indulged in some wonderful self-affirmations.

      Montage of people in a conference room applying and showing off temporary tattoos


For those who have always been tempted to get a tattoo, Inkbox is the perfect way to try out different designs. Like black henna, it lasts on your skin for up to two weeks. I brought two boxes to work and tried my hand at becoming a tattoo artist. This was by far our most popular event.

      When you’re always designing on the screen, it’s refreshing to just make things with your hands. Grab some Play-Doh, or in our case EZ Shape Modeling Clay, and let your imagination run wild.

      Closeup photo of a hand-lettered father’s day card


      It’s easy to get caught up in work and forget the little things. It had been ages since I’d designed a personal card for my dad. When a fellow designer brought in postcards, stamps, and markers, I was grateful to have time at work to show some love and appreciation for family.

      If you’re feeling inspired, I leave you with three tips I learned along the way for starting your own Design Lunch Hour:

      Try everything

      Why not? Here’s where to test new skills, subjects, mediums, ideas, and even conference rooms.

      Include everyone

      We often invite visitors from other offices to join us, and we’ve started to see engineers and others from teams outside of design want to join in. Use polls to gauge interest for different activities and empower anyone with an idea to run their own session.

      Show off a little

      Hang drawings. Post photos. Share what you’ve created, learned, and watched on social media. Let the world see how amazing, talented, and multidimensional everyone on your team is.

      Marker sketch of a llama-like creature with a long horn


      I think all creatives find activities of their own to stay inspired. It’s how we maintain a fresh eye and keep our problem-solving skills sharp so we can do our best work. Tools and technology are wonderful things but they can’t give you the creative energy a band of open-minded souls will. So give it a try.

      • List of creative exercises for creative teams from the Foursquare design team
      • 28 to Make, 28 daily creative project ideas to get you in the habit of making
      • Draft and Draw, a monthly meeting for drinkers with a drawing problem, featuring a “creative professional who shares the story of coping with their habit.”

      Shoutout to the SF design team, thanks for being the best community a designer can ask for. ❤️


      Even if you don’t use Firefox as your web browser of choice, there’s no denying that it has profoundly impacted the Web over its lifetime. Just how long has its lifetime been, you ask? Well, as of today, Firefox is 15 years old.

Firefox 1.0 was released on this day (November 9th) in 2004, two years after the first public builds became available under the name “Phoenix.” The browser’s lineage dates back much farther than that: Firefox began as an open-source continuation of Netscape Navigator, which had its initial release in 1994.

      A lot has changed in the last 15 years.

      What hasn’t is our commitment to creating an open, diverse and secure internet.

      — Firefox 🔥 (@firefox) November 8, 2019

      🎊 Happy 15h birthday @firefox 🎊

      Firefox 1.0 was released on November 9th, 2004. The scrappy alternative to Internet Explorer 6 (☠️) had revolutionary features such as tabbed browsing (!), popup blocking, themes, and extensions. 🔥

      It changed the world. Really, it did 👏👏

      — Changelog (@changelog) November 9, 2019

      Firefox has seen accelerated development over the past few years, especially with ‘Project Quantum,’ the ongoing effort to rewrite parts of the engine in the super-fast Rust programming language. The Android version is undergoing a complete transformation at the moment, currently available as ‘Firefox Preview‘ on the Play Store.

      Happy birthday Firefox, and here’s to the next 15 years! 🎂



Designing bespoke, standardised visual systems for the public that serve hundreds of thousands of commuters every day is an incredibly exciting brief to receive as a designer. For the Copenhagen Metro’s Cityringen, a 17-station loop line for the Danish capital that opened on 29 September this year, Kontrapunkt’s two heads of type design, Torsten Lindsø Andersen and Rasmus Michaëlis, were tasked with designing one key visual element for the line: a custom dot typeface for the electronic in-train displays.

The pair started working together after meeting at The Royal Danish Academy of Fine Arts, School of Design in 2014. While running a small studio called ABC for the following three years, the two frequently collaborated on custom typeface designs alongside Kontrapunkt and its founder Bo Linnemann, who eventually invited the duo to join the company, where they later became joint heads of type design. “We only do bespoke type design, so on a practical level, this means that we are always designing for and together with others,” the pair tells It’s Nice That. “A bespoke typeface is the perfect polyglot. It can cross borders and bridge cultures. It speaks multiple languages, but its voice and personality stay the same.”

The in-train displays aren’t Kontrapunkt’s first involvement with the Cityringen – the studio also redesigned the service map to fit seamlessly with existing lines, as well as the signage for the new line. Torsten and Rasmus, however, were given the unique challenge of designing for the large using only the small. “Typically, when we draw letters, we work with a vertical grid height of at least 1000 units. In this case, we only had 24!” they explain.

The roots of the typeface reach back to the original plans for the metro – design started seven years ago, while the tunnels were being dug and the stations were still under construction. For the original identity of Copenhagen’s initial metro lines, a 14-dot version of the highly legible sans serif Frutiger was developed but never used, and part of Torsten and Rasmus’ challenge was to update it for the new trains’ 16-dot displays while solving the shortcomings of the old design.

There were several technical aspects of the brief that the designers had to fulfil. Firstly, the font had to fill a 24-by-192 LED dot-matrix display. Secondly, it had to be an interpretation of the Frutiger Bold used on the Metro’s physical signs, while also fitting “Poul Henningsens Plads” (the longest station name on the line) without ticker-text. “When you design letters on such a low-resolution grid you quickly realise how difficult it is to draw curvatures and angled strokes,” Torsten and Rasmus explain. “Take the lowercase ‘k’ for example – having two diagonal strokes within the x-height is not easy to get pretty. Either it becomes too heavy, too light, or too long.”

      These specific technical challenges pushed them to think about typefaces in a different way, utilising techniques and concepts that they wouldn’t have focused on when designing a standard typeface. “We went to great effort to keep the counters as open as possible as well as ensuring a harmonic stroke-width and stem-width and by all means, avoid what we call ‘orphan LEDs’,” the designers say. Part of the goal is to create the most distinctive shape that does not muddle the LEDs at a distance, “especially when glow or light diffusion is a factor as well.”

The pair, extremely motivated by the democratisation of type design, are “on a never-ending quest to fight typographic aesthetic monoculture.” What this means, of course, is that typeface design should never be boring, too self-referential or insular. With about three typeface projects running simultaneously at Kontrapunkt, the two designers are not shy about taking on new challenges like this brief for Copenhagen’s Cityringen and its public.


      In the Latin world there have been many endeavours to anatomize the shapes of letters, from Albrecht Dürer’s geometry of capitals in the 16th century to Stephen Coles’s excellent book, The Anatomy of Type, as recently as 2012.

      While Greek uppercase, at least as far as fonts are concerned, can be analysed on the same basis as Latin, the lowercase has often been regarded as somewhat ‘exotic’. Thus Western type designers and scholars have shied away from anatomical terms in favour of labelling entire models after their creators without significant analysis of their construction and aesthetic relationships.

There are many professions that need terminology for everyday purposes. We can identify at least these: typeface designers, typographic designers, calligraphers, teachers of handwriting for children, design teachers, printers, graphic designers, palaeographers, etc. The difficulty has been getting all these modern groups to agree to use a standard set of terms. Almost all printing terms and glossaries of typographic terms in Greek today were established by non-Greeks, outside of Greece, and have simply been translated into Greek.*

      Glossary of Terms, Greek Letters: From Tablets to Pixels, Michael S. Macrakis (Oak Knoll Press, 1996).


      In the 1990s there were two significant conferences about Greek typography: in 1992 the “DIDOT Program” in Thessaloniki, and in 1995 “Greek letters: from tablets to pixels” in Athens. In both conferences there were attempts to establish a typographic glossary in Greek. The latter gave us, in the appendix of the proceedings publication, a “glossary of terms” with the following note of caution:

      “It must be borne in mind that typographic terms have been introduced and used by practitioners with little if any theoretical background. As a consequence the terms that have survived are often of unknown origin (as is ‘serif’, say) and if the origin is known the evolution of the term in use bears little resemblance to its root (as with ‘punch’). One might be tempted to introduce modern terminology that is consistent and accurate. However, the three dimensional character has been supplanted by the two dimensional pixelography. Many terms survive, however, and we list a few that are encountered in the text.”

Around 2006 there was a concerted effort to agree on terminology, especially on the internet forum Hegrade. A significant number of terms commonly used in English were presented alongside suggestions for Greek terms. The discussion petered out with, for example, four terms for serif** – some everyday, some arcane – and no agreement as to which was to be recommended. This is the situation that remains today.

      So I will list here the terms that seem to be in common use in graphic design schools, with the caution that these terms might never have been used by printers or calligraphers.