The MediaStream Recording API makes it easy to record audio and/or video streams. When used with MediaDevices.getUserMedia(), it provides an easy way to record media from the user’s input devices and instantly use the result in web apps. This article shows how to use these technologies to create a fun dictaphone app.

A sample application: Web Dictaphone

An image of the Web dictaphone sample app - a sine wave sound visualisation, then record and stop buttons, then an audio jukebox of recorded tracks that can be played back.

To demonstrate basic usage of the MediaRecorder API, we have built a web-based dictaphone. It allows you to record snippets of audio and then play them back. It even gives you a visualisation of your device’s sound input, using the Web Audio API. We’ll just concentrate on the recording and playback functionality in this article, for brevity’s sake.

You can see this demo running live, or grab the source code on GitHub. This has pretty good support on modern desktop browsers, but pretty patchy support on mobile browsers currently.

Basic app setup

To grab the media stream we want to capture, we use getUserMedia(). We then use the MediaRecorder API to record the stream, and output each recorded snippet into the source of a generated <audio> element so it can be played back.

We’ll first declare some variables for the record and stop buttons, and the container that will hold the generated audio players:

const record = document.querySelector('.record');
const stop = document.querySelector('.stop');
const soundClips = document.querySelector('.sound-clips');

Next, we set up the basic getUserMedia structure:

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
   console.log('getUserMedia supported.');
   navigator.mediaDevices.getUserMedia(
      // constraints - only audio needed for this app
      {
         audio: true
      })

      // Success callback
      .then(function(stream) {

      })

      // Error callback
      .catch(function(err) {
         console.log('The following getUserMedia error occurred: ' + err);
      });
} else {
   console.log('getUserMedia not supported on your browser!');
}

The whole thing is wrapped in a test that checks whether getUserMedia is supported before running anything else. Next, we call getUserMedia() and inside it define:

  • The constraints: Only audio is to be captured for our dictaphone.
  • The success callback: This code is run once the getUserMedia call has been completed successfully.
  • The error/failure callback: This code is run if the getUserMedia call fails for whatever reason.

Note: All of the code below is found inside the getUserMedia success callback in the finished version.

Capturing the media stream

Once getUserMedia has created a media stream successfully, you create a new MediaRecorder instance with the MediaRecorder() constructor and pass it the stream directly. This is your entry point into using the MediaRecorder API — the stream is now ready to be captured into a Blob, in the default encoding format of your browser.

const mediaRecorder = new MediaRecorder(stream);

There are a series of methods available in the MediaRecorder interface that allow you to control recording of the media stream; in Web Dictaphone we just make use of two, and listen to some events. First of all, MediaRecorder.start() is used to start recording the stream once the record button is pressed:

record.onclick = function() {
  mediaRecorder.start();
  console.log("recorder started");
  record.style.background = "red";
  record.style.color = "black";
}

When the MediaRecorder is recording, the MediaRecorder.state property will return a value of “recording”.

As recording progresses, we need to collect the audio data. We register an event handler to do this using mediaRecorder.ondataavailable:

let chunks = [];

mediaRecorder.ondataavailable = function(e) {
  chunks.push(e.data);
}
Last, we use the MediaRecorder.stop() method to stop the recording when the stop button is pressed, and finalize the Blob ready for use somewhere else in our application.

stop.onclick = function() {
  mediaRecorder.stop();
  console.log("recorder stopped");
  record.style.background = "";
  record.style.color = "";
}

Note that the recording may also stop naturally if the media stream ends (e.g. if you were grabbing a song track and the track ended, or the user stopped sharing their microphone).

Grabbing and using the blob

When recording has stopped, the state property returns a value of “inactive”, and a stop event is fired. We register an event handler for this using mediaRecorder.onstop, and construct our blob there from all the chunks we have received:

mediaRecorder.onstop = function(e) {
  console.log("recorder stopped");

  const clipName = prompt('Enter a name for your sound clip');

  const clipContainer = document.createElement('article');
  const clipLabel = document.createElement('p');
  const audio = document.createElement('audio');
  const deleteButton = document.createElement('button');

  clipContainer.classList.add('clip');
  audio.setAttribute('controls', '');
  deleteButton.innerHTML = "Delete";
  clipLabel.innerHTML = clipName;

  clipContainer.appendChild(audio);
  clipContainer.appendChild(clipLabel);
  clipContainer.appendChild(deleteButton);
  soundClips.appendChild(clipContainer);
  const blob = new Blob(chunks, { 'type' : 'audio/ogg; codecs=opus' });
  chunks = [];
  const audioURL = window.URL.createObjectURL(blob);
  audio.src = audioURL;

  deleteButton.onclick = function(e) {
    let evtTgt = e.target;
    evtTgt.parentNode.parentNode.removeChild(evtTgt.parentNode);
  }
}

Let’s go through the above code and look at what’s happening.

First, we display a prompt asking the user to name their clip.

Next, we create an HTML structure like the following, inserting it into our clip container, which is an <article> element:

<article class="clip">
  <audio controls></audio>
  <p>your clip name</p>
  <button>Delete</button>
</article>

After that, we create a combined Blob out of the recorded audio chunks, and create an object URL pointing to it, using window.URL.createObjectURL(blob). We then set the value of the <audio> element’s src attribute to the object URL, so that when the play button is pressed on the audio player, it will play the Blob.

Finally, we set an onclick handler on the delete button to be a function that deletes the whole clip HTML structure.

So that’s basically it — we have a rough and ready dictaphone. Have fun recording those Christmas jingles! As a reminder, you can find the source code, and see it running live, on the MDN GitHub.

This article is based on Using the MediaStream Recording API by Mozilla Contributors, and is licensed under CC-BY-SA 2.5.


Tutorial: Physics with Cannon.js and Three.js


Yeah, shaders are good but have you ever heard of physics?

Nowadays, modern browsers are able to run an entire game in 2D or 3D, which means we can push the boundaries of modern web experiences to a more engaging level. The recent portfolio of Bruno Simon, in which you can drive a toy car, is the perfect example of that new kind of playful experience. He used Cannon.js and Three.js, but there are other physics libraries, like Ammo.js or Oimo.js for 3D, or Matter.js for 2D.

In this tutorial, we’ll see how to use Cannon.js as a physics engine and render it with Three.js in a list of elements within the DOM. I’ll assume you are comfortable with Three.js and know how to set up a complete scene.

Prepare the DOM

This part is optional but I like to manage my JS with HTML or CSS. We just need the list of elements in our nav:
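As a minimal sketch, the nav could look like this (the link labels are placeholders; only the ".mainNav" class and the anchors matter, since our JavaScript will query ".mainNav a"):

```html
<nav class="mainNav">
  <ul>
    <li><a href="#">Home</a></li>
    <li><a href="#">Studio</a></li>
    <li><a href="#">Contact</a></li>
  </ul>
</nav>
```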

Prepare the scene

Let’s have a look at the important bits. In my class, I call a “setup” method to init all my components. The other method to check out is “setCamera”, in which I use an Orthographic Camera with a distance of 15. The distance is important because all the values we’ll use later are based on this scale. You don’t want to work with numbers that are too big, in order to keep things simple.

// Scene.js

import Menu from "./Menu";

// ...

export default class Scene {
    // ...
    setup() {
        // Set Three components
        this.scene = new THREE.Scene()
        this.scene.fog = new THREE.Fog(0x202533, -1, 100)

        this.clock = new THREE.Clock()

        // Set options of our scene
        this.setCamera()
        this.addObjects()

        this.renderer.setAnimationLoop(() => { this.draw() })
    }

    setCamera() {
        const aspect = window.innerWidth / window.innerHeight
        const distance = 15

        this.camera = new THREE.OrthographicCamera(-distance * aspect, distance * aspect, distance, -distance, -1, 100)

        this.camera.position.set(-10, 10, 10)
        this.camera.lookAt(new THREE.Vector3())
    }

    draw() {
        this.renderer.render(this.scene, this.camera)
    }

    addObjects() {
        this.menu = new Menu(this.scene)
    }

    // ...
}

Create the visible menu

Basically, we will parse all the elements of our menu and create a group, in which we will instantiate a new mesh for each letter at the origin position. As we’ll see later, we’ll manage the position and rotation of each mesh based on its rigid body.

If you don’t know how creating text in Three.js works, I encourage you to read the documentation. Moreover, if you want to use a custom font, you should check out facetype.js.

In my case, I’m loading a Typeface JSON file.

// Menu.js

export default class Menu {
  constructor(scene) {
    // DOM elements
    this.$navItems = document.querySelectorAll(".mainNav a");

    // Three components
    this.scene = scene;
    this.loader = new THREE.FontLoader();

    // Constants
    this.words = [];

    // fontURL points to our Typeface JSON file
    this.loader.load(fontURL, f => {
      this.setup(f);
    });
  }

  setup(f) {
    // These options give us a more candy-ish render on the font
    const fontOption = {
      font: f,
      size: 3,
      height: 0.4,
      curveSegments: 24,
      bevelEnabled: true,
      bevelThickness: 0.9,
      bevelSize: 0.3,
      bevelOffset: 0,
      bevelSegments: 10
    };

    // For each element in the menu...
    Array.from(this.$navItems).reverse().forEach(($item, i) => {
      // ... get the text ...
      const { innerText } = $item;

      const words = new THREE.Group();

      // ... and parse each letter to generate a mesh
      Array.from(innerText).forEach((letter, j) => {
        const material = new THREE.MeshPhongMaterial({ color: 0x97df5e });
        const geometry = new THREE.TextBufferGeometry(letter, fontOption);

        const mesh = new THREE.Mesh(geometry, material);
        words.add(mesh);
      });

      this.words.push(words);
      this.scene.add(words);
    });
  }
}


Building a physical world

Cannon.js piggybacks on the render loop of Three.js: between each frame we step the physics world, which calculates the forces that rigid bodies sustain. We start by setting a global force you probably already know: gravity.

// Scene.js

import C from 'cannon'

// …

setup() {
    // Init Physics world
    this.world = new C.World()
    this.world.gravity.set(0, -50, 0)

    // …
}

// …

addObjects() {
    // We now need to pass the world of physics as an argument
    this.menu = new Menu(this.scene, this.world);
}

draw() {
    // Call our method to update the physics
    this.updatePhysics()
    this.renderer.render(this.scene, this.camera)
}

updatePhysics() {
    // We need this to synchronize the Three.js meshes and the Cannon.js rigid bodies
    this.menu.update()

    // As simple as that!
    this.world.step(1 / 60);
}

// …

As you can see, we set a gravity of -50 on the Y-axis. It means that on every frame, all our bodies will be pulled downwards until they encounter another body or the floor. Notice that if we change the scale of our elements or the distance of our camera, we also need to adjust the gravity value.
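To build an intuition for what the physics step does with that gravity value, here is a standalone sketch (plain numbers, no Cannon.js) of the simplified integration that happens each frame:

```javascript
// Simplified (semi-implicit Euler) version of what a physics step does:
// gravity changes velocity, velocity changes position, once per frame.
const gravity = -50;   // same value we passed to world.gravity
const dt = 1 / 60;     // our fixed time step

const body = { y: 15, vy: 0 };

function step(b) {
  b.vy += gravity * dt; // accumulate gravity into velocity
  b.y += b.vy * dt;     // move the body
}

step(body); // after one frame, the body has started falling
```

This also makes the scale remark concrete: with a scene only ~15 units tall, a gravity of -9.81 would look like slow motion, hence the larger value.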

Rigid bodies

Rigid bodies are simpler, invisible shapes used to represent our meshes in the physical world. They are usually much more elementary than the rendered mesh, because the fewer vertices we have to calculate with, the faster it is.

Note that “soft bodies” also exist. These are bodies that undergo distortion of their mesh because of other forces (like other objects pushing them, or simply gravity affecting them).

For our purpose, we will create a simple box for each letter, matching its size, and place it in the correct position.

There are a lot of things to update in Menu.js so let’s look at every part.

First, we need two more constants:

// Menu.js

// It will calculate the Y offset between each element.
const margin = 6;
// And this constant is to keep the same total mass on each word. We don't want a small word to be lighter than the others. 
const totalMass = 1;

The totalMass affects the friction on the ground and the force we’ll apply later. At this moment, “1” is enough.

// …

export default class Menu {
    constructor(scene, world) {
        // …
        this.world = world
        this.offset = this.$navItems.length * margin * 0.5;
    }

    setup(f) {
        // …
        Array.from(this.$navItems).reverse().forEach(($item, i) => {
            // …
            words.letterOff = 0;

            Array.from(innerText).forEach((letter, j) => {
                const material = new THREE.MeshPhongMaterial({ color: 0x97df5e });
                const geometry = new THREE.TextBufferGeometry(letter, fontOption);

                geometry.computeBoundingBox();
                geometry.computeBoundingSphere();

                const mesh = new THREE.Mesh(geometry, material);
                // Get the size of our entire mesh
                mesh.size = mesh.geometry.boundingBox.getSize(new THREE.Vector3());

                // We'll use this accumulator to get the offset of each letter. Notice that this is not perfect because each character of each font has specific kerning.
                words.letterOff += mesh.size.x;

                // Create the shape of our letter
                // Note that we need to scale down our geometry because of the setup of Cannon.js' Box class
                const box = new C.Box(new C.Vec3().copy(mesh.size).scale(0.5));

                // Attach the body directly to the mesh
                mesh.body = new C.Body({
                    // We divide the totalMass by the length of the string to have a common weight for each word.
                    mass: totalMass / innerText.length,
                    position: new C.Vec3(words.letterOff, this.getOffsetY(i), 0)
                });

                // Add the shape to the body and offset it to match the center of our mesh
                const { center } = mesh.geometry.boundingSphere;
                mesh.body.addShape(box, new C.Vec3(center.x, center.y, center.z));
                // Add the body to our world
                this.world.addBody(mesh.body);

                words.add(mesh);
            });

            // Recenter each body based on the whole string.
            words.children.forEach(letter => {
                letter.body.position.x -= letter.size.x + words.letterOff * 0.5;
            });

            // Same as before
            this.words.push(words);
            this.scene.add(words);
        });
    }

    // Function that returns the exact offset to center our menu in the scene
    getOffsetY(i) {
        return (this.$navItems.length - i - 1) * margin - this.offset;
    }

    // ...
}


You should have your menu centered in your scene, falling to the infinite and beyond. Let’s create the ground of each element of our menu in our words loop:

// …

words.ground = new C.Body({
    mass: 0,
    shape: new C.Box(new C.Vec3(50, 0.1, 50)),
    position: new C.Vec3(0, i * margin - this.offset, 0)
});
this.world.addBody(words.ground);

// … 

A shape called “Plane” exists in Cannon. It represents a mathematical plane, facing up the Z-axis and usually used as ground. Unfortunately, it doesn’t work with superposed grounds. Using a box is probably the easiest way to make the ground in this case.

Interaction with the physical world

We have an entire world of physics beneath our fingers, but how do we interact with it?

We calculate the mouse position and, on each click, cast a ray (raycaster) from our camera. It returns the objects the ray passes through, with more information, like the contact point but also the face and its normal.

Normals are vectors perpendicular to each vertex and face of a mesh:

We will get the clicked face, take its normal, reverse it and multiply it by a constant we have defined. Finally, we’ll apply this vector to our clicked body to give it an impulse.
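Stripped of the Three.js vector classes, that computation is just “negate the normal and scale it”. A standalone sketch, using the same force constant of 25 we define for the click:

```javascript
const force = 25;

// Reverse the clicked face's normal and scale it by our force constant.
function impulseFromNormal(normal) {
  return {
    x: -normal.x * force,
    y: -normal.y * force,
    z: -normal.z * force
  };
}

// Clicking a face whose normal points towards the camera (+Z)
// pushes the letter away from us, along -Z.
const impulse = impulseFromNormal({ x: 0, y: 0, z: 1 });
```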

To make it easier to understand and read, we will pass a 3rd argument to our menu, the camera.

// Scene.js
this.menu = new Menu(this.scene, this.world, this.camera);

// Menu.js
// A new constant for our global force on click
const force = 25;

constructor(scene, world, camera) {
    // ...
    this.camera = camera;

    this.mouse = new THREE.Vector2();
    this.raycaster = new THREE.Raycaster();

    // Bind events
    document.addEventListener("click", () => { this.onClick(); });
    window.addEventListener("mousemove", e => { this.onMouseMove(e); });
}

onMouseMove(event) {
    // We set the normalized coordinate of the mouse
    this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}

onClick() {
    // update the picking ray with the camera and mouse position
    this.raycaster.setFromCamera(this.mouse, this.camera);

    // calculate objects intersecting the picking ray
    // It will return an array with intersecting objects
    const intersects = this.raycaster.intersectObjects(this.scene.children, true);

    if (intersects.length > 0) {
        const obj = intersects[0];
        const { object, face } = obj;

        if (!object.isMesh) return;

        const impulse = new THREE.Vector3().copy(face.normal).negate().multiplyScalar(force);

        this.words.forEach((word, i) => {
            word.children.forEach(letter => {
                const { body } = letter;

                if (letter !== object) return;

                // We apply the vector 'impulse' on the base of our body
                body.applyLocalImpulse(impulse, new C.Vec3());
            });
        });
    }
}

Constraints and connections

As you can see, at the moment you can punch each letter like the superman or superwoman you are. But even if this already looks cool, we can still do better by connecting the letters to one another. In Cannon, these connections are called constraints. This is probably the most satisfying part of using physics.

// Menu.js

setup() {
    // At the end of this method
    this.setConstraints()
}

setConstraints() {
    this.words.forEach(word => {
        for (let i = 0; i < word.children.length; i++) {
            // We get the current letter and the next letter (if there is one)
            const letter = word.children[i];
            const nextLetter =
                i === word.children.length - 1 ? null : word.children[i + 1];

            if (!nextLetter) continue;

            // I chose ConeTwistConstraint because it's more rigid than other constraints and it works well for my purpose
            const c = new C.ConeTwistConstraint(letter.body, nextLetter.body, {
                pivotA: new C.Vec3(letter.size.x, 0, 0),
                pivotB: new C.Vec3(0, 0, 0)
            });

            // Optional, but it gives us a more realistic render in my opinion
            c.collideConnected = true;

            this.world.addConstraint(c);
        }
    });
}

To correctly explain how these pivots work, check out the following figure:

(letter.size.x, 0, 0) is the origin of the next letter.

Remove the sandpaper on the floor

As you have probably noticed, it seems like our ground is made of sandpaper. That’s something we can change. In Cannon, there are materials, just like in Three, except that these materials are physics-based. Basically, a material defines the friction and the restitution of a body. Are our letters made of rock, or rubber? Or are they maybe slippery?

Moreover, we can define the contact material. It means that if I wanted my letters to be slippery against each other but bouncy against the ground, I could do that. In our case, we want a letter to slide when we punch it.

// In the beginning of my setup method I declare these
const groundMat = new C.Material();
const letterMat = new C.Material();

const contactMaterial = new C.ContactMaterial(groundMat, letterMat, {
    friction: 0.01
});

this.world.addContactMaterial(contactMaterial);
Then we set the materials to their respective bodies:

// ...
words.ground = new C.Body({
    mass: 0,
    shape: new C.Box(new C.Vec3(50, 0.1, 50)),
    position: new C.Vec3(0, i * margin - this.offset, 0),
    material: groundMat
});
// ...
mesh.body = new C.Body({
    mass: totalMass / innerText.length,
    position: new C.Vec3(words.letterOff, this.getOffsetY(i), 0),
    material: letterMat
});
// ...

Tada! You can push it like the Rocky you are.

Final words

I hope you have enjoyed this tutorial! I have the feeling that we’ve reached the point where we can push interfaces to behave more realistically and be more playful and enjoyable. Today we’ve explored a physics-powered menu that reacts to forces using Cannon.js and Three.js. We can also think of other use cases, like images that behave like cloth and get distorted by a click or similar.

Cannon.js is very powerful. I encourage you to check out all the examples, share, comment and give some love and don’t forget to check out all the demos!


Building a search engine from scratch

A whirlwind tour of the big ideas powering our web search

December 6th, 2019

The previous blog post in this series explored our journey so far in building an independent, alternative search engine. If you haven’t read it yet, we would highly recommend checking it out first!

It is no secret that Google search is one of the most lucrative businesses on the planet. With quarterly revenues of Alphabet Inc. exceeding $40 Billion[1] and a big portion of that driven by the advertising revenue on Google’s search properties, it might be a little surprising to see the lack of competition to Google in this area[2]. We at Cliqz believe that this is partly due to the web search bootstrapping problem: the entry barriers in this field are so massive that the biggest, most successful companies in the world with the resources to tackle the problem shy away from it. This post attempts to detail the bootstrapping problem and explain the Cliqz approach to overcoming it. But let us first start by defining the search problem.

The expectation for a modern web search engine is to be able to answer any user question with the most relevant documents that exist for the topic on the internet. The search engine is also expected to be blazingly fast, but we can ignore that for the time being. At the risk of gross oversimplification, we can define the web search task as computing the content match of each candidate document with respect to the user question (query), computing the current popularity of the document and combining these scores with some heuristic.

The content match score measures how well a given document matches a given query. This could be as simple as an exact keyword match, where the score is proportional to the number of query words present in the document:

query avengers endgame
document avengers endgame imdb

If we could score all our documents this way, filter the ones that contain all the query words and sort them based on some popularity measure, we would have a functioning, albeit toy, search engine. Let us look at the challenges involved in building a system capable of handling just the exact keyword match scenario at a web scale, which is a bare minimum requirement of a modern search engine.
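As a toy illustration (not actual Cliqz code), such an exact keyword match score could simply be the fraction of query words found in the document:

```javascript
// Toy content-match score: fraction of query words present in the document.
function keywordMatch(query, doc) {
  const docWords = new Set(doc.toLowerCase().split(/\s+/));
  const queryWords = query.toLowerCase().split(/\s+/);
  const hits = queryWords.filter(w => docWords.has(w)).length;
  return hits / queryWords.length;
}

keywordMatch("avengers endgame", "avengers endgame imdb"); // 1
keywordMatch("avengers endgame", "endgame trailer");       // 0.5
```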

According to a published study, a conservative estimate of the number of documents indexed by Google is around 60 Billion.

1. The infrastructure costs involved in serving a massive, constantly updating inverted index at scale.

Considering just the text content of these documents, this represents at least a petabyte of data. A linear scan through these documents is technically not feasible, so a well understood solution to this problem is to build an inverted index. The big cloud providers like Amazon, Google or Microsoft are able to provide us with the infrastructure needed to serve this system, but it is still going to cost millions of euros each year to operate. And remember, this is just to get started.
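For intuition, an inverted index maps each term to the documents that contain it, so a query only touches the posting lists of its own words instead of scanning every document. A minimal, illustrative sketch:

```javascript
// Minimal inverted index: term -> set of document ids.
function buildIndex(docs) {
  const index = new Map();
  docs.forEach((text, id) => {
    for (const word of text.toLowerCase().split(/\s+/)) {
      if (!index.has(word)) index.set(word, new Set());
      index.get(word).add(id);
    }
  });
  return index;
}

// Documents containing ALL query words = intersection of their posting lists.
function lookup(index, query) {
  const lists = query.toLowerCase().split(/\s+/).map(w => index.get(w) || new Set());
  return [...lists.reduce((a, b) => new Set([...a].filter(id => b.has(id))))];
}

const index = buildIndex(["avengers endgame imdb", "endgame trailer"]);
lookup(index, "avengers endgame"); // [0]
```

The hard part at web scale is not this data structure but sharding, updating and serving it for billions of documents, which is exactly the cost described above.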

2. The engineering costs involved in crawling and sanitizing the web at scale.

The crawler[3] infrastructure needed to keep this data up to date while detecting newer documents on the internet is another massive hurdle. The crawler needs to be polite (some form of domain level rate-limiting), be geographically distributed, handle multilingual data and aggressively avoid link farms and spider traps[4]. A huge portion of the crawlable[5] web is spam and duplicate content; sanitizing this data is another big engineering effort.

Also, a significant portion of the web is cut off from you if your crawler is not famous. Google has a huge competitive advantage in this regard, a lot of site owners allow just GoogleBot (and maybe BingBot), making it an extremely tedious process for an unknown crawler to get whitelisted on these sites. We would have to handle these on a case-by-case basis knowing that getting the attention of the sites is not guaranteed.

3. You need users to get more users (Catch-22)

Even assuming we manage to index and serve these pages, measuring and improving the search relevance is a challenge. Manual evaluation of search results can help get us started, but we would need real users to measure changes in search relevance in order to be competitive.

4. Finding the relevant pages amongst all the noise on the web.

The biggest challenge in search, though, is the removal of noise. The Web is so vast that any query can be answered. The ability to discard the noise in the process makes all the difference between a useless search engine and a great one. We discussed this topic with some rigor in our previous post, providing a rationale for why using query logs is a smarter way to cut through the noise on the web. We also wrote in depth about how to collect these logs in a responsible manner using Human Web. Feel free to check these posts out for further information.

Query/URL pairs, typically referred to as query logs, are often used by search engines to optimize their ranking and for SEO to optimize incoming traffic. Here is a sample from the AOL query logs dataset[6].

Query Clicked Url
college savings plan
pennsylvania college savings plan
pennsylvania college savings plan

We can use these query logs to build a model of the page outside of its content, which we refer to as page models. The example below comes from a truncated version of the page model that we have at the moment for one particular CNN article on Tesla’s Cybertruck launch. The scores associated with the query are computed as a function of its frequency (i.e. the number of times the query/URL pair was seen in our logs) and its recency (i.e. recently generated query logs are a better predictor for relevance).

  "queries": [
      "tesla cybertruck",
      "tesla truck",
      "new tesla car",
      "pick up tesla",
      "new tesla truck",
      "cyber truck tesla price",
      "how much is the cybertruck",
      "cybertruck unveiling",
      "new tesla cybertruck",
      "cyber truck unveiling",
      ...
  ]

We have hundreds of queries on the page, but even this small sample should provide you with an intuition on how a page model helps us summarize and understand the contents of the page. Even without the actual page text, the page model suggests that the article is about a new Tesla car called the Cybertruck; it details an unveiling event and it contains potential pricing information.
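The frequency-plus-recency scoring mentioned above can be sketched as follows; note that the exponential decay and the 30-day half-life are illustrative assumptions, not the actual Cliqz formula:

```javascript
// Hypothetical query/URL pair score: frequency damped by recency.
// The 30-day half-life is an invented parameter for illustration.
function queryScore(frequency, ageInDays, halfLifeDays = 30) {
  return frequency * Math.pow(0.5, ageInDays / halfLifeDays);
}

queryScore(10, 0);  // 10 - a pair seen 10 times today
queryScore(10, 30); // 5  - same frequency, a month old, counts half
```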

The more unique queries we can gather for a page, the better our model for the page will be. Our use of Human Web also enables us to collect anonymous statistics on the page, a part of which is shown below. This structure shows the popularity of the page in different countries at this moment in time, which is used as a popularity signal. We can see that it is very popular in the UK, less so in Australia, etc.

"counters": {
  "ar": 0.003380009657170449,
  "at": 0.016900048285852245,
  "au": 0.11492032834379527,
  "be": 0.02704007725736359,
  "br": 0.012071463061323033,
  "cn": 0.0014485755673587638,
  "cz": 0.008691453404152583,
  "de": 0.06422018348623854,
  "dk": 0.028971511347175277,
  "fr": 0.025108643167551906,
  "gb": 0.3355866731047803,
  "it": 0.00772573635924674,
  "jp": 0.005311443746982134,
  "ru": 0.0159343312409464,
  "se": 0.0294543698696282,
  "ua": 0.012071463061323033
}

Now that we understand how page models are generated, we can start stepping through the search process. We can break down this process into different stages, as described below.

The TL;DR Version

This is a high level overview if you want to know how Cliqz search is different.

  1. Our model of a web page is based on queries only. These queries could either be observed in the query logs or could be synthetic, i.e. we generate them. In other words, during the recall phase, we do not try to match query words directly with the content of the page. This is a crucial differentiating factor: it is the reason we are able to build a search engine with dramatically fewer resources than our competitors.
  2. Given a query, we first look for similar queries using a multitude of keyword and word vector based matching techniques.
  3. We pick the most similar queries and fetch the pages associated with them.
  4. At this point, we start considering the content of the page. We utilize it for feature extraction during ranking, filtering and dynamic snippet generation.

1. The Query Correction Stage

When the user enters a query into our search box, we first perform some potential corrections on it. This not only involves some normalization, but also expansions and spell corrections, if necessary. This is handled by a service which is called suggest – we will have a post detailing its inner workings soon. For now, we can assume that the service provides us with a list of possible alternatives to the user query.

2. The Recall and Precision Stages

We can now start building the core of the search engine. Our index contains billions of pages; the job of the search is to find the N, usually around 50, most relevant pages for a given query. We can broadly split this problem into 2 parts, the recall and the precision stages.

The recall stage involves narrowing down the billions of pages to a much smaller set of, say, five hundred candidate pages, while trying to get as many relevant pages as possible into this set. The precision stage performs more intensive checks on these candidate pages to filter the top N pages and decide on the final ordering.

2.1 Recall Stage

A common technique used to perform efficient retrieval in search is to build an inverted index. Rather than building one with the words on the page as keys, we use ngrams of the queries in the page model as keys. This lets us build a much smaller and less noisy index.
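To make “ngrams of the queries as keys” concrete, here is an illustrative sketch of the ngram extraction (the maximum length of 3 is an arbitrary choice):

```javascript
// All word ngrams of a query, up to length maxN.
function ngrams(query, maxN = 3) {
  const words = query.toLowerCase().split(/\s+/);
  const result = [];
  for (let n = 1; n <= maxN; n++) {
    for (let i = 0; i + n <= words.length; i++) {
      result.push(words.slice(i, i + n).join(" "));
    }
  }
  return result;
}

ngrams("soldier of fortune");
// ["soldier", "of", "fortune", "soldier of", "of fortune", "soldier of fortune"]
```

Each of these ngrams would become a key in the index, pointing at the pages whose query models contain it.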

We can perform various types of matching on this index:

  • Word based exact match: We look for the exact user query in our index and retrieve the linked pages.
  • Word based partial match: We split the user query into ngrams and retrieve the linked pages for each of these.
  • Synonym and stemming based partial match: We stem the words in the user query and retrieve the linked pages for each of its ngrams. We could also replace words in the query with their synonyms and repeat the operation. It should be clear that this approach, if not used with caution, could quickly result in an explosion of candidate pages.

This approach works great in a lot of cases, e.g. when we want to match model numbers, names, codes or rare words: cases where a query token is a must-match, which is not possible to know beforehand. But it can also introduce a lot of noise, as shown below, because we lack a semantic understanding of the query.

query soldier of fortune game
document 1 ps2 game soldier fortune
document 2 soldier of fortune games
document 3 (Noise) fortune games
document 4 (Noise) soldier games

An alternative approach to recall is to map these queries to a vector space and match in this higher dimensional space. Each query gets represented by a K dimensional vector, 200 in our case, and we find the nearest neighbors to the user query in this vector space.

This approach has the advantage that it can match semantically. But it can also introduce noise through overly aggressive semantic matching, as illustrated below. The technique can also be unreliable when the query contains rare words, like model numbers or names, since their vector space neighbors can be essentially random.

query soldier of fortune game
document 1 soldier of fortune android
document 2 sof play
document 3 soldier of fortune playstation
document 4 (Noise) defence of wealth game
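Both the good and the noisy matches above come down to distance in the vector space, typically measured with cosine similarity. A self-contained sketch (real systems use an approximate nearest-neighbor index such as Granne rather than a linear scan):

```javascript
// Cosine similarity: how closely two query vectors point in the same direction.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / Math.sqrt(na * nb);
}

cosine([1, 0, 1], [1, 0, 1]); // 1 - identical direction
cosine([1, 0, 0], [0, 1, 0]); // 0 - unrelated
```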

We train these query vectors based on some custom word embeddings learnt from our data. These are trained on billions of <good query, bad query> pairs collected from our data and use the Byte-Pair-Encoding implementation of SentencePiece[7] to address the issue with missing words in our vocabulary.

We then build an index with billions of these query vectors using Granne[8], our in-house solution for memory efficient vector matching. We will have a post on the internals of Granne soon on this blog, you should definitely look out for it if the topic is of interest.

We also generate synthetic queries out of titles, descriptions, words in the URL and the actual content of the page. These are, by definition, noisier than the query logs captured by Human Web. But they are needed; otherwise, newer pages with fresh content would not be retrievable.

Which model do we prefer? The need for semantics is highly dependent on the context of the query which is unfortunately difficult to know a priori. Consider this example:

Scenario 1: "how tall are people in stockholm?" matched to "how tall are people in sweden?"
Scenario 2: "restaurants in stockholm" matched to "restaurants in sweden"

As you can see, the same semantic matching could result in good or bad matches based on the query. The semantic matching in Scenario 1 is useful for us to retrieve good results, but the matching in Scenario 2 could provide irrelevant results. Both keyword and vector based models have their strength and weaknesses. Combining them in an ensemble, together with multiple variations of those models, gives us much better results than any particular model in isolation. As one would expect, there is no silver bullet model or algorithm that does the trick.

The recall stage combines the results from both the keyword and vector-based indices. It then scores them with some basic heuristics to narrow down our set of candidate pages. Given the strict latency requirements, the recall stage is designed to be as quick as possible while providing acceptable recall numbers.
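One minimal way to picture that combination step (the additive scoring heuristic here is an assumption for illustration, not Cliqz's actual scorer):

```javascript
// Combine candidates from the keyword and vector indices.
// Each hit is a [pageId, score] pair from its own model.
function combineRecall(keywordHits, vectorHits, k) {
  const scores = new Map();
  // Simple heuristic: sum the per-model scores, so pages found
  // by both models naturally rank higher than single-model hits.
  for (const [page, s] of keywordHits) scores.set(page, (scores.get(page) ?? 0) + s);
  for (const [page, s] of vectorHits)  scores.set(page, (scores.get(page) ?? 0) + s);
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).slice(0, k);
}
```

For example, a page surfaced by both indices with scores 0.9 and 0.8 would outrank a page surfaced by only one index with score 0.7.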

2.2 Precision Stage

The top pages from the recall stage enter the precision stage. Now that we are dealing with a smaller set of pages, we can subject them to additional scrutiny. Though the earlier versions of Cliqz search used a heuristic driven approach for this, we now use gradient boosted decision trees[9] trained on hundreds of heuristic and machine learned features. These are extracted from the query itself, the content of the page and the features provided by Human Web. The trees are trained on manually rated search results from our search quality team.
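The features fed to those trees might look something like the sketch below; the feature names and extraction logic are illustrative assumptions, not Cliqz's actual feature set:

```javascript
// Extract a (tiny) feature vector for one query/page pair.
// In the real system, hundreds of such features would be scored
// by the gradient boosted decision trees.
function extractFeatures(query, page) {
  const qTokens = new Set(query.toLowerCase().split(/\s+/));
  const tTokens = page.title.toLowerCase().split(/\s+/);
  // Fraction of title tokens that also appear in the query.
  const overlap = tTokens.filter((t) => qTokens.has(t)).length;
  return {
    titleOverlap: overlap / tTokens.length,
    popularity: page.humanWebVisits, // a Human Web derived signal
    urlLength: page.url.length,      // short URLs often rank better
  };
}
```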

3. Filter Stage

The pages that survive the precision stage are now subject to more checks, which we call filters. Some of these are:

  • Deduplication Filter: This filter improves diversity of results by removing pages that have duplicated or similar content.
  • Language Filter: This filter removes pages which do not match the user’s language or our acceptable languages list.
  • Adult Filter: This filter is used to control the visibility of pages with adult content.
  • Platform Filter: This filter replaces links with platform appropriate ones e.g. mobile users would see the mobile version of the web page, if available.
  • Status Code Filter: This filter removes obsolete pages, i.e. links we cannot resolve anymore.
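The filter stage can be pictured as a simple pipeline, where each filter takes the surviving pages and returns a smaller (or rewritten) list. The page fields and filter logic below are illustrative assumptions:

```javascript
// Each filter: pages in, surviving pages out, applied in order.
const filters = [
  (pages) => pages.filter((p) => !p.duplicateOf),   // deduplication
  (pages) => pages.filter((p) => p.lang === "en"),  // language match
  (pages) => pages.filter((p) => p.status === 200), // drop obsolete links
  // Platform filter: rewrite to the mobile URL when one exists.
  (pages) => pages.map((p) => ({ ...p, url: p.mobileUrl ?? p.url })),
];

const applyFilters = (pages) =>
  filters.reduce((survivors, filter) => filter(survivors), pages);
```

A page in the wrong language or with a dead link never reaches the snippet stage, while a page with a mobile variant is rewritten in place.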

4. Snippet Generation Stage

Once the result set is finalized, we enhance the links to provide well formatted and query-sensitive snippets. We also extract any relevant structured data the page may contain to enrich its snippet.

The final result set is returned back to the user through our API gateway to the various frontend clients.

Maintaining Multiple Mutable Indices

The section on recall presented an extremely basic version of the index for the sake of clarity. But in reality, we have multiple versions of the index running in various configurations using different technologies. We use Keyvi[10], Granne, RocksDB and Cassandra in production to store different parts of our index based on their mutability, latency and compression constraints.

The total size of our index currently is around 50 TB. If you could find a server with the required disk space along with sufficient RAM, it is possible to run our search on localhost, completely disconnected from the internet. It doesn’t get more independent than that.

Search Quality

Search quality measurement plays an integral part in how we test and build our search. Apart from automated sanity checking of results against our competitors, we also have a dedicated team working on manually rating our results. Over the years, we have collected the ratings for millions of results. These ratings are used not only to test the system but to help train the ranking algorithms we mentioned before.


The query logs we collect from Human Web are unfortunately insufficient to build a good quality search. The actual content of the page is not only necessary for better ranking, it is also required to provide a richer user experience. Enriching the result with titles, snippets, geolocation and images helps the user make an informed decision about visiting a page.

It may seem like Common Crawl would suffice for this purpose, but it has poor coverage outside of the US and its update frequency is not realistic for use in a search engine.

While we do not crawl the web in the traditional sense, we maintain a distributed fetching infrastructure spread across multiple countries. Apart from fetching the pages in our index at periodic intervals, it is designed to respect politeness constraints and robots.txt while also dealing with blocklists and crawler unfriendly websites.

We still have issues getting our fetcher cliqzbot whitelisted on some very popular domains, like Facebook, Instagram, LinkedIn, GitHub and Bloomberg. If you can help in any way, please do reach out at beta[at]cliqz[dot]com. You’d be improving our search a lot!

Tech Stack

  • We maintain a hybrid deployment of services implemented primarily in Python, Rust, C and Golang.
  • Keyvi is our main index store. We built it to be space efficient and fast, while also providing various approximate matching capabilities with its FST data structure.
  • The mutable parts of our indices are maintained on Cassandra and RocksDB.
  • We built Granne to handle our vector based matching needs. It is a lot more memory efficient than other solutions we could find – you can read more about it in tomorrow’s blog post.
  • We use qpick[11] for our keyword matching requirements. It is built to be both memory and space efficient, while being able to scale to billions of queries.

Granne and qpick have been open-sourced under MIT and GPL-2.0 licenses respectively, do check them out!

It is hard to summarize the infrastructural challenges of running a search engine in one small section of this post. We will soon have dedicated blog posts detailing the Kubernetes infrastructure that enables all of the above, so do look out for them!

While some of our early decisions allowed us to drastically reduce the effort required in setting up a web scale search engine, it should be clear by now that there are still a lot of moving parts which must work in unison to enable the final experience. The devil is in the details for each of these steps – topics like language detection, adult content filtering, handling redirects or multilingual word embeddings are challenging at web scale. We will have posts on more of the internals soon, but we would love to hear your thoughts on our approach. Feel free to share and discuss this post, we will try our best to answer your questions!


  1. Alphabet Q3 2019 earnings release – report ↩︎

  2. Statista search engine market share 2019 – link ↩︎

  3. Web Crawler – Wikipedia – link ↩︎

  4. Spamdexing – Wikipedia – link ↩︎

  5. Deep Web – Wikipedia – link ↩︎

  6. Web Search Query Log Downloads – link ↩︎

  7. SentencePiece – GitHub – link ↩︎

  8. Granne – GitHub – link ↩︎

  9. Gradient Boosting – Wikipedia – link ↩︎

  10. Keyvi – GitHub – link ↩︎

  11. qpick – GitHub – link ↩︎


Drag and drop, in the context of a web app, gives people a visual way to pick up and move elements just like we would in the real world. This bit of skeuomorphism makes UIs with drag and drop interactions intuitive to use.

Skeuomorphism is where an object in software mimics its real world counterpart. – IDF

Whilst there may not be an obvious real-world counterpart to dragging digital cards around an interactive Kanban list, such as that of Trello, the action itself is familiar to humans, so it’s easy to learn.

Trello Drag and Drop

Modern skeuomorphism is the bridge at the intersection of digital and industrial design. It is about facilitating non-traditional device interaction without sacrificing usability. It is about enriching and enlivening real world objects in the context of our human physiology.

Justin Baker

Despite these interactions becoming a common feature in a wide range of tools on the web – from Kanban boards like Trello, to actual email inboxes like Gmail, they’re pretty hard to actually build. I found this out when making my own, even with the use of open source libraries. Here’s what it looks like at the moment:

Drag and Drop in Letter

In this article, I’ll cover:

  • Design considerations: Accessibility and more
  • React libraries: A couple options for building a drag and drop UI
  • Making it responsive: Drag and drop on mobile

Before looking into different libraries, and the technical side of implementing a drag and drop, I’d recommend starting with the design considerations. What makes a good drag and drop? Here’s a few things:


Accessibility can be challenging for a drag and drop, with traditional drag and drop libraries skipping past it:

Traditionally drag and drop interactions have been exclusively a mouse or touch interaction.

Alex Reardon

If a user can’t physically drag and drop using their method of interaction, how can you make it easier? Keyboard interactions are a good option, e.g.:

  • Press spacebar to pick up
  • Use arrow keys to move the selected element
  • Use space again to drop
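The keyboard pattern above could be sketched roughly like this in plain JavaScript (the function and state names are hypothetical, not from any particular library):

```javascript
// Spacebar toggles the "lifted" state; while lifted, arrow keys
// move the item up or down within its list.
function createKeyboardDnD(items) {
  let selected = 0;   // index the keyboard focus is on
  let lifted = false; // whether the focused item is picked up

  function onKeyDown(key) {
    if (key === " ") {
      lifted = !lifted; // pick up, or drop
    } else if (key === "ArrowDown" && selected < items.length - 1) {
      if (lifted) [items[selected], items[selected + 1]] = [items[selected + 1], items[selected]];
      selected += 1;
    } else if (key === "ArrowUp" && selected > 0) {
      if (lifted) [items[selected], items[selected - 1]] = [items[selected - 1], items[selected]];
      selected -= 1;
    }
    // A real implementation would announce each change through an
    // ARIA live region, as discussed below.
  }
  return { onKeyDown, items: () => items, isLifted: () => lifted };
}
```

Pressing space on the first item, then ArrowDown, then space again would drop it into second place.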

Without prior knowledge in this area, this can sound like something daunting to implement. A good place to start is by looking into ARIA Live Regions, which help you communicate operation, identity, and state during a drag and drop interaction. For more depth, read Jesse Hausler’s (Principal Accessibility Specialist at Salesforce) article, ‘4 Major Patterns for Accessible Drag and Drop’.

Drag Handle

The drag handle is the area of the draggable element that you click or touch to pick up and move an item. It also can be called the draggable area:

Small drag handle in Letter

The example above shows a small drag handle, which can work for somebody on a desktop device with a mouse cursor, but may be tricky for a chubby finger on a small touch screen. In that case, you may want to make some changes.

You can also question using a drag handle at all. Whilst helpful in some contexts, in others, it might not actually be necessary:

For components that don’t typically involve drag and drop, a drag handle helps users discover drag and drop as an available action. But in cases where drag and drop is always an expected behavior, it isn’t necessary.

Grace Noh, on Marvel blog

Notion Example

This is evident in Notion. For their regular list views, a really small drag handle is present, but not for the Kanban view. In that situation, the entire card becomes the draggable surface:

Drag handle (left), no drag handle in kanban (right)

Drag and Drop States

How does your user know when a drag has begun? Use of states can be a good indicator. Some states to consider are:

  • On Hover
  • On Press (or click)
  • On Dragging
  • On Dropping
On Press (left), On Drag (right)

These may sound obvious, but it also depends on the input device. For example, there’s not really a hover state on a touch device.

Getting these right for the purpose of your app is important, and Grace Noh goes into much more depth on this in her article, Drag and Drop for Design Systems. In it, she covers system cursors, state styles, affordances, and more:

Luckily, people like Alex Reardon have also taken a lot of the above into consideration when building open source drag and drop libraries that we can all make use of. Check out his post, Rethinking Drag and Drop, for more. In the next section I’ll show you how I found working with Alex’s React Beautiful DnD, and why I didn’t use it in the end:

With ready-made drag and drop libraries available, creating your own from scratch may be something you want to avoid. However, it’s important to note that open source libraries have limitations that can stop them working for your design. In that case, it could actually be faster to start from scratch than to build workarounds.

For example, the design tool Marvel had a unique use case, whereby components are dragged vertically between ‘Sections‘, and also horizontally:

Vertical Dragging between Sections
Horizontal dragging

The specific needs in a UI like this could make it a better option to rethink drag and drop yourself. I found that out for myself as you’ll see below.

Now you’re aware there are limitations, I’ll go through the libraries I tried out. These are only React libraries, as Letter is made with React:

React Beautiful DnD

This is one of the most widely used libraries, check out those GitHub stats:


The benefits of this library are huge – it has you covered for a wide range of the design scenarios outlined in the previous section. Its accessibility support in particular sets it apart from the rest, in my opinion:

It’s also pretty limitless when it comes to input device support. Look, a thought sensor:

thought sensor
Drag and drop with the brain, by Charlie Gerard

…or what about a webcam sensor:

webcam sensor
Webcam sensor drag and drop, by Kang-Wei Chan

See these examples, and more here.


I thought it had everything, but there were a couple of limitations that meant I couldn’t use this library:

  • You can’t have nested containers, meaning 2 drag and drop columns cannot overlap. This made my popout menu impossible with the library.
  • There are no placeholders during a drag (see this in the next section)

If those drawbacks don’t affect you, try it out here. And there’s an entire video guide that will take you through implementing it step by step here:

I hope to switch back to React-Beautiful when the nested scroll container issue is fixed – they’re currently working on it.

React Smooth DnD

Not as cool as React-Beautiful, but this is the one I have gone with for now. This one is by Kutlu Sahin, and has great advantages of its own:


  • Customisable drop placeholder – see the blue dotted line above. This has been a great improvement for the overall experience, as there’s no doubt on where the component will land.
  • Slower dragging speed – React-beautiful-dnd was almost too sensitive, making it tricky to drop components in the correct place in my implementation. This library feels more controlled and slower, which works better for large components.
  • No limitations on overlapping containers – this solves the issue I had with React-beautiful-dnd


There’s no mention of keyboard support or accessibility considerations with React-Smooth-DnD. It’s common knowledge now that we should be building accessibility-first, and I outlined it in the earlier section. Whilst in theory it’s easy and obvious, the reality of working with very limited time and resources means that making a drag and drop feature accessible is difficult to handle alone. I have to rely on open-source libraries for this.

One more I tried out was React-DnD. This was the first one I ever used, when Letter app was called Email Otter. I found it quite hard to use when adding animations, so switched to React-beautiful-dnd:

Libraries can be interchangeable

Overall, it’s probably fair to say that I’ve gone through drag and drop hell, and back. I also typed ‘drag and drop’ 30 times so far in this article.

One thing that was a saviour is that all these drag and drop (31) libraries work relatively similarly, so it’s not impossible to interchange between them. Both React-beautiful-dnd and React-smooth-dnd are based on similar concepts – droppable containers and draggable items – making the React components similar to implement at a high level.

In hindsight, before baking these into your app, it’s definitely worth playing with them in CodeSandbox. A quick search can give you editable demos of React-beautiful-dnd, React-smooth-dnd, and anything else. Investigate the data structure that’s required by whichever library you use, and make sure your app structure is flexible to switch.

You never know what size screen or device type someone will use when trying your app. If they use a mobile device, there’s a few more things to consider:

  • Drag Handle Size: On touch devices, should there be drag handles? Is the drag handle usable?
  • Touch Input: Touch is different from a mouse click. There are different events and states, e.g. touch vs hover.
  • Drag Surface: Maybe the entire component should be a draggable surface. If so, should there be a button to activate drag mode? When the component is touched to drag, we need to make sure text isn’t highlighted by mistake.

Of the 3 above, I’ve found that touch input was one of the more challenging aspects. When a user touches the draggable component, and then swipes to move it, the entire page scrolls with it. Therefore, the native touch scroll has to be disabled during the dragging of the component. To do this, React has a list of synthetic events for us to tap into (pun may not be intended):

By using onTouchStart to detect when a user presses a draggable element, scroll can be disabled at the right time.
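One hedged way to structure this (the helper name is mine, not from React or Letter): keep the scroll-lock decision in a pure function, then wire it to React's synthetic touch events.

```javascript
// Given a touch event name, return the body overflow value that
// should be in effect: lock scrolling while a drag-touch is down.
function overflowFor(eventName) {
  return eventName === "touchstart" ? "hidden" : "";
}

// In a React component (illustrative JSX, commented out so this
// sketch stays plain JavaScript):
//
// <div
//   onTouchStart={() => { document.body.style.overflow = overflowFor("touchstart"); }}
//   onTouchEnd={() => { document.body.style.overflow = overflowFor("touchend"); }}
// >
//   {draggableContent}
// </div>
```

Keeping the decision in a small pure function makes the scroll-lock behaviour trivial to unit test without a DOM.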

Touch Duration

But that’s not all! How do you know if a user wants to scroll, or to pick up a component? For that, the touch duration can be used:

  • Activate on hold: If the user touches and holds the same place on a draggable component, activate the drag.
  • Ignore on swipe: If the touch duration is low, such as that of a swipe, we can guess that the user is scrolling.
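A minimal sketch of that heuristic, classifying a completed touch by its duration and movement; the threshold values are illustrative assumptions, not figures from the app:

```javascript
const HOLD_MS = 150; // shorter than this reads as a swipe/scroll
const SLOP_PX = 10;  // more finger travel than this reads as a scroll

// Decide whether a finished touch was a hold (start a drag)
// or a swipe (let the page scroll).
function classifyTouch(durationMs, movedPx) {
  return durationMs >= HOLD_MS && movedPx < SLOP_PX ? "drag" : "scroll";
}
```

A long press that barely moves activates the drag, while a quick flick of any length is treated as scrolling.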

My next steps

Now I’ve got my drag and drop working on desktop (inside a popout menu bar – which was a whole other issue), I’m improving the experience of my editor on smaller screens.

One idea I’m exploring is making the drag and drop menu hide-able, so the entire space can be used for editing. Here’s a Marvel Prototype of that:

Thanks for reading, if you have any questions or want me to go into more detail, drop a comment. Plus, find out more about the Email design tool I’m building here.

Build an entire newsletter with your thumb

Just paste a link to import entire articles – key for mobile

⌨️ It’s a WYSIWYG editor, so natural to type edits

✨ Export to @CodePen ..send email is coming up next

Making a Spotify letter on my iPhone 5 with @Letter_HQ

— Graeme Fulton (@graeme_fulton) November 11, 2019

A big thank you to Alex Reardon, Kutlu Sahin, and all the open sorcerers who made such great libraries.

Being a designer is a constant balancing act. Whether it’s finding a balance between users’ needs and an organisation’s needs, or forgoing a better design to deliver something quickly, we’re always making trade offs.

One of the trade offs I’ve been thinking about lately has been the one between building trust with your users and fostering positive relationships with your teammates.

In an ideal world, we’d all be fighting the good fight and prioritising user needs at all costs, regardless of objections from others, or logistical constraints like time and money.

But in reality, it’s very hard to advocate for users if you’ve lost the trust of your team.

To do our work effectively, we need to build trust in both directions.

Being a developer working with designers

I began my career as a frontend developer. Designers would hand me mock-ups and I’d code them up. The designers I worked with were passionate about getting things to look right.

I took pride in making interfaces cross-browser, accessible and ‘pixel perfect’.

But I also learned that websites don’t need to look the same in every browser and that users don’t really care about pixels—not beyond being able to use and trust your product.

I tried to influence my team to consider user needs. But the designers I used to work with wanted absolute adherence to their designs.

They also valued the ability to code past constraints—embracing constraints and pushing back didn’t elicit the best response—and the conversation would quickly break down.

Putting users first

I was left frustrated, and hoped to find work that would let me prioritise user experience and simplicity.

And thankfully I managed to do that when I joined Just Eat. I worked closely with a designer called Mark, who shared my ethos, advocating for simplicity over pixel-perfect design.

It was possible to put users first and I was happy doing that.

Facing constraints

I’ve always faced constraints in some form, both as a designer and a developer.

One story I have involved cutting the scope of our MVP by 50% to launch Kidly, an online store for baby products.

But more recently, working in government, I’ve had to deal with different kinds of constraints that I hadn’t encountered in the private sector.

Budget restrictions are a different ball game when you’re talking about taxpayers’ money, and policy and legislative requirements mean you can’t always take the most user-centred approach.

I met some amazing designers, doing the best work they could within the parameters they faced, and I began to empathise with the need to accept constraints and respect the limitations of those around me.

Designing the impossible

Some years later, I came across Craig Abbott’s article, ‘Designing the Impossible’.

Craig says it’s our job as designers to push for better, a lot better. Not to bow down to pressure when developers push back on our designs because of [insert technical constraint here].

And it’s true. If you embrace every constraint that comes your way, you’ll only ever design a subpar experience.

But, pushing for the impossible isn’t always conducive to building trust with your colleagues. And it got me thinking some more.

Finding a balance

We gain trust with teammates by being valuable and practical. By being a team player. It’s a difficult balancing act when you want to help your colleagues do the basic thing but also push to make things better.

I’ve been to many backlog refinement sessions, where my work has been scaled back to deliver faster.

And I’ve been okay with that. Not just because I want to deliver faster, but I want to build trust by taking my ego out of the equation — something that I learnt from Mark.

But I sometimes wonder if I should have fought harder to make sure our users get a better experience.

Is it my job to be realistic and empathetic to constraints, or to be the persistent voice of the user who makes stuff better at the cost of momentum and team morale?

As with most things, it depends.

The length of time spent on a team, your team’s size and capacity, and the deadlines you’re facing, all factor into the equation.

I’ve found both approaches are valid, and which I choose depends on the situation at hand.

Building trust

Being a designer is full of challenges and tradeoffs. But that’s why it’s a job. That’s why we call it work.

We have to learn to push for the impossible while navigating and respecting the constraints of the people and organisations we work with.

Working out when to push our products to work harder for our users, or let go and accept a bit less is a skill. But it’s a skill worth honing, and one I’m still continuing to learn.

Push too hard and things fall apart. But avoid conflict and we may as well not be there.

Thankfully trust can be built up over time. And it’s reciprocal. So when you give it, you tend to get some back.

By working to give trust and to earn it back, with both our users and our teammates, we create the space we need to do our best work.

Thanks to Amy Hupe for turning my messy thoughts into something coherent.

I write articles like this and share them with my private mailing list. No spam and definitely no popups. Just one article a month, straight to your inbox. Sign up below:


Senior Director of User Experience Karla Ortloff heads up global UX for legal professionals at Thomson Reuters. In July 2019, 30 members of her Minneapolis-based team participated in an in-house Facilitating Design Thinking course led by Cooper Professional Education. For our Client Spotlight series, the director shares strategies that have proven effective in fostering collaboration, incentivizing growth, and driving the design culture within a 90-person team at Thomson Reuters.

(Editor’s note: this interview has been edited and condensed for clarity.)

What brought you to Thomson Reuters?

I’ve been in the industry for over 24 years and have worked at many different companies as a consultant and employee. Before I came to Thomson Reuters 11 years ago, I spent a significant portion of my career at Best Buy, so I had online retail in my blood. However, this was a great opportunity to be a part of a global organization that hadn’t yet built up a robust UX function.

It was quite a change going from building retail sites to building web-based applications for legal professionals. I’ve loved every minute of it, mostly because I just love solving problems. It’s a complex market with unique problems, so we’re always pushing our UX skills. It’s super challenging.

Can you describe the makeup of your team?

My team is responsible for designing cloud-based applications for the global legal professional market. When I took over, we were fewer than a dozen people. We’ve had a lot of highs, lows, and growing pains along the way, but now we’re 90-plus people and I’m projecting more growth in the future. Our multidisciplinary team includes user experience designers, user interface designers, accessibility specialists, front-end developers, user researchers, and project managers. The majority of our team is based out of Minneapolis, but we also have remote workers across the U.S., UK, Brazil, and Belarus.

Could you share some of your strategies for incentivizing growth and encouraging collaboration?
  • In order for us to improve our team’s perceived value and influence and improve collaboration with our stakeholders, we turned our research skills inward to find out what our internal stakeholders think about working with us. We interviewed our product managers and asked them questions like “What do you like about working with us? Where have we been beneficial? What is your impression of us? Where can we improve?” One finding that surprised us is that we sometimes come across as being “above” them. I think in our eagerness to prove value and get a seat at the table we’re perceived as having an ego instead of having confidence. That was something we made a conscious effort to improve.
  • I can’t emphasize enough how important it is to celebrate every small or big UX win. We have something called #UXWins, where we share even little wins people have had.
  • Every week we have “thanks a latte,” where people can nominate each other for doing something effective or helpful, and they get a coffee card. A culture of affirmation and appreciation is really important.
  • A lot of the things on our “UX Bingo” card reflect the culture we have in our organization. It’s not only dealing with professional development, it’s creating a culture of appreciation, a collaborative culture. One of them was simply “have coffee with a coworker.”
  • For “UX Coffee,” each month team members are randomly set up with another team member for a coffee date, either remote or in person, just to get to know each other. You can connect with quite a few people each year on your team by doing that.
In what ways has your UX team matured over the years?

When we started out, we were a small team working on one product. That work contributed to the most successful product launch in our company’s history. I saw a huge opportunity to build on our win as a way to expand our learnings and best practices into other products across our organization.

In order for us to improve our team’s perceived value and influence and improve collaboration with our stakeholders, we turned our research skills inward.

By 2013 we were expanding our portfolio by taking on tactical and maintenance work, and each year we’ve been progressing our maturity. By 2016, team members were asking, “Why aren’t we doing strategy?” However, to be truly successful the team needed to be ready to do it, and the organization needed to trust us to do it. Finding that moment where the two intersect was critical and took extreme patience. Today, 30% of our project portfolio is strategic or discovery initiatives, and we’re struggling to keep up with the demand.

Our focus now is on maturing our UX operations. For years we were striving to be in that strategic area of maturity where the designer is an integral part of influencing and making decisions on the product, and we’re there (mostly). I’ve now added a fourth level to our maturity model called “Goodwill” — where we’re identifying, initiating, and owning projects that solve problems beyond our traditional project portfolio. Someone needs to do it, and we’ve got the skills. It’s a great place for a UX team to be.

Where does professional development fit into elevating your team in that maturity model?

Professional development is critical as an organization grows and matures. We’ve had to develop our own career pathing. The traditional models in place assumed people were going to be here for decades, and we needed to provide jumps every couple of years, rewarding people for sticking around, because it’s a competitive market.

Cooper Professional Education Facilitating Design Thinking training for Thomson Reuters

We do a lot of internal team skill-sharing sessions and recently created training “ambassadors.” We also attend conferences and bring in professional training like Cooper Professional Education.

Professional development is critical as an organization grows and matures.

Since we’re growing, we need to get team members onboarded faster. We have a robust online resource and an orientation training site to get people up to speed on the training, team culture, projects, product portfolios, and our customer segments and personas. Our orientation includes one-on-one meetings with leaders across the team to talk about their areas of responsibility, current projects, pain points, customer needs, and sharing stories of their personal and professional life. It’s important to set the tone from day one that we have to have empathy for our customers. We also practice having empathy for each other.

Where do you see your team making the biggest strides; what’s working well?

We’re looking for ways to solve customer problems faster. Recently, the company brought in the function of Agility Coaches to accelerate the development process. We’ve been doing a form of Agile for over a decade, but over time best practices fade away and pockets of teams start to form their own version of Agile. Now we’re going back to basics to focus on the best of Agile, Lean, Dev Ops, Google Sprints, etc. We’re getting everyone — Development, Product Management, and UX — back in a room for a fixed amount of time. The best idea wins no matter who’s there. This is where we impress and build trust in ways that quick standups and design reviews can’t necessarily accomplish. We have the tools, skills and experience to take the lead, facilitate design thinking conversations, and provide quick concepts and ideas that can gather customer feedback in a very short amount of time.

What are your goals moving forward in building a design culture?

When we’re building products, we say a product is never “done,” and the same holds true for building a design culture. I’ve been working on some of the external blockers in the company that have impeded our maturity. It doesn’t matter how great your team culture is if other parts of your organization aren’t as mature. My goal is to solve the problems that prevent us from solving problems.

Cooper Professional Education Facilitating Design Thinking training for Thomson Reuters UX Team for Legal Professionals

The first initiative was to introduce our Product Managers to new ways of thinking about their roles and responsibilities and find a consistent way of working across our global organization. This isn’t something a UX team would normally take on, but it was an obvious blocker preventing our team from having more ownership over our work and making it difficult to establish best practices. I proved Product Management teams had the same concerns and frustrations, which resulted in getting executive buy-in to fund training across the organization. So far, we’ve sent more than 150 Product Managers through the certification training, and their feedback has been overwhelmingly positive. They get career development training, and my team wins when we’re in meetings and Product Managers make comments like, “Oh normally I would worry about the design, but I’ve been told we’re not supposed to be focused on that.” Win-win!


Another initiative was to find a way to make our team more efficient so we’re spending less time documenting and debating with stakeholders and more time focusing on the bigger-picture customer problems. A design system is one of those things every company on the maturity journey needs to address, so we’ve invested a lot of time and energy into creating a robust, accessible design system.

Finally, we’ve been improving accessibility and changing people’s minds about what it means to build an “accessible” product. We needed to address the misconception that it’s expensive and an innovation killer and prove that it’s a product differentiator as well as the right thing to do. We’ve initiated and created corporate-wide training for groups including Product Managers, Developers, Content Creators, Marketing, and more. Within the UX team, Accessibility Ambassadors champion training and make sure we’re designing accessible solutions. They also track the quality of the work as it makes its way through the development cycle.

We’ve been partnering with groups across the company, including Diversity and Inclusion and Corporate Legal Counsel, to establish standards and policies. It’s been a grassroots effort that has seen huge benefits and adoption in a short amount of time. It’s all about looking for non-traditional UX partners that may have an interest in what you are doing and can help you drive change.

Interested in bringing Cooper Professional Education coaches in-house like Thomson Reuters did? Read about corporate training here, and reach out to learn more!



Adobe is developing live-streaming features that are built directly into its Creative Cloud apps, the company announced at its annual Adobe Max creativity conference. A beta version of the feature is currently available to a whitelisted group of users on Adobe Fresco. The feature gives users the option to go live and share a link for anyone online to watch and comment on their streams.

Chief product officer Scott Belsky compared the experience to Twitch but with an educational component that could filter videos for users who want to learn how to use specific tools.

“When you see a live stream of someone in our products, you want to know what tool they’re using — when they use the tool and when they stop using it — almost like a form of the waveform of video,” Belsky told The Verge. “But imagine a waveform related to what tools people are using, and imagine being able to source all live streams that have ever been done in a particular product, by a particular tool, to be able to learn how people are doing something.”

Adobe currently features artists on Adobe Live, a live stream that’s available on Behance and YouTube for viewers online to watch artists at work. Live streams can often run as long as three hours, and the company says the average watch time on any video on Adobe Live is over 66 minutes. Some streams also show a tool timeline, seen above, that tracks which tools were used throughout an artist’s workflow.

Adobe’s live-streaming feature aims to be more useful than just watching a video on YouTube. “Designers say they learned by sitting next to designers, not by going to design school as much. We just need to enable that on a massive scale,” Belsky says. “It also makes our products viral.”


Web design is loaded with existential questions. One of the biggest being: Can I build a website today that will still be relevant (in both style and function) tomorrow?

The answer probably depends on how many tomorrows into the future you’re referring to. But a good rule of thumb is that the more time passes, the less relevant a website’s design and functionality become. The future always brings change – often in ways we don’t anticipate.

This is probably a good thing, as it keeps us busy with redesign work. But if we’re refactoring an existing site, that can be a real challenge.

The key to taking on that challenge is in designing and building websites that keep an eye towards the future. Below are a few tips for doing just that.


Use Established Systems

Content management systems (CMS) have come to dominate the landscape. And while we all know the big players such as WordPress and Drupal, there are untold numbers of competitors. That doesn’t even take into account the plethora of DIY site-builder services out there.

While many of the up-and-coming systems sound compelling, there is a serious question regarding their potential for longevity. Simply put: They may or may not be around in a few years. This isn’t even a question of quality. The reality is that it’s an uphill battle and there are bound to be some casualties along the way.

For your smaller projects, this may not be a deal-breaker. But for larger websites, stability is key. Having to move to a new CMS because your current platform is languishing (or worse) is a major task.

That’s why, before you craft a design or write a single line of code, choosing a CMS is the single biggest decision you’ll make. Choose wisely.

And, once you have chosen the perfect CMS, you’ll want to think long and hard about any plugins you intend to use. This is especially important when those plugins will power core functionality, such as eCommerce, member management, etc. Again, the goal is to avoid the major disruption of having to switch later on.

Flexbox, for example, offers multicolumn layouts that can stretch to match the tallest column of the group. And CSS Grid can be tweaked into nearly endless complex layouts with just a bit of code.

Navigation is another area that seems to always overrun its initial intent. We can prepare for this by following the trends, such as placing at least some items behind the good old hamburger menu. This allows for growth and doesn’t necessarily require any radical design changes.

Most of all, look for solutions that are both creative and practical. This will help you avoid running into a self-made design wall.


Just as content needs change, so do functionality requirements. Therefore, it’s probably worth both anticipating and accepting that the code we write today is going to change at some point.

Depending on the language you’re using and your experience level, writing code that allows for future tweaks can be a real challenge. Sometimes, just getting it to work for the most immediate need takes all of our brainpower.

Plus, there are any number of ways to accomplish the same result. This, however, is a good thing. Once you have achieved your initial functionality goal, you have the opportunity to take a second look.

From there, think about ways to streamline what you’ve done and look at how easy it will be to extend later on. Ask yourself how you can make your code as efficient as possible. Taking those steps now could prevent a future mess.

Part of a designer’s job is asking clients the right questions. This can be very helpful when it comes to spotting areas of a project that could expand over time.

For instance, let’s say that a client tells you that they are looking for a simple eCommerce site (which doesn’t exist, by the way). This is an area primed for growth.

New products and features will most likely be added at some point. Understanding this, you can design and build in anticipation of the possibility. One example might be implementing a shopping cart that can be easily extended to do a multitude of things, rather than one with a narrow focus.
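One way to leave that door open is to make the cart’s pricing logic pluggable rather than hard-coded. The sketch below is purely illustrative (the `Cart` class and rule names are invented for this example, not taken from any real eCommerce library): new fees or discounts become registered rules instead of edits to the core code.

```javascript
// Minimal sketch of an extensible cart. All names are hypothetical.
class Cart {
  constructor() {
    this.items = [];
    this.pricingRules = []; // extension point: discounts, tax, shipping, etc.
  }

  addItem(name, unitPrice, quantity = 1) {
    this.items.push({ name, unitPrice, quantity });
  }

  // New behaviors are added by registering rules, not by rewriting total()
  use(rule) {
    this.pricingRules.push(rule);
    return this; // allow chaining
  }

  total() {
    const subtotal = this.items.reduce(
      (sum, item) => sum + item.unitPrice * item.quantity, 0);
    // Each rule receives the running total and returns an adjusted one
    return this.pricingRules.reduce((t, rule) => rule(t, this.items), subtotal);
  }
}

// Later features become small, isolated additions:
const bulkDiscount = (total, items) =>
  items.reduce((n, i) => n + i.quantity, 0) >= 10 ? total * 0.9 : total;
const flatShipping = (total) => total + 5;

const cart = new Cart().use(bulkDiscount).use(flatShipping);
cart.addItem("Mug", 8, 12); // 12 × $8 = $96, qualifies for the 10% discount
console.log(cart.total()); // 96 * 0.9 + 5 = 91.4
```

The point isn’t this particular design; it’s that tomorrow’s coupon codes or tiered shipping slot in as new rules without touching the code that already works.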



Housewares and brand response television have a long and robust shared history. Over 30 years ago, the Federal Communications Commission (FCC) deregulated television air time, allowing different formats of commercial air time to be purchased. This paved the way for the live shopping channels as we know them today, which showcase many housewares products. It also opened up the airwaves to longer formats like 30-minute infomercials. Braun and Black & Decker were among the first major housewares retail brands to embrace the format, an early example of direct response television (DRTV), in which consumers are encouraged to buy directly from advertisements.

Throughout the late 1980s and early 1990s, longer-format advertising represented about 75% of the DRTV landscape, and today the industry is worth over $200 billion. Popular housewares DRTV advertisers achieved an almost cult-like status using a variety of lengths, from 30-second spots to half-hour shows, along with live shopping, to promote products like the George Foreman Grill, OxiClean, ShamWow and, of course, the Snuggie. However, it is brands like Conair, Cuisinart, Dollar Shave, KitchenAid, Pfizer, Shark, Wahl Clipper Corporation and WORX that have paved the way for advertising that utilizes both branding and response mechanisms, or brand response advertising.

As television and media consumption habits have evolved, along with advertising and marketing technology, so has DRTV, paving the way for the next generation of brand response TV. The role of DRTV is expanding in the brand marketing world and the next generation of DRTV has opened up powerful opportunities for housewares brands seeking accountability and faster campaign ROI. As the role of DRTV expands in the brand marketing world, it is time for marketers to take advantage of that expansion to solve some of their biggest advertising pain points.

In today’s competitive media landscape, brand advertisers struggle more than ever before to earn market share using traditional approaches due to factors like cost and fragmentation. Targeting consumers and B2B customers is getting extremely difficult. Put simply, it’s hard to stand out in an environment where people are bombarded by brand messages all day, on all sides. Social media may be widely used, but it also gives brands a split second to make an impression. What marketers need is a canvas that tells a story in an attention-getting medium, which is why they are turning to brand response TV (BRTV) and connected TV, combined with a traditional bag of tricks like retargeting.

Brand response advertising increases brand awareness, improves brand perception and drives engagement. It is highly accountable, measurable, and delivers real ROI. It features customized, relevant content and works in conjunction with other types of media and channels. The brand response paradigm leverages a strong data component and can empower brands to make better decisions around media buys, messaging, and their overall campaigns. Marketers can analyze real-time performance statistics and test strategies in an ongoing way. BRTV is also more affordable and efficient than general advertising.

Despite these major benefits, many brand marketers are reluctant to invest in brand response TV due to concerns that people aren’t watching. Certainly video content is evolving, but that doesn’t mean TV is dead.

In an October 2018 study, the Consumer Technology Association surveyed 2,000 US adults about their content consumption habits. The survey yielded four main segments: Traditionalists, Value-Conscious Streamers, Device-Diverse Viewers and Experience Seekers. Traditionalists (29%) haven’t tried new technology and are less likely to stream or binge-watch; Value-Conscious Streamers (41%) are more likely to use streaming services than cable and prioritize saving money; Device-Diverse Viewers (13%) watch a lot of video content from many sources on many devices; Experience Seekers (17%) prioritize an optimal viewing experience, and are willing to spend on technology and content to get that experience.

Across all these personas, one thing is clear – TV remains the top device for viewing content and cable/satellite remains the top source for content. TV is highly relevant and a long way from becoming obsolete. Housewares brands that invest in brand response TV can get serious bang for their buck.

Housewares brands are particularly suited to brand response advertising for a number of reasons. One is the power of demonstration. Brands can show their products in action and demonstrate how they will improve people’s lives in a way that is impactful and enticing. Consider Dyson, Keurig, Leesa, Rust-Oleum and T-Mobile. Viewers can immediately see how these products address a clear pain point and offer better ease and convenience than whatever they’re currently doing.

Housewares tend to be fairly intimate products, since people use them in their homes, so creating an emotional connection with viewers is key. This is why so many of the stars of DRTV are “everyman” or “everywoman” types who viewers feel comfortable with, recognize, and trust. Housewares brands using brand response place a natural, high focus on their relationship with consumers. The shift from brick and mortar to e-commerce has been beneficial to brands as they engage directly with consumers via Amazon FBM (fulfillment by merchant), as well as through interactions with consumers who share their housewares experiences online.

Further, using brand response for housewares creates more room to deploy creative strategies, as suggested by Dash, Sobro and Wahl Clipper Corporation executives at a recent seminar on building housewares brands. For example, Catherine-Gail Reinhard, vice president of product strategy & marketing for Dash and Sobro, shared that she often creates recipes and/or develops cookbooks that complement a food-related houseware. At the same seminar, Steven Yde, vice president of marketing for Wahl’s NAC division, said that offering guides from in-house barbers helps build value and ensures greater product satisfaction.

Direct response advertising has come a long way from the days of Ron Popeil’s first TV commercials for Ronco’s housewares gadgets like the Ronco Spray Gun, the Chop-O-Matic and the Veg-O-Matic, but clearly many aspects have remained the same.  In 2019 and beyond, brand response TV is a highly effective and cost-efficient approach for any housewares brand that wants to strategically grow and have a meaningful impact. 

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land.



4 strategies to build products like a marathoner.

Prachi Nain

In the marathon, the first half is just a normal run. At 15 kilometers, 20 kilometers, everybody is still going to be there. Where the marathon starts is after 30 kilometers. That’s where you feel pain everywhere in your body. The muscles are really aching, and only the most prepared and well-organized athlete is going to do well after that.

— Eliud Kipchoge, marathon record holder with a time of 2:01:39 hours

Building a product is like running a marathon. Only the most prepared and well-organized keep going when the going gets tough. Let’s unpack some of the strategies that runners employ and use those to boost our product journey.

Runners focus on improving their personal bests. “My finish time is less than what I want. I’ll include more strength training and hill running to go faster.” In the process, they get faster than others but that’s not because they are trying to beat others. They are just trying to beat themselves.

If you are a bank and a rival bank launches a chatbot, you don’t have to follow suit.

Do what a runner would do — Reflect on where your product needs to improve.

For example, if bank customers are complaining about unexpected charges, it’s a sign of distrust. Work on making all fees transparent, and scrap them where possible. If you are seeing a 50% drop-off during sign-up, fix the form.

I know someone who decided to run a marathon with just two weeks of preparation. Her experience was excruciating and unrewarding. That was the only race she ever ran. Runners realise that it takes planning and months of training to run a marathon. You just can’t wing it!

Random runs in product design include ‘jazzing’ things up, bringing in the ‘wow’ factor, basically anything that is more of a facade and doesn’t make any difference on how the customer uses the product.

Do what a runner would do — Plan for progress.

Instead of asking “how to improve the design of our product”, the right question is “how can our product help customers make the most progress?”

Carve out an optimal linear path with milestones, so that at any point customers know where they stand in their action plan and where they are headed. Design your product for the high-expectation customer to help them make the most progress.

Runners realise that they need to consistently push themselves a little bit harder each time they train. Push too little and you don’t progress. Push too much and you burn out. Interval training involves short bursts of high-intensity runs alternating with rest or low-intensity runs. It helps runners to do more work in a shorter period of time, and it’s more comfortable than an entire workout at high intensity, which leads to burnout.

I use a website to order dog food. It’s not the most efficient experience, but it gets the job done. A few screens are redundant; a few inputs are confusing. They presumably keep getting feedback from customers to improve their website, because they go through a design revamp every few months. Each time there’s a new look and feel, which customers don’t care about. What matters is that the flow of finding the right item and placing the order changes each time, and the new site brings its own set of issues. It doesn’t have to be all or nothing.

Do what a runner would do — Try shorter bursts for faster results.

Instead of spending months on a complete design revamp, sometimes it’s wise to quickly fix the low-hanging fruits for instant results.

Fixing broken fields and links and getting rid of redundancies is a better deal for customers than a complete design revamp that brings its own set of problems each time.

I have been running for the past 5 years but I never ran with friends before. Recently, I joined a group of marathon pacers who are training themselves for upcoming runs. In case you don’t know, marathon pacers are the people in a race running with balloons marked with a finish time. They help other runners finish their run in specific times. Running in a group is so much easier and more fun.

Each group has runners of different ages, sexes, and experience levels. The one thing that binds them to a group is their pace. On their runs, one runner is responsible for playing music, another keeps track of the route, and another makes sure the team stays hydrated. Everyone has a different role, but they are all aiming at the same finish time.

The entire journey from idea to market is full of hustle. The hustle becomes slightly easier and a lot more exciting if you are a team. A team with a mix of hard and soft skills — one person good at building, another good at selling. Someone with a lot of experience, another who’s a newbie. A team with varied skill sets but a common vision for the product.

A team with a common vision runs together, keeps each other pumped, and finishes strong. Whether you start as friends or not, you cross the finish line as friends.

Thanks for reading! I am Prachi, co-founder of Bayzil, a product strategy and design studio based in Singapore. We love to talk and hear about the latest and best in product strategy, design, and content. I would love to hear your thoughts in the comments, and claps if you liked what you read.