the-(unofficial)-apple-archive

Dedicated to the unsung studio designers, copywriters, producers, ADs, CDs, and everyone else who creates wonderful things.

Dedicated to those who stayed up late and got up early to get on the family iMac to recreate event slides in Keynote.

Thank you.

welcome-to-apple:-a-one-party-state

In June 2017, about 100 employees of Apple Inc gathered at the company’s headquarters at 1 Infinite Loop, Cupertino, to hear a presentation designed to scare them witless.

Staffed by former members of the National Security Agency and the US military, Apple’s global security team played video messages from top executives warning attendees never to leak information.

“This has become a big deal for Tim,” Greg Joswiak, Apple’s vice president of marketing, said at the time. “I have faith deep in my soul that if we hire smart people they’re gonna think about this, they’re gonna understand this, and ultimately they’re gonna do the right thing, and that’s to keep their mouth shut.”

A secretive culture – bordering on paranoia – was first fostered by Steve Jobs, the co-founder of Apple, and then by his successor Tim Cook, who took over in 2011.

Apple employees typically sign several non-disclosure agreements (NDAs) per year, use codenames to refer to projects, and are locked out of meetings if they fail to obtain the appropriate documentation, former workers told us.

“Secrecy is everything at Apple,” one ex-staffer said. “Many employees don’t like Apple Park [the company’s new headquarters] because it has very few private offices. Confidentiality on projects and the ability to step behind a closed door is vital.”

Another recent ex-employee said that security was weaponised across the company, with internal blogs boasting about the number of employees caught leaking and NDAs required even for non-sensitive or mundane projects. The employee described how they were once asked to read a negative story about the company and then identify the Apple insider suspected of leaking information.

Since becoming chief executive, Cook has doubled down on security, catching 29 leakers in 2017 alone, according to an internal memo leaked to Bloomberg in 2018 (the company does not publicly disclose such figures).

Yet Cook has also radically shifted Apple’s priorities, sometimes in directions that his predecessor would not have understood or condoned. Understanding what has changed at the company in the 3,015 days since Jobs died of pancreatic cancer is arguably more critical to understanding Apple in 2020 than identifying what has remained the same.

Since 2011, Cook, a quietly spoken 59-year-old from Alabama, has built Apple into the largest tech company in the world, with a market valuation of more than $1 trillion. More than two thirds of that value was accumulated after Jobs’ death.

Steve Jobs (left) and Steve Wozniak in 1977, launching the Apple II computer

He has achieved such stellar growth partly through the sale of iterative updates to Apple’s flagship iPhone and the launch of new products such as the Apple Watch. Even though iPhones continue to drive more than half of Apple’s revenue, sales are sputtering as the smartphone market reaches maturity. So Cook is spearheading the company’s biggest shift in more than a decade: a switch away from making devices to providing services that touch almost every part of our lives.

From Apple TV to Apple Music, from Apple Pay to Apple News+, Cook’s company is now the gateway through which millions of us live our lives. We watch movies, pay for groceries, read the news, go to the gym, adjust our heating and monitor our hearts through Apple services, now the company’s fastest-growing division.

Living within this carefully curated ecosystem, soon to be bolstered by new augmented reality products, the company’s 1.4 billion active users have become less like customers and more like citizens. We no longer just live our lives on Apple’s phones, but in them.

Apple’s market valuation is roughly equal to the national net worth of Denmark, the 28th wealthiest country in the world. It has as many users as China has citizens. Its leader has a close relationship with the US president and other heads of state. In all but name, this is a superpower, wielding profound influence over our lives, our politics and our culture.

That’s why Tortoise has decided to report on Apple as if it is a country: the first instalment in a year-long project we are calling Tech Nations, which will cover all the main technology giants. Here, we’ll examine Apple’s economy, its foreign policy and its cultural affairs. We’ll dig into its leadership, its security operation and its lobbying spend. We’ll identify the executives likely to succeed Cook, and the areas where Apple is falling behind in the global tech race.

As Jobs might have put it, we’re trying to “think different” about the small computer company founded in a Los Altos garage back in 1976.


We have learned:

  • Apple is now unmistakably Tim Cook’s company. The 59-year-old has built an organisation radically different to the one left behind by its founding father, Steve Jobs.
  • Cook’s Apple resembles a liberal China. It is devoted to enabling individual creative expression, but on its terms: it has become a highly centralised, hierarchical and secretive state.
  • Cook’s Apple is defined by a corporate vision, rather than product innovation. Apple has a written constitution. Since Cook assumed power, he has fundamentally changed how Apple deals with suppliers, acts on the environment, engages in politics, produces and promotes cultural content, all while making privacy and security part of the brand.
  • Apple’s emphasis on privacy was dramatised by its refusal, in early 2016, to help the FBI unlock the iPhone of one of the San Bernardino terrorists. Back then, this put the company on a collision course with the state, prompting the question: who sets the rules? Now, after Facebook’s involvement in the Cambridge Analytica scandal, Apple is on the same side as lawmakers who increasingly want to deal with privacy breaches.
  • Apple’s move towards services is risky. Former employees, experts and external partners told us that the company’s focus on excellence was often not obvious in software-based products, such as Apple Music or Apple TV+. The departure of Sir Jony Ive has given Eddy Cue, the head of Apple’s services division, and Jeff Williams, its chief operating officer, increased prominence.
  • To make Apple TV+ a “Netflix killer”, the firm is entering into “crazy” deals which have helped inflate the price paid to actors and directors, studio insiders said. Two executives told us that the stars of Apple’s flagship Apple TV+ show, The Morning Show, Jennifer Aniston and Reese Witherspoon, had been offered the entire rights back after around 10 years. Netflix, by contrast, keeps rights for life.
  • Autonomous cars and augmented reality will form a big part of Apple’s future. Apple patents we’ve seen envisage facial recognition data combined with in-car software identifying pedestrians by name – a development which could provoke privacy concerns.
  • Apple is falling behind in the race to harness artificial intelligence. Apple neither collects as much data as competitors such as Google and Facebook, nor has the resources to exploit it as effectively. It is trying to change this picture by hiring top executives and buying up at least one AI company per year since 2014.

The People’s Republic of Cupertino

It may well be the most expensive metaphor ever built. In Cupertino, California, several sweaty miles away from Santa Cruz, is Apple Park, the headquarters of the tech company that has its logo on a billion iPhones. There is an actual park there, where employees can walk or cycle between the meticulous groves of apple trees and around a circular pool, although anyone else would be lucky to see it. It is surrounded by a great, glass, multi-storey ring of offices created by the architects Foster + Partners with help from Apple’s outgoing chief design officer, Jony Ive. The ring is a mile in circumference. It cost $5 billion.

Apple can afford such architectural extravagance. It is the company that stood against the tide and turned it, helping us all to realise that computers could be more than just beige boxes, that headphones could be white, that telephones could do everything. Its decisions have defined our digital – and daily – lives.

The Steve Jobs Theater, at the heart of Apple Park

But what is Apple Park a metaphor for? High-minded Apple enthusiasts might say that the ring represents an endless cosmic loop. Or perhaps it is a planet-scale equivalent of the circular home button on earlier models of iPhone. Visiting aliens can click it from space – and go home.

The truth, however, is that it represents what Apple has become: a secret garden with tremendously high walls. Most people who try to peer over the edge are summarily pushed back. Apple is a part of the world but also apart from it. It is Maoism for individualists.

The development of the Macintosh computer, released in 1984, is a revealing origin story for Apple. Jobs had assembled a crew of “pirates” to build a computer as he wanted it, which meant attractive design, a symbiosis between hardware and software, and, most of all, control – of the consumer, by him.

In a hundred small ways, he made the Mac immutable and inescapable. Its elegant contours were actually hard borders, held together by special screws so that bedroom hobbyists couldn’t get inside with their regular screwdrivers. Requests to license out the operating system (so that it could be used on other computers) were refused or ignored. The Mac would be an ecosystem unto itself. People would have to buy into it entirely, or not at all.

Jobs was forced out of Apple for his hubris; then reinstalled in 1997, when the company was on the verge of bankruptcy. With Jony Ive at his side, and until his death from pancreatic cancer in 2011, he introduced a series of products that were like the original Mac in spirit yet incomparably more successful: the iMac, iPod, iPhone and iPad.

Jony Ive (left) and Tim Cook inspect the iPhone XR during an event at the Steve Jobs Theater in September 2018

Against that record, it is easy to dismiss Jobs’ successor, Tim Cook, as a button-down bureaucrat. Whereas Jobs’ Apple was about an idea – Think Different – Cook’s, his critics say, is more about a number, a market valuation of $1 trillion or more. Those critics also argue that the new Apple is less innovative as a result. They point at the Apple Pencil, a stylus introduced in 2015 to supplement the iPad, and set it against one of Jobs’s typically pugnacious speeches from 2007. “Who wants a stylus?” Jobs asked then. “Nobody wants a stylus.”

Yet Cook has made some defining interventions. Other companies, such as Facebook and Google, are happy for a sort of chaos to prevail: an online world that’s sprawling, messy and mostly unregulated, where data can be plucked from the air and passed on to advertisers. Cook is trying to create a refuge: a unified world of hardware, software and services, all under Apple’s flag, where citizens can expect their data to remain their own.

Two of the company’s most significant recent releases are Apple Arcade, a subscription gaming service for iOS devices, and Apple TV+, a Netflix competitor. Executives such as Eddy Cue and Jennifer Bailey, both of whom work on the services side of the company, are now regarded as being as influential as the departing Ive once was. Much like China, Apple is shifting from being a manufacturing economy to a service-based one.

Jennifer Bailey, one of the people leading Apple into the realm of services

At the same time, Cook is doubling down on privacy and security as a differentiator from his competitors. That shift was most obvious in early 2016, when the company refused to assist the FBI in unlocking the iPhone of one of the San Bernardino terrorists. It is clear, too, in the company’s latest advertisements, which are created by an in-house team and a dedicated group at the external agency TBWA. “These are private things. Personal things,” says one recent video promoting the iPhone and its data protections. “And they should belong to you. Simple as that.”

There is a sense of necessity, even of wisdom, about these shifts. After all, consumers have become less willing to pay out for iteratively improved phones, so new ways of making money from the phones they already have must be found. The idea is to expand the Apple ecosystem so far that consumers never need to – or never can – leave it.

But this is undeniably risky terrain for Apple and Cook. The economics of services, and particularly of content creation, are very different from those of hardware. This was demonstrated by the almost simultaneous launches of Apple TV+ and its competitor service Disney+ in late 2019. Apple spent a lot of money on its shows, hiring famous actors and filmmakers, but the critical and popular reception has been lukewarm at best. Disney+, having spent no less money, was also able to call upon a wide range of old favourites and newer franchises, such as The Simpsons, Star Wars and Marvel’s cinematic universe – and is succeeding accordingly.

Apple’s traditional approach has been to make products that feel distinctively Apple and that are, at least in part, desirable because of that. But distinctiveness and desirability are harder to pin down when it comes to the shows that are being made for Apple TV+. What can Apple do that Netflix or the BBC cannot? Can it be different, or, for the first time, will it just be the same?

“I honestly don’t know how they will distinguish themselves from Netflix,” one studio executive told us. “When Apple TV+ was launched, it was surprisingly light on content. There was no archive, no back catalogue.”

And there are other risks facing Apple, many of which are of Cook’s own making. Its emphasis on privacy, while laudable, lays it open to the charge of hypocrisy: third-party iPhone apps have already been found spreading data in ways that contravene Apple’s declared ideals. Meanwhile, its main manufacturing base is a country – yes, China – that has become the frontline in an ongoing trade war, and a war over free speech and censorship.

In China, too, Apple is being outpaced by companies like Huawei – and this has an effect on its bottom line. Although Apple’s sales revenues are still monumental, at $260 billion in the year ending September 2019, they are lower than those achieved in the previous year.

When Apple was founded, it was a riposte to the dominant, mainframe thinking of the grande dame of American computing, IBM. But now, over 40 years later, it is a titan itself; it can no longer rely on or represent the shock of the new. Life in Cook’s empire is certainly more prosperous now, but it is also less certain. Behind that futuristic-looking ring in Cupertino is the biggest secret of all: this is a company in the grip of a mid-life crisis.

Credits

Reporters: Peter Hoskin and Alexi Mostrous

Editors: Basia Cummings, David Taylor, James Harding

Graphics and design: Chris Newell

Additional research: Ella Hollowood

Picture editor: Jon Jones

All pictures: Getty Images

how-to-create-the-apple-fifth-avenue-cube-in-webgl

Apple Fifth Avenue Cube featured image


In September 2019 Apple reopened the doors of its historic store on Fifth Avenue and, to celebrate the special event, it made a landing page with a really neat animation of a cube made of glass. You can see the original animation in this video.

What caught my attention is the way they played with the famous glass cube to make the announcement.

As a Creative Technologist I constantly experiment and study the potential of web technologies, and I thought it might be interesting to try to replicate this using WebGL.

In this tutorial I’m going to explain step-by-step the techniques I used to recreate the animation.

You will need an intermediate level of knowledge of WebGL. I will omit some parts of the code for brevity and assume you already know how to set up a WebGL application. The techniques I’m going to show are translatable to any WebGL library / framework.

Since WebGL APIs are very verbose, I decided to go with Regl for my experiment:

Regl is a new functional abstraction for WebGL. Using Regl is easier than writing raw WebGL code because you don’t need to manage state or binding; it’s also lighter and faster and has less overhead than many existing 3d frameworks.
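
To give a sense of what that looks like in practice, here is a minimal Regl setup – a sketch of the boilerplate this tutorial assumes, not code from the demo itself:

const createREGL = require('regl')

// creates a WebGL context on a full-screen canvas
const regl = createREGL()

// a Regl "command" bundles shaders, attributes and uniforms together,
// so there is no manual GL state or binding to manage
const drawTriangle = regl({
  vert: `
    attribute vec2 a_position;
    void main() {
      gl_Position = vec4(a_position, 0.0, 1.0);
    }
  `,
  frag: `
    precision mediump float;
    uniform vec4 u_color;
    void main() {
      gl_FragColor = u_color;
    }
  `,
  attributes: {
    a_position: [[0, 1], [-1, -1], [1, -1]],
  },
  uniforms: {
    u_color: [0.15, 0.58, 0.96, 1.0],
  },
  count: 3,
})

regl.frame(() => {
  regl.clear({ color: [0, 0, 0, 1], depth: 1 })
  drawTriangle()
})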

Drawing the cube

The first step is to create the program to draw the cube.

Since the shape we’re going to create is a prism made of glass, we must guarantee the following characteristics:

  • It must be transparent
  • The internal faces must reflect the internal content
  • The edges must distort the internal content

Front and back faces

In order to get what we want, at render time we’ll draw the shape in two passes:

  1. In the first pass we’ll draw only the back faces with the internal reflection.
  2. In the second pass we’ll draw the front faces with the content after being masked and distorted at the edges.

Drawing the shape in two passes simply means calling the WebGL program twice, each time with a different configuration. WebGL has the concept of front-facing and back-facing triangles, which gives us the ability to decide what to draw by turning on the face-culling feature.

With that feature turned on, WebGL defaults to “culling” back facing triangles. “Culling” in this case is a fancy word for “not drawing”.

WebGL Fundamentals

// draw front faces
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.BACK);

// draw back faces
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.FRONT);

Now that the program is set up, let’s start rendering the cube.

Coloured borders

What we want to obtain is a transparent shape with coloured borders. Starting from a flat white cube, we’ll first add the rainbow color and then mask it with the borders:

First of all, let’s create the GLSL function that returns the rainbow:

const float PI2 = 6.28318530718;

vec4 radialRainbow(vec2 st, float tick) {
  vec2 toCenter = vec2(0.5) - st;
  float angle = mod((atan(toCenter.y, toCenter.x) / PI2) + 0.5 + sin(tick), 1.0);

  // colors
  vec4 a = vec4(0.15, 0.58, 0.96, 1.0);
  vec4 b = vec4(0.29, 1.00, 0.55, 1.0);
  vec4 c = vec4(1.00, 0.0, 0.85, 1.0);
  vec4 d = vec4(0.92, 0.20, 0.14, 1.0);
  vec4 e = vec4(1.00, 0.96, 0.32, 1.0);

  float step = 1.0 / 10.0;

  vec4 color = a;

  color = mix(color, b, smoothstep(step * 1.0, step * 2.0, angle));
  color = mix(color, a, smoothstep(step * 2.0, step * 3.0, angle));
  color = mix(color, b, smoothstep(step * 3.0, step * 4.0, angle));
  color = mix(color, c, smoothstep(step * 4.0, step * 5.0, angle));
  color = mix(color, d, smoothstep(step * 5.0, step * 6.0, angle));
  color = mix(color, c, smoothstep(step * 6.0, step * 7.0, angle));
  color = mix(color, d, smoothstep(step * 7.0, step * 8.0, angle));
  color = mix(color, e, smoothstep(step * 8.0, step * 9.0, angle));
  color = mix(color, a, smoothstep(step * 9.0, step * 10.0, angle));

  return color;
}

#pragma glslify: export(radialRainbow);

Glslify is a node.js-style module system that lets us split GLSL code into modules.

https://github.com/glslify/glslify

Before going ahead, let’s talk a bit about gl_FragCoord.

Available only in the fragment language, gl_FragCoord is an input variable that contains the window canvas relative coordinate (x, y, z, 1/w) values for the fragment.

khronos.org

As you may have noticed, the function radialRainbow takes a variable called st as its first parameter: the pixel coordinates relative to the canvas, which, like UVs, range between 0 and 1. The variable st is the result of dividing gl_FragCoord by the resolution:

/**
 * gl_FragCoord: pixel coordinates
 * u_resolution: the resolution of our canvas
 */
vec2 st = gl_FragCoord.xy / u_resolution;

The following image explains the difference between using UVs and st.

Once we’re able to render the radial gradient, let’s create the function to get the borders:

float borders(vec2 uv, float strokeWidth) {
  vec2 borderBottomLeft = smoothstep(vec2(0.0), vec2(strokeWidth), uv);

  vec2 borderTopRight = smoothstep(vec2(0.0), vec2(strokeWidth), 1.0 - uv);

  return 1.0 - borderBottomLeft.x * borderBottomLeft.y * borderTopRight.x * borderTopRight.y;
}

#pragma glslify: export(borders);

And then our final fragment shader:

precision mediump float;

uniform vec2 u_resolution;
uniform float u_tick;

varying vec2 v_uv;
varying float v_depth;

#pragma glslify: borders = require(borders.glsl);
#pragma glslify: radialRainbow = require(radial-rainbow.glsl);

void main() {
  // screen coordinates
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 bordersColor = radialRainbow(st, u_tick);

  // opacity factor based on the z value
  float depth = clamp(smoothstep(-1.0, 1.0, v_depth), 0.6, 0.9);

  bordersColor *= vec4(borders(v_uv, 0.011)) * depth;

  gl_FragColor = bordersColor;
}

Drawing the content

Please note that the Apple logo is a trademark of Apple Inc., registered in the U.S. and other countries. We are only using it here for demonstration purposes.

Now that we have the cube, it’s time to add the Apple logo and all texts.

If you notice, the content is not only rendered inside the cube, but also on the three back faces as a reflection – which means rendering it four times. In order to keep performance high, we’ll draw it off-screen only once per frame and then reuse it in the various fragment shaders.

In WebGL we can do this thanks to FBOs:

The frame buffer object architecture (FBO) is an extension to OpenGL for doing flexible off-screen rendering, including rendering to a texture. By capturing images that would normally be drawn to the screen, it can be used to implement a large variety of image filters, and post-processing effects.

Wikipedia

In Regl it’s pretty simple to play with FBOs:

...

// here we'll put the logo and the texts
const textures = [
  ...
]

// we create the FBO
const contentFbo = regl.framebuffer()

// animate is executed at render time
const animate = ({viewportWidth, viewportHeight}) => {
  contentFbo.resize(viewportWidth, viewportHeight)

  // we tell WebGL to render off-screen, inside the FBO
  contentFbo.use(() => {
    /**
     * – Content program
     * It'll run once for each texture
     */
    content({
      textures
    })
  })

  /**
   * – Cube program
   * It'll run twice, once for the back faces and once for front faces
   * Together with front faces we'll render the content as well
   */
  cube([
    {
      pass: 1,
      cullFace: 'FRONT',
    },
    {
      pass: 2,
      cullFace: 'BACK',
      texture: contentFbo, // we pass the FBO as a normal texture
    },
  ])
}

regl.frame(animate)

And then update the cube fragment shader to render the content:

precision mediump float;

uniform vec2 u_resolution;
uniform float u_tick;
uniform int u_pass;
uniform sampler2D u_texture;

varying vec2 v_uv;
varying float v_depth;

#pragma glslify: borders = require(borders.glsl);
#pragma glslify: radialRainbow = require(radial-rainbow.glsl);

void main() {
  // screen coordinates
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 texture;
  vec4 bordersColor = radialRainbow(st, u_tick);

  // opacity factor based on the z value
  float depth = clamp(smoothstep(-1.0, 1.0, v_depth), 0.6, 0.9);

  bordersColor *= vec4(borders(v_uv, 0.011)) * depth;

  if (u_pass == 2) {
    texture = texture2D(u_texture, st);
  }

  gl_FragColor = texture + bordersColor;
}

Masking

In the Apple animation every cube face shows a different texture, which means we have to create a special mask that follows the cube’s rotation.

We’ll render the information needed to mask the textures inside an FBO, which we’ll pass to the content program.

To each texture let’s associate a different maskId – every ID corresponds to a color that we’ll use as test-data:

const textures = [
  {
    texture: logoTexture,
    maskId: 1,
  },
  {
    texture: logoTexture,
    maskId: 2,
  },
  {
    texture: logoTexture,
    maskId: 3,
  },
  {
    texture: text1Texture,
    maskId: 4,
  },
  {
    texture: text2Texture,
    maskId: 5,
  },
]

To make each maskId correspond to a colour, we just have to convert it to binary and then read it as RGB:

MaskID 1 => [0, 0, 1] => Blue
MaskID 2 => [0, 1, 0] => Lime
MaskID 3 => [0, 1, 1] => Cyan
MaskID 4 => [1, 0, 0] => Red
MaskID 5 => [1, 0, 1] => Magenta
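
To make the conversion concrete, here is a small helper – my own sketch, not part of the original demo – that turns a maskId into its binary RGB triple:

// convert a numeric maskId into its binary RGB representation
const maskIdToColor = (maskId) => [
  (maskId >> 2) & 1, // red   – most significant bit
  (maskId >> 1) & 1, // green – middle bit
  maskId & 1,        // blue  – least significant bit
]

console.log(maskIdToColor(2)) // [0, 1, 0] => Lime
console.log(maskIdToColor(5)) // [1, 0, 1] => Magenta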

The mask will be nothing but our cube with the faces filled with one of the colours shown above – obviously in this case we just need to draw the front faces:

...

maskFbo.use(() => {
  cubeMask([
    {
      cullFace: 'BACK',
      colorFaces: [
        [0, 1, 1], // front face => mask 3
        [0, 0, 1], // right face => mask 1
        [0, 1, 0], // back face => mask 2
        [0, 1, 1], // left face => mask 3
        [1, 0, 0], // top face => mask 4
        [1, 0, 1], // bottom face => mask 5
      ]
    },
  ])
});

contentFbo.use(() => {
  content({
    textures,
    mask: maskFbo
  })
})

...

Our mask will look like this:

Now that we have the mask available inside the fragment of the content program, let’s write down the test:

precision mediump float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;
uniform int u_maskId;
uniform sampler2D u_mask;

varying vec2 v_uv;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 texture = texture2D(u_texture, v_uv);

  vec4 mask = texture2D(u_mask, st);

  // convert the mask color from binary (rgb) to decimal
  int maskId = int(mask.r * 4.0 + mask.g * 2.0 + mask.b * 1.0);

  // if the test passes then draw the texture
  if (maskId == u_maskId) {
    gl_FragColor = texture;
  } else {
    discard;
  }
}

Distortion

The distortion at the edges is the characteristic that gives the feeling of a glass material.

The effect is achieved by simply shifting the pixels near the edges towards the center of each face – the following video shows how it works:

For each pixel to move we need two pieces of information:

  1. How much to move the pixel
  2. The direction in which we want to move the pixel

These two pieces of information are contained inside the Displacement Map which, as before for the mask, we’ll store in an FBO that we’ll pass to the content program:

...

displacementFbo.use(() => {
  cubeDisplacement([
    {
      cullFace: 'BACK'
    },
  ])
});

contentFbo.use(() => {
  content({
    textures,
    mask: maskFbo,
    displacement: displacementFbo
  })
})

...

The displacement map we’re going to draw will look like this:

Let’s see in detail how it’s made.

The green channel is the length – that is, how much to move the pixel: the greener, the greater the displacement. Since the distortion must be present only at the edges, we just have to draw a green frame on each face.

To get the green frame we just have to reuse the borders function and put the result in the gl_FragColor green channel:

precision mediump float;

varying vec2 v_uv;

#pragma glslify: borders = require(borders.glsl);

void main() {
  // Green channel – how much to move the pixel
  float length = borders(v_uv, 0.028) + borders(v_uv, 0.06) * 0.3;

  gl_FragColor = vec4(0.0, length, 0.0, 1.0);
}

The red channel is the direction, whose value encodes the angle around the face’s centre, normalised to the 0–1 range. Finding this value is trickier because we need the position of each point relative to the world – since our cube rotates, the UVs follow it and we therefore lose any fixed reference. In order to compute the position of every pixel in relation to the center we need two varying variables from the vertex shader:

  1. v_point: the world position of the current pixel.
  2. v_center: the world position of the center of the face.

The vertex shader:

precision mediump float;

attribute vec3 a_position;
attribute vec3 a_center;
attribute vec2 a_uv;

uniform mat4 u_projection;
uniform mat4 u_view;
uniform mat4 u_world;

varying vec3 v_center;
varying vec3 v_point;
varying vec2 v_uv;

void main() {
  vec4 position = u_projection * u_view * u_world * vec4(a_position, 1.0);
  vec4 center = u_projection * u_view * u_world * vec4(a_center, 1.0);

  v_point = position.xyz;
  v_center = center.xyz;
  v_uv = a_uv;

  gl_Position = position;
}

At this point, in the fragment shader, we just have to find the distance from the center, calculate the relative angle and put the result in the gl_FragColor red channel – here’s the updated shader:

precision mediump float;

varying vec3 v_center;
varying vec3 v_point;
varying vec2 v_uv;

const float PI2 = 6.283185307179586;

#pragma glslify: borders = require(borders.glsl);

void main() {
  // Red channel – which direction to move the pixel
  vec2 toCenter = v_center.xy - v_point.xy;
  float direction = (atan(toCenter.y, toCenter.x) / PI2) + 0.5;

  // Green channel – how much to move the pixel
  float length = borders(v_uv, 0.028) + borders(v_uv, 0.06) * 0.3;

  gl_FragColor = vec4(direction, length, 0.0, 1.0);
}

Now that we have our displacement map, let’s update the content fragment shader:

precision mediump float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;
uniform sampler2D u_displacement;
uniform int u_maskId;
uniform sampler2D u_mask;

varying vec2 v_uv;

const float PI2 = 6.283185307179586;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 displacement = texture2D(u_displacement, st);
  // get the direction by taking the displacement red channel and converting it into a vec2
  vec2 direction = vec2(cos(displacement.r * PI2), sin(displacement.r * PI2));
  // get the length by taking the displacement green channel
  float length = displacement.g;

  vec2 newUv = v_uv;

  // calculate the new uvs
  newUv.x += (length * 0.07) * direction.x;
  newUv.y += (length * 0.07) * direction.y;

  vec4 texture = texture2D(u_texture, newUv);

  vec4 mask = texture2D(u_mask, st);

  // convert the mask color from binary (rgb) to decimal
  int maskId = int(mask.r * 4.0 + mask.g * 2.0 + mask.b * 1.0);

  // if the test passes then draw the texture
  if (maskId == u_maskId) {
    gl_FragColor = texture;
  } else {
    discard;
  }
}

Reflection

Since reflection is quite a complex topic, I’ll just give you a quick introduction on how it works so that you can more easily understand the source I shared.

Before continuing, it’s necessary to understand the concept of camera in WebGL. The camera is nothing but the combination of two matrices: the view and projection matrix.

The projection matrix is used to convert world space coordinates into clip space coordinates. A commonly used projection matrix, the perspective matrix, is used to mimic the effects of a typical camera serving as the stand-in for the viewer in the 3D virtual world. The view matrix is responsible for moving the objects in the scene to simulate the position of the camera being changed.

developer.mozilla.org

I suggest that you also get familiar with these concepts before we dig deeper.

In a 3D environment, reflections are obtained by creating a camera for each reflective surface and placing it according to the position of the viewer – that is, the eye of the main camera.

In our case, every face of the cube is a reflective surface, which means we need six different cameras whose positions depend on the viewer and the cube’s rotation.
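
As a rough sketch of how those per-face cameras can be built – using the gl-mat4 library and the standard cube-map face orientations; the helper below is my own assumption, not the demo’s actual code:

const mat4 = require('gl-mat4')

// standard cube-map face orientations: a look direction and an up vector
const faces = [
  { dir: [ 1,  0,  0], up: [0, -1,  0] }, // +X
  { dir: [-1,  0,  0], up: [0, -1,  0] }, // -X
  { dir: [ 0,  1,  0], up: [0,  0,  1] }, // +Y
  { dir: [ 0, -1,  0], up: [0,  0, -1] }, // -Y
  { dir: [ 0,  0,  1], up: [0, -1,  0] }, // +Z
  { dir: [ 0,  0, -1], up: [0, -1,  0] }, // -Z
]

// builds the view matrix of the reflection camera for one face;
// `eye` depends on the viewer's position and the cube's rotation
const viewMatrixForFace = (eye, face) => {
  const target = [
    eye[0] + face.dir[0],
    eye[1] + face.dir[1],
    eye[2] + face.dir[2],
  ]
  return mat4.lookAt(mat4.create(), eye, target, face.up)
}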


Each camera generates a texture for one inner face of the cube. Instead of creating a separate framebuffer for every face, we can use the cube-mapping technique.

Another kind of texture is a cubemap. It consists of 6 textures representing the 6 faces of a cube. Instead of the traditional texture coordinates that have 2 dimensions, a cubemap uses a normal, in other words a 3D direction. Depending on the direction the normal points one of the 6 faces of the cube is selected and then within that face the pixels are sampled to produce a color.

WebGL Fundamentals

So we just have to store what the six cameras “see” in the right cell – this is how our cubemap will look:

Let’s update our animate function by adding the reflection:

...

// this is a normal FBO
const contentFbo = regl.framebuffer()

// this is a cube FBO, meaning it is composed of 6 textures
const reflectionFbo = regl.framebufferCube(1024)

// animate is executed at render time
const animate = ({viewportWidth, viewportHeight}) => {
  contentFbo.resize(viewportWidth, viewportHeight)

  contentFbo.use(() => {
    ...
  })

  /**
   * – Reflection program
   * we'll iterate 6 times over the reflectionFbo, drawing the result
   * of each camera into the corresponding face
   */
  reflection({
    reflectionFbo,
    cameraConfig,
    texture: contentFbo
  })

  /**
   * – Cube program
   * with the back faces we'll render the reflection as well
   */
  cube([
    {
      pass: 1,
      cullFace: 'FRONT',
      reflection: reflectionFbo,
    },
    {
      pass: 2,
      cullFace: 'BACK',
      texture: contentFbo,
    },
  ])
}

regl.frame(animate)

And then update the cube fragment shader.

In the fragment shader we need to use a samplerCube instead of a sampler2D and use textureCube instead of texture2D. textureCube takes a vec3 direction so we pass the normalized normal. Since the normal is a varying and will be interpolated we need to normalize it.

WebGL Fundamentals

precision mediump float;

uniform vec2 u_resolution;
uniform float u_tick;
uniform int u_pass;
uniform sampler2D u_texture;
uniform samplerCube u_reflection;

varying vec2 v_uv;
varying float v_depth;
varying vec3 v_normal;

#pragma glslify: borders = require(borders.glsl);
#pragma glslify: radialRainbow = require(radial-rainbow.glsl);

void main() {
  // screen coordinates
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 texture;
  vec4 bordersColor = radialRainbow(st, u_tick);

  // opacity factor based on the z value
  float depth = clamp(smoothstep(-1.0, 1.0, v_depth), 0.6, 0.9);

  bordersColor *= vec4(borders(v_uv, 0.011)) * depth;

  // if u_pass is 1, we're drawing back faces
  if (u_pass == 1) {
    vec3 normal = normalize(v_normal);
    texture = textureCube(u_reflection, normal);
  }

  // if u_pass is 2, we're drawing front faces
  if (u_pass == 2) {
    texture = texture2D(u_texture, st);
  }

  gl_FragColor = texture + bordersColor;
}

Conclusion

This article may give you a general idea of the techniques I used to replicate the Apple animation. If you want to learn more, I suggest you download the source and have a look at how it works. If you have any questions, feel free to ask me on Twitter (@lorenzocadamuro); hope you have enjoyed it!

apple’s-latest-itp-updates:-what-marketers-need-to-know

Apple’s WebKit team is out with another update to Intelligent Tracking Prevention (ITP) for Safari that targets potential tracking workarounds.

In a blog post titled “Preventing Tracking Prevention Tracking,” WebKit’s John Wilander laid out three updates to fight detection of “which content and website data is treated as capable of tracking” and “improve tracking prevention in general.”

First, some background on Safari, ITP and cookie blocking. Safari has long restricted entities from setting third-party cookies if they don’t already have first-party relationships with users. Then ITP came on the scene in 2017 to identify and limit cookies of any type that have the ability to track users across sites. This severely limits cookie pools for audience targeting, including retargeting campaigns. Furthermore, it limits analytics and attribution data from Safari, which means marketers lose visibility into how their campaigns are performing with typically high-value iOS users.

If you thought Safari and ITP’s previous iterations had pretty well done in third-party cookies, you’d be right, but there are more holes to plug. The updates below apply to Safari on iOS and iPadOS 13.3, Safari 13.0.3 on macOS Catalina, Mojave, and High Sierra.

Cross-site request referrer headers

The change. “ITP now downgrades all cross-site request referrer headers to just the page’s origin. Previously, this was only done for cross-site requests to classified domains.”

What is a cross-site request referrer header? When a user loads a web page with embedded content from another domain, as in a tracking pixel, the referrer header sent to the tracking domain will no longer contain the full web address of the host page, only the domain name. That used to be done only for sites classified as trackers.

What the change means. Of the updates, this is the one that will have analytics implications. If a user loads a page from one web site with assets embedded from another, Safari will strip out the URL details contained in the request referrer header.

This means analytics will only show the referring domain, not the referring page.

Example. A user loads a page with assets from https://images.example via https://store.example/baby/strollers/deluxe-stroller-navy-blue.html. In Safari, the referrer header value will not contain that entire URL path. It will only include the root domain https://store.example/.

In this case, analytics provided by https://images.example would only record https://store.example as the referrer and not the full referrer path of /baby/strollers/deluxe-stroller-navy-blue.html.

(More) third-party cookie blocking

The change. “ITP will now block all third-party requests from seeing their cookies, regardless of the classification status of the third-party domain, unless the first-party website has already received user interaction.”

What the change means. This is really aimed at preventing attackers from “seeing their cookies.” It is minor from a marketer’s perspective but further reinforces the need for first-party relationships with users. If you have widgets placed on other sites, it doesn’t matter what your domain classification is, you will need to have a prior first-party relationship with a user in order to see your cookies on those sites. This has been the case in most contexts already.

Example. A user clicks on a YouTube video embedded on a news site. If that user has not previously logged into or visited and accepted cookies at YouTube.com, YouTube will not be able to track engagement from that site.

If you’re not a heavily trafficked site like YouTube and count on tracking from widget embeds, you have little to no visibility into Safari users.

Storage Access API update

The change. “As of this ITP update, the Storage Access API takes Safari’s cookie policy into consideration when handling calls to document.hasStorageAccess().

Now a call to document.hasStorageAccess() may resolve with false for one of two reasons:

  1. Because ITP is blocking cookies and explicit storage access has not been granted.
  2. Because the domain doesn’t have cookies and thus the cookie policy is blocking cookies.”

What is the Storage Access API? This API enables third-party embedded content to gain access to storage that is typically only accessible in a first-party context. With the Storage Access API, embedded items can determine if they have access and request it from the browser’s user agent.

Typically browsers will not give third-party embedded resources access to the same set of cookies and site storage. And document.hasStorageAccess() indicates whether the document has access to its first-party storage.
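
For illustration, here is a minimal sketch of how an embedded document might combine these calls – the two API methods are real, the surrounding logic is an assumption:

async function ensureStorageAccess() {
  // after this ITP update, this resolves false either because ITP is
  // blocking cookies or because the domain has no cookies yet and the
  // cookie policy is blocking them
  const hasAccess = await document.hasStorageAccess()
  if (hasAccess) return true

  // must be triggered by a user gesture, e.g. a click on the embed
  try {
    await document.requestStorageAccess()
    return true
  } catch (e) {
    return false // the user agent denied storage access
  }
}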

What it means. This, too, is aimed at attackers and will have little marketing implication. Detlef Johnson, Search Engine Land’s resident technical SEO expert, explained it this way, “The Storage Access API change is about closing a gap of a false positive API response pertaining to the third-party cookie policy of a given website. For an example of another attack vector, an attacker could previously figure out if YouTube is classified by ITP as a tracker or not by making a malicious request and testing for side effects of whether cookies were sent or not.”

Why we care

It’s important to understand how Safari and ITP affect your ability to target and measure ad campaigns.

Apple is not an ad-driven business and has staked its branding on protecting user privacy. As ITP’s restrictions have evolved, advertisers have had to continue to adjust expectations as Safari becomes a bigger black hole. Publishers and third-party adtech firms have felt the pinch. A recent report by The Information (subscription required) found CPMs for Safari users have plummeted as a result of not being able to sell ads based on cookied browsing behavior, while CPMs for typically less-valuable Google Chrome users have ticked up.



About The Author

Ginny Marvin is Third Door Media’s Editor-in-Chief, running the day to day editorial operations across all publications and overseeing paid media coverage. Ginny Marvin writes about paid digital advertising and analytics news and trends for Search Engine Land, Marketing Land and MarTech Today. With more than 15 years of marketing experience, Ginny has held both in-house and agency management positions. She can be found on Twitter as @ginnymarvin.



apple-changes-crimea-map-to-meet-russian-demands

Apple iPhone (Image: Getty Images)

Apple has complied with Russian demands to show the annexed Crimean peninsula as part of Russian territory on its apps.

Russian forces annexed Crimea from Ukraine in March 2014, drawing international condemnation.

The region, which has a Russian-speaking majority, is now shown as Russian territory on Apple Maps and its Weather app, when viewed from Russia.

But the apps do not show it as part of any country when viewed elsewhere.

The Apple Weather app now lists Crimea as part of Russia (Image: Apple Weather)

Apple Maps does not show a border between Crimea and Russia (Image: Apple Maps)

The State Duma, the Russian parliament’s lower house, said in a statement: “Crimea and Sevastopol now appear on Apple devices as Russian territory.”

Russia treats the naval port city of Sevastopol as a separate region.

The BBC tested several iPhones in Moscow and it appears the change affects devices set up to use the Russian edition of Apple’s App Store.

Apple had been in talks with Russia for several months over what the State Duma described as “inaccuracy” in the way Crimea was labelled.

The tech giant originally suggested it could show Crimea as undefined territory – part of neither Russia nor Ukraine.

But Vasily Piskaryov, chairman of the Duma security and anti-corruption committee, said Apple had complied with the Russian constitution.

He said representatives of the company were reminded that labelling Crimea as part of Ukrainian territory was a criminal offence under Russian law, according to Interfax news agency.

“There is no going back,” Mr Piskaryov said. “Today, with Apple, the situation is closed – we have received everything we wanted.”

He said Russia was always open to “dialogue and constructive co-operation with foreign companies”.

Apple has not yet commented on the decision.

Google, which also produces a popular Maps app, likewise shows Crimea as belonging to Russia when viewed from the country. That change happened in March.

When Google Maps is viewed from Ukraine, the maps show no clear border between Crimea and Ukraine but also no border between Crimea and Russia, according to BBC Monitoring.

Most of the international community, including the EU and the US, does not recognise Russia’s annexation of Crimea.

The loss of Crimea is a deep wound for Ukrainians. Shortly after the peninsula was annexed in early 2014, a separate conflict broke out in the eastern Donetsk and Luhansk regions when separatists moved against the Ukrainian state.

Ukraine and the West accuse Russia of sending its troops to the region and arming the separatists.

Moscow denies this but says that Russian volunteers are helping the rebels. More than 13,000 people have been killed in the conflict.

The BBC does not show Crimea as part of Russia on its maps, but shows a dotted line to mark disputed territory.


apple-removes-all-buyer-reviews-from-its-product-pages

By Amber Neely

Thursday, November 21, 2019, 07:28 am PT (10:28 am ET)

On November 17, Apple removed the “Ratings & Reviews” section from all product pages on the Apple website. It is currently unclear what prompted this decision, or when Apple will bring back the option to read the opinions of other customers at the time of purchase.



AppleInsider received a tip from a reader who had noted that the buyer review section was missing from Apple’s online retail store pages. The user also pointed out that the section has been removed from the U.S., U.K., and Australian Apple online stores, which suggests this is not simply a mistake but an intentional move on Apple’s part.

The reviews were pulled over the weekend, though it’s not clear why. Apple had been known for leaving up even especially negative reviews, which demonstrated both transparency and integrity to its customers.

By removing the reviews, Apple risks being seen as less credible by potential buyers.

Utilizing the Wayback Machine, AppleInsider found that the reviews were pulled at some point between the evening of November 16 and the morning of November 17. The image below shows a capture of the sections on November 16 and 17, highlighting the missing “Ratings & Reviews” section.

Capture of two different days, showing the missing reviews section


AppleInsider has contacted Apple for clarification over the feature’s removal, and to ask whether it will be making a return.

It is possible to view the changes by looking at the Wayback Machine archive page for the original Apple Pencil.

A YouTube video offered as part of the tip, published by the popular photography account Fstoppers and titled “Apple Fanboys, Where is your God now?”, shows the host reading a selection of negative reviews of the new 16-inch MacBook Pro. The video was published on November 16, coinciding with the removal of the website feature.

However, it seems unlikely that the video had anything to do with Apple’s decision to remove the reviews: its 56,000 views at the time of publication hardly seem numerous enough to draw Apple’s attention. Other videos have been more critical of the company’s products, some with far higher view counts, and Apple has evidently not spent much time involving itself with such public complaints.

the-real-reason-apple-and-google-still-hold-big-launch-events

Google has launched its latest flagship phones, the Pixel 4 and 4 XL. Although the new models feature relatively marginal improvements over their predecessors, the launch was staged with much fanfare by Google, as if it represented a major breakthrough for the company and the smartphone market – despite most of the product specs having leaked before the event. It was just the latest in a series of product launches by leading digital tech companies that sharply overstate recent innovations.

On September 10, for instance, Apple introduced three new iPhones; revamped Apple Watches; and two new subscription services, Apple TV+ and Apple Arcade. Two weeks later, Amazon presented a long list of new gadgets at its Alexa event. All of these launches have something in common: the “novelties” they introduce are merely iterations of their existing product offerings, yet they are presented as revolutionary.

Exaggeration does not come as a surprise in marketing and advertisement. Yet digital corporations pursue a precise strategy with their product launches. The main goal of these events is not so much introducing specific gadgets. It is to position these companies at the centre of the aura that the so-called digital revolution has acquired for billions of users—and customers—around the world.

A long history

Launching new technology devices through public events predates Silicon Valley. Alexander Graham Bell and Guglielmo Marconi, two of the most popular inventors and entrepreneurs in the late 19th and early 20th century, organized events to present the telephone and wireless telegraphy.

Alexander Graham Bell launching the long-distance telephone line from New York to Chicago in 1892. [Photo: United States Library of Congress]

Like today, launches of new products helped shape public opinion and make a name for companies such as AT&T, Marconi, and Edison. They were even used to fight commercial wars: at the end of the 19th century, Edison launched a campaign of public events to promote his direct current standard against the rival alternating current, and his company even electrocuted animals (like the elephant Topsy) in front of journalists to demonstrate that the competing standard was dangerous. The celebrated American inventor went one step further than his peers, presenting his new products at public events such as international exhibitions and tech fairs. The audiences at these events were mainly scientists or technical experts, but politicians, entrepreneurs, and even kings and queens also attended.

More recently, Steve Jobs followed in the footsteps of these inventor-entrepreneurs and codified a “genre”—the so-called keynote. Alone on stage and wearing a roll neck and jeans (an informal “uniform” for geeks), Jobs launched several Apple products in front of audiences of tech enthusiasts. These events helped build the myth of Steve Jobs and Apple.

What product launches are really about

Jobs’s talent lay more in the marketing and promotion of new devices than in developing technology. From the 1980s onwards, Apple’s co-founder recognized the power of a new vision surrounding digital technologies. This vision saw the personal computer and later the internet as harbingers of a new era.

It was a powerful cultural myth centered around the idea that we are experiencing a digital “revolution,” a concept traditionally associated with political change that now came to describe the impact of new technology. In this context, Jobs carefully staged his launches in order to present Apple as the embodiment of this myth.

Take, for instance, Apple’s famous 2007 iPhone launch. Jobs started his talk arguing that “every once in a while, a revolutionary product comes along that changes everything.” His examples included key moments from Apple’s corporate history: The Macintosh reinvented “the entire computer industry” in 1984, the iPod changed the “entire music industry” in 2001, and the iPhone was about to “reinvent the phone.”

This is a narrow account of technological change, to say the least. Believing that one single device brought about a digital revolution is like seeing a crowd of people in Times Square and assuming they turned up because you broadcast on WhatsApp that everyone should go there. It is, however, a convenient point of view for huge corporations such as Apple or Google. To keep their position in the digital market, these companies not only need to design sophisticated hardware and software, they also need to nurture the myth that we live in a state of incessant revolution of which they are the key engine.

In our research, we call this myth “corporational determinism” because, like other forms of determinism, it posits the idea that one single agent is responsible for all changes. The way that digital media companies like Amazon, Apple, Facebook, and Google communicate with the public is largely an attempt to propagandize this myth.

So you should not be worried if Google’s latest launch did not blow you away. The key function of product launches is not actually to launch products. It is for companies to present themselves as the smartest agents in contemporary society, the protagonists of technological change, and, ultimately, the heroes of the digital revolution.


Simone Natale is a senior lecturer in communication and media studies at Loughborough University. Gabriele Balbi is an associate professor in media studies at Università della Svizzera italiana. Paolo Bory is a lecturer in media studies at Università della Svizzera italiana. This story originally appeared on The Conversation.

apple:-we’re-not-handing-over-safari-urls-to-tencent,-just-ip-addresses

Cupertino in China Syndrome meltdown


Responding to concern that its Safari browser’s defense against malicious websites may reveal the IP addresses of some users’ devices to China-based Tencent, Apple insists that Safari doesn’t reveal a different bit of information: the webpages Safari users visit.

Apple may deny users in China VPN protection, it may deny Hong Kong democracy protesters an app used to avoid police, and it may remove references to Taiwan in localized versions of iOS 13 for the Chinese-controlled Hong Kong and Macau.

But it’s not giving up website visits – readily available to authorities by monitoring ISPs, DNS, and reportedly by backdooring Android apps [PDF] like the Communist Party of China’s “Study the Great Nation” – through browser telemetry.

Since at least February, and perhaps longer, Safari’s Safe Browsing framework has been receiving hash prefixes of known malware sites from Google’s Safe Browsing database or, for users in mainland China, from Tencent’s Safe Browsing database.

A 32-bit hash prefix like “ba7816bf” would represent the first eight characters of a 256-bit, 64-character SHA256 digest of a full URL.
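
As a rough illustration – not Apple’s actual implementation, and real safe-browsing systems canonicalize URLs before hashing – such a prefix can be derived with the Web Crypto API:

// illustrative only: derive a 32-bit hash prefix from a URL string
async function sha256Hex(text) {
  const bytes = new TextEncoder().encode(text)
  const digest = await crypto.subtle.digest('SHA-256', bytes)
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('') // 64 hex characters = 256 bits
}

async function hashPrefix(url) {
  const fullHash = await sha256Hex(url)
  return fullHash.slice(0, 8) // first 8 hex characters = 32 bits
}

hashPrefix('https://example.com/').then(console.log)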

Before it loads a requested website, Safari, like other browsers that implement a safe browsing lookup system, will hash the URL of the website to be visited and compare its hash prefix to the received hash segments of malicious sites.

In the event of a match – and there may be several, given that hash prefixes aren’t necessarily unique – Safari asks the API provider, Google or Tencent, for all the URL hashes that match the prefix.

Using that fetched list, Safari can then determine whether the intended destination matches anything on the list of malicious websites and present a warning if necessary. And it will do so unless the on-by-default “Fraudulent Website Warning” is disabled using the appropriate iOS or macOS settings menu.
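
Putting the pieces together, the lookup flow might be sketched like this – localPrefixes and fetchFullHashes are hypothetical stand-ins for the locally stored prefix list and the provider’s API, and sha256Hex is the helper from the previous snippet:

// hypothetical sketch of the match-then-fetch flow described above
async function isFlagged(url, localPrefixes, fetchFullHashes) {
  const fullHash = await sha256Hex(url)
  const prefix = fullHash.slice(0, 8)

  // no local match: nothing on the malicious list starts with this prefix
  if (!localPrefixes.has(prefix)) return false

  // prefixes aren't necessarily unique, so a match may be a false
  // positive; fetch every full hash with this prefix from the provider
  // (Google or Tencent) and compare locally
  const candidates = await fetchFullHashes(prefix)
  return candidates.includes(fullHash)
}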

Nothing to see here

And this, Apple contends, is nothing to worry about. In a statement emailed to The Register (!), an Apple spokesperson said:

“Apple protects user privacy and safeguards your data with Safari Fraudulent Website Warning, a security feature that flags websites known to be malicious in nature,” it commented.

“When the feature is enabled, Safari checks the website URL against lists of known websites and displays a warning if the URL the user is visiting is suspected of fraudulent conduct like phishing. To accomplish this task, Safari receives a list of websites known to be malicious from Google, and for devices with their region code set to mainland China, it receives a list from Tencent.”


“The actual URL of a website you visit is never shared with a safe browsing provider and the feature can be turned off.”

That said, there’s still the issue of user IP addresses, which Tencent would see for those using devices with mainland China settings. That’s a privacy concern, but it’s one among many, given that other Chinese internet companies – ISPs, app providers, cloud service providers, and the like – can be assumed to collect that information and provide it to the Chinese surveillance state on demand.

In a blog post on Sunday, Matthew Green, associate professor of computer science at the Johns Hopkins Information Security Institute, pointed out some potential privacy problems in safe browsing APIs like Google’s.

The privacy community, he said, has mostly come to terms with the privacy trade-off that comes with Google’s Safe Browsing API, figuring the security gains are worth the risk.

“But Tencent isn’t Google,” he said. “While they may be just as trustworthy, we deserve to be informed about this kind of change and to make choices about it. At very least, users should learn about these changes before Apple pushes the feature into production, and thus asks millions of their customers to trust them.” ®


the-apple-logo-is-(probably)-going-to-start-looking-a-little-different

Apple’s Apple might be getting a change up, if recent reports are to be believed. A series of leaks ahead of Apple’s upcoming event on 10 September suggest the iconic logo will be getting a makeover on the new iPhone 11. Hugely well known around the world, and frequently ranked amongst the best logos of our time, Apple’s logo is so iconic, it’s no surprise that even slight adjustments cause a stir amongst Apple fans.

So what’s changing? Well, visualise the back of an iPhone. Where does the logo sit? Halfway up, right? Wrong. Although it might appear central, every iteration of the iPhone so far has had the Apple logo placed closer to the top of the phone than the bottom. 

To jog your memory, here’s what the logo usually looks like:


How the Apple logo has appeared on the back of iPhones

(Image credit: Apple)

But leaked photos of the newest handset suggest it’ll be shifting to a vertically central position, and the ‘iPhone’ branding at the bottom will be removed.

The below image was tweeted by tech writer Ben Geskin.


What the iPhone 11 might look like

(Image credit: Ben Geskin)

The sneak peeks also reveal a new-look three-lens camera and an understated matte finish. For those of us used to the current logo placement, this all looks a bit… weird.

Theories are flying freely as to the thinking behind the move. Some argue that the stripped-back design will help place more emphasis on the camera, where Apple is investing a lot of its time and money in a bid to make the iPhone the very best camera phone around. Others have speculated that it’s to do with a new reverse wireless charging feature that will enable users to charge their AirPods on the back of their iPhone handset.

Read our sister website TechRadar’s iPhone 11 report for more on the new phone.


making-apple-music-more-social

PROJECT | APPLE MUSIC

Client | DESIGNLAB

Role | UI/UX Designer 

Duration | FOUR WEEKS

_____________________________________________________________________________________________

As part of the Designlab program, one of my projects involved designing a new feature within an existing product. The project I chose was for Apple Music and focused on increasing social interaction among users. Socialisation has always been a subject of debate amongst music streaming users and services alike: Spotify, an industry giant, recently pulled its in-built messaging feature due to low user engagement. I wanted to take an empathetic approach and understand the exact social communication needs of music streaming users that could be met with the introduction of a social feature.

Apple Music’s mission is to help people stream music, videos and radio from its immense catalogue across all devices without interruption, in a legal and accessible way. Primarily an on-demand, subscription-based music service with access to 45 million songs that can be downloaded for offline use, Apple Music still lags behind industry favourites like Spotify, Pandora, Google Play Music and Amazon Music in terms of user engagement, retention and popularity. For all these reasons, Apple wants to expand the social capabilities of its iOS application, which are currently limited to creating a profile, following other users and sharing public playlists. An interactive and engaging feature that lets current users stay socially connected around music is much needed to retain existing users.

‣  Design a new social feature that embeds within the current Apple Music platform for iOS devices.

    It has to embed well and seamlessly with the rest of the app.

‣  Design an additional, complementary feature that could enhance the main feature.

Understanding the need for social communication in music applications | The process of identifying users’ problems and empathising with vital issues

In order to completely understand the problems that could be addressed by developing a social communication system, I fleshed out a research strategy that helped me determine:

who am I developing the feature for? (demographics of Apple Music’s target audience/existing users).

what do I need to know? (users’ behaviour, usage, motivation and preferences, by understanding their current methods of social engagement for discovering and sharing music, including frequency, app preference and context of use).

what problems should I be addressing? (the needs and pain points of Apple Music users that could be solved with the introduction of a social feature).

what existing solutions have worked or failed? (competitors’ strategies and services surrounding social engagement).

I started out by conducting secondary research to identify trending features in music applications that are most preferred and used by music app users; examine the development and success of social communication features in music apps through the years; and understand the target audience’s psychographic characteristics in order to develop a social feature in line with their behaviour, interests and attitudes.

With the knowledge from this preliminary investigation, I conducted user interviews to identify the target audience’s motivations, needs and issues when using these services, and to understand their emotional and environmental triggers around music and the need for social bonding. I developed an interview and survey guide to help answer significant questions, such as:

‣  How do users discover new music?

‣  What did they like/dislike most about using the application?

‣  How much did they identify with their friends’ interest in music? What type of knowledge transfer fuelled discussions surrounding their interest in music?

‣  How do users currently recommend or share music with others? What are the biggest concerns they face when doing so?

After conducting empathy research, it was evident that the target segment were serious ‘music addicts’ who had a different outlook, habits and preferences when it came to listening to music compared with ‘casual listeners’. They had different drives, motivations and gains from listening to music, dependent on their surroundings, attitude and exposure. A unanimous concern was how rudimentary the existing options for sharing music were, and how that affected their need to socially connect with friends and loved ones over a shared interest.

The overall research findings helped establish the needs, gains and pain points of music app users, and helped identify key areas of improvement which, when addressed, could support a social integration feature in the Apple Music app that promises sustainable user retention growth over time.

Once the problems that needed to be resolved were defined, I set about establishing the premise of the project by creating a user experience strategy map to define the guiding principles, challenges, aspirations, focus areas, activities and measurements for success.

Synthesising user research findings

Upon conducting user interviews, I came across users with interesting personalities whose ideologies did not match those of the masses. In India, the segment that Apple Music tries to provide for is a very niche, upscale audience, presumably users who prefer listening to international over regional music. I created a persona and an empathy map based on the identified target user segment. The demographics, music preferences, personality, goals and frustrations of the persona reflect the interviewees’ lifestyles, as the persona should be relatable to the presumed end user.

Since most of the findings from my research overlapped with one another, I needed to identify the right challenges to be addressed with the right solutions. Creating a point-of-view (POV) problem statement helped me define the problem in an empathetic manner, so that I could come up with possible solutions that would address the needs of the users. I formed ‘How might we’ (HMW) questions for each of these needs, which acted as a guide in the ideation phase for solving design challenges. This was followed up with a brainstorming activity where I came up with multiple possible solutions for each identified need and narrowed down on features that would deliver the maximum value with the minimum effort to implement.

The ideas that stood out from the ideation session are:

‣  The introduction of ‘Mixtapes’ as a form of collaborative playlists for sharing music with friends – a method of sharing music that slowly died with the advent of technology and that is nostalgic for late Gen-X and early Gen-Y music fanatics.

‣  Sharing music with followers within the application for easy accessibility. Privacy was an utmost priority for users, hence a more relaxed version of a messaging system could be built to serve those who wish to be social. It would also help users interact more often, thereby increasing use of the application.

‣  Several pain points concerning ‘personalisation’ were identified during research. An updated onboarding flow that lets users provide inputs on their music preferences can help build a better recommendation system by understanding a user’s likes and dislikes.

The ideas generated in the brainstorming activity led to the creation of a four-quadrant feature matrix, which helped narrow down the product features that would generate maximum gain for the user within the feasibility of the project timeline.

I created an application map of the existing app along with the newly ideated features, to help lay out user flows for each identified need and to integrate the features seamlessly into the existing flow for better usability.

After I developed the user flows, I began to flesh out wireframes for each of the observed tasks. At every stage of design I had to ensure that the microcopy and chat interface of the application felt familiar to the user, enabling quicker interactions.

Once the base framework was ready for prototyping, I started to work on the user interface design. Since Apple Music is an existing product, I concentrated on designing the interface for the chat feature, along with other newly added elements that were part of the redesigned onboarding section and Mixtapes.

I created a working prototype of the screens using InVision. The task flows observed in the prototype are:

a. Sharing music with a friend using ‘Music Share’. 

b. Updated profile page of a user and a follower.

c. Forming a group and creating a Mixtape. 

d. Alternate flow for viewing, creating and sharing a Mixtape. 

e. Changing artists and music preference (onboarding).

High Fidelity Prototype 

(Created using InVision)

Stage 5 | Test & Iteration

Usability testing was conducted using a high-fidelity prototype of the Apple Music application shared via Sketch Cloud. The test was designed to assess the success of the newly developed ‘Music Share’ feature: its efficiency and usability when performing the designed task flows; to identify errors; to gain users’ feedback on their preferences and recommendations; and to assess the overall success of the prototype. The overall test findings helped identify key areas for improving user interaction and surfaced error states.

An affinity map was created based on the errors and issues observed during usability testing. The affinity map is segregated into groups of findings covering navigation, interface and usability; content and calls to action; participants’ preferences; and the recommended course of action to address the issues. The prototype was further updated to accommodate the changes rolled out in the affinity map.

As a music addict myself, working on this project was more interesting and engaging than I expected it to be. Working on a product idea that doesn’t exist in the market, with no references in research to act as a guide, I learnt different ways of approaching and solving a problem. At every stage of the design thinking process, I had to validate ideas and findings with solid reasoning before proceeding to the next steps. Going forward, the updated prototype needs to be tested with the target segment to uncover errors and possible improvements in usability and interaction.
