Three Surprising Lessons UX Can Learn from Phenomenology

Ruben Stegbauer


Finally, the holidays have arrived, and with them, the perfect moment to take out that Heidegger, Husserl, or Merleau-Ponty blockbuster that has been gathering dust on the shelves for so long.

Phenomenology, for those who haven’t come across it, is essentially a (set of) philosophical method(s) to describe phenomena and can be used to study anything from a cup of coffee to walking in the forest or surfing the web. Edmund Husserl, the German philosopher who coined the term, called it “a new way of looking at things,” and a way to “learn to see what stands before our eyes”¹.

It is surprising how much of what phenomenologists thought about can be leveraged today to improve how we study user experience.


1. Phenomenology relies on minute descriptions of ordinary phenomena, and so does UX research.

Perhaps most obviously, both disciplines study how our being depends on relationships with others, the things that surround us, and the tools we manipulate. ‘Thick description’ of a user’s experience is what we are all after, ideally gathered without preconceptions by observing interaction and usage. We ask participants to think aloud when interacting with a product, and we describe their actions from our third-party observer position to report what is often omitted, taken for granted or considered insignificant.

Phenomenologists were not the first to realize the importance and significance of the repetitive and mundane (so did the American pragmatists and others). However, they excelled at making it their prime object of study. They would encourage us to employ the power of the written word, description, to uncover the more subtle dimensions of ‘ordinary’ activities and their context. Though hard to accomplish, describing observations constitutes one of the most powerful tools at our disposal as social scientists.


2. Just like UX research, phenomenology is all about understanding usage — interacting with something (e.g., a product) — not the nature of an object itself.

I love phenomenology because, unlike most other philosophical traditions, it takes connection (being) as primordial, rather than seeing it as a consequence of some higher form of abstraction such as reflection or consciousness. This premise, in stark contrast to Descartes’ famous ‘cogito ergo sum,’ is also central to UX research. When we successfully use technology, it becomes an extension of our ‘being-in-the-world.’ Just like reading a captivating book, surfing the web intuitively melts our subjectivity into an activity of ‘manipulating something.’

Thanks to these phenomenological observations, we know that seamless interaction can alter an existing conception, say, of a brand or service. We end up liking and enjoying something because it works well, because we recognize it, and master its usage.

Action is a powerful way to inform preference, or, as Merleau-Ponty put it: “We know not through our intellect but through our experience.”


3. Phenomenology, just like UXR, derives its best insights not from measuring success but from describing failure.

Heidegger’s ‘Being and Time’ contains a famous example of someone using a hammer to drive a nail into a cupboard. As long as the hammering works as expected, i.e., smoothly, the hammer remains imperceptible to the person doing the hammering, who is entirely focused on the nail. However, if something goes wrong (like the nail bending), the hammer suddenly appears to be a (problematic) object. ‘Being and Time’ examines in great detail how the quality of our (user) experience changes depending on how successfully we manipulate our surroundings, including technology.

Examples of this abound in everyday life. As long as typing this essay goes smoothly, I don’t pay attention to the keyboard or the computer I am using. But then the keyboard suddenly jams, and it stops being ‘invisible’; for the first time, I register it as a discrete object, conspicuous and ‘present-at-hand,’ as Heidegger puts it. I now perceive it as a concrete item, rather than something that is simply part of my successful intention-pursuit. In UX research, we track these moments with task success metrics by counting clicks, errors, or measuring time.

Phenomenological theory indicates that UX research still has a long way to go to understand, measure, and successfully manipulate the different dimensions of ‘things going wrong.’ Does this bug interrupt the flow of a user’s action to the extent that it could cause significant distress? Could this small keyboard problem make a user doubt the value of the machine as a whole?

As UX researchers, we could also better appreciate the effect usability has on conditioning preference. Liking and awareness of a brand (positive brand perception) certainly inform the user’s decision. Usability, however, acts inadvertently. Unless we experience failure in our intention-pursuits, we might not even notice the power that a smooth and intuitive user experience has in driving our preferences and how we make decisions online.

Reading old phenomenologists allows us to understand how the meaning of any ‘object’ is intertwined with its context of usage, its relatedness to other objects, and the person who uses it. Given the practical and goal-oriented underpinnings of UX research, there is still room to incorporate this theoretical knowledge base into our frameworks and benefit from the intellectual heritage that generations of great phenomenological thinkers have left behind.


[1]. Husserl, Ideas I: General Introduction to Pure Phenomenology, trans. by W. R. Boyce Gibson (London: Collier-Macmillan, 1969), p. 39.

Three Ways to Reduce HTTP(S) API Costs on AWS

Here at GameAnalytics, we receive, store and process game events from 1.2 billion monthly players in nearly 90,000 games. These events all pass through a system we call the data collection API, which forwards the events to other internal systems so that we eventually end up with statistics and graphs on a dashboard, displaying user activity, game revenue, and more.

The data collection API is fairly simple in principle: games send events to us as JSON objects through HTTP POST requests, and we send a short response and take the event from there. Clients either use one of our SDKs or invoke our REST API directly.

[Diagram: game SDKs sending events to the data collection API]

We get approximately five billion requests per day, each typically containing two or three events for a total of a few kilobytes. The response is a simple HTTP 200 “OK” response with a small bit of JSON confirming that the events were received. The general traffic pattern is a high number of relatively short-lived connections: clients send just over two HTTP requests per connection on average. 

So what would you guess is the greatest cost associated with running this system on AWS, with a fleet of EC2 instances behind a load balancer?

As it turns out, the greatest part of the cost is for data transfer out, which we wouldn’t have guessed. Data transfer in from the Internet is free, while data transfer out to the Internet is charged at between 5 and 9 cents per gigabyte.

So we set out to do something about this and see if we could save some money here. We were a bit surprised that we couldn’t find anything written about what to do in this scenario – surely our use case is not entirely uncommon? – so hopefully this will be useful to someone in a similar situation.

1. Reduce HTTP headers

Before these changes, a response from this system would look like this, for a total of 333 bytes:

HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Length: 15
Content-Type: application/json
accept-encoding: gzip
Access-Control-Allow-Origin: *
X-GA-Service: collect
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Authorization, X-Requested-With, Content-Type, Content-Encoding

{"status":"ok"}

(Remember that the line breaks are CRLF, and thus count as two bytes each.)

Since we would send this five billion times per day, every byte we could shave off would save five gigabytes of outgoing data, for a saving of 25 cents per day per byte removed (at the lower, 5-cent rate).
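As a sanity check, the arithmetic is easy to reproduce. Here is a small Python sketch using the figures above (the constants are the numbers from this post; the price is the low end of the quoted range):

REQUESTS_PER_DAY = 5_000_000_000
PRICE_PER_GB = 0.05  # low end of the 5-9 cents per gigabyte range

def daily_savings(bytes_removed_per_response):
    """Dollars saved per day by removing this many bytes from each response."""
    gigabytes_per_day = REQUESTS_PER_DAY * bytes_removed_per_response / 1e9
    return gigabytes_per_day * PRICE_PER_GB

print(daily_savings(1))  # -> 0.25 dollars per day per byte removed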

Much of this could simply be removed:

  • The Access-Control-Allow-Methods and Access-Control-Allow-Headers response headers are CORS headers, but they’re only required in responses to preflight requests using the OPTIONS method, so they are superfluous in responses to POST requests.
  • The Access-Control-Allow-Origin is still required, but only when the request is a CORS request, which we can determine by checking for the Origin request header. For any request not sent by a web browser, we can just omit it.
  • The Accept-Encoding header is actually a request header; including it in the response has no meaning.
  • Finally, the X-GA-Service header was once used for debugging, but we don’t use it anymore, so it can be dropped as well.

So for the vast majority of requests, the response would look like this:

HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Length: 15
Content-Type: application/json

{"status":"ok"}

Sending 109 bytes instead of 333 means saving $56 per day, or a bit over $1,500 per month.

So it stands to reason that by cutting the response to a third of its size, data transfer costs should drop by about 66%, right? Well, costs dropped, but only by 12%. That was a bit underwhelming.
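For reference, the header-trimming rules above might look something like this minimal Python WSGI sketch. It is an illustration only, not our actual collection service code (the app name and body are placeholders):

RESPONSE_BODY = b'{"status":"ok"}'

def app(environ, start_response):
    headers = [
        ("Content-Length", str(len(RESPONSE_BODY))),
        ("Content-Type", "application/json"),
    ]
    # Only browsers send an Origin header, so only those requests
    # need Access-Control-Allow-Origin in the response.
    if "HTTP_ORIGIN" in environ:
        headers.append(("Access-Control-Allow-Origin", "*"))
    # The other CORS headers belong only in responses to preflight
    # requests, which use the OPTIONS method.
    if environ["REQUEST_METHOD"] == "OPTIONS":
        headers.append(("Access-Control-Allow-Methods", "GET, POST, OPTIONS"))
        headers.append(("Access-Control-Allow-Headers",
                        "Authorization, X-Requested-With, Content-Type, Content-Encoding"))
    start_response("200 OK", headers)
    return [RESPONSE_BODY]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("", 8000, app).serve_forever()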

2. Also reduce TLS handshakes

Obviously, before we can send those 109 bytes of HTTP response, we need to establish a TLS session by exchanging a number of messages collectively known as a “TLS handshake”. We made a request to our service while capturing network traffic with Wireshark, and discovered that the server sends out 5433 bytes during this process, the largest part of which is made up of the certificate chain, taking up 4920 bytes.

So reducing the HTTP response, while important, doesn’t have as much impact as reducing TLS handshake transfer size. But how would we do that?

One thing that reduces handshake transfer size is TLS session resumption. Basically, when a client connects to the service for the second time, it can ask the server to resume the previous TLS session instead of starting a new one, meaning that it doesn’t have to send the certificate again. By looking at access logs, we found that 11% of requests were using a reused TLS session. However, we have a very diverse set of clients that we don’t have much control over, and we also couldn’t find any settings for the AWS Application Load Balancer for session cache size or similar, so there isn’t really anything we can do to affect this.
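If you want to check resumption behaviour against your own endpoint, Python’s ssl module can do a rough test: connect once, save the session, and reconnect with it. The host name below is a placeholder, and note that with TLS 1.3 the session ticket may only arrive after the first read, so this check is approximate:

import socket
import ssl

HOST = "collect.example.com"  # placeholder host name

ctx = ssl.create_default_context()

# First connection: perform a full handshake and keep the session.
with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as first:
        session = first.session

# Second connection: offer the saved session back to the server.
with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST, session=session) as second:
        print("session resumed:", second.session_reused)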

That leaves reducing the number of handshakes required by reducing the number of connections that the clients need to establish. The default setting for AWS load balancers is to close idle connections after 60 seconds, but it seems to be beneficial to raise this to 10 minutes. This reduced data transfer costs by an additional 8%.
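Raising the timeout is a one-line change. With boto3, for example, it might look like this (the load balancer ARN is a placeholder):

import boto3

elbv2 = boto3.client("elbv2")
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
    Attributes=[
        # Default is 60 seconds; raise the idle timeout to 10 minutes.
        {"Key": "idle_timeout.timeout_seconds", "Value": "600"},
    ],
)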

3. Check your certificates

Does a certificate chain really need to take up 4920 bytes?

We initially used a certificate from AWS Certificate Manager. It’s very convenient: there is no need to copy files anywhere, the certificate renews itself automatically, and it’s free. The downside is that multiple intermediate certificates are required to establish a trust chain to a root certificate:

  • The gameanalytics.com certificate itself, 1488 bytes
  • An intermediate certificate for “Amazon Server CA 1B”, 1101 bytes
  • An intermediate certificate for “Amazon Root CA 1”, 1174 bytes
  • “Starfield Services Root Certificate Authority”, 1145 bytes (despite the name, this is an intermediate certificate, not a root certificate)

That sums up to 4908 bytes, but each certificate has a 3-byte length field, so the TLS handshake certificate message contains 4920 bytes of certificate data.

So in order to reduce the amount of data we’d need to send in each handshake, we bought a certificate from DigiCert instead. The chain is much shorter:

  • The gameanalytics.com certificate itself, 1585 bytes
  • “DigiCert SHA2 Secure Server CA”, 1176 bytes

All in all 2767 bytes.
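If you want to see how big your own certificate is on the wire, here is a quick sketch (the host name is a placeholder, and this measures only the leaf certificate, not the intermediates):

import socket
import ssl

HOST = "collect.example.com"  # placeholder host name

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        # DER-encoded leaf certificate, as sent in the TLS handshake
        der = tls.getpeercert(binary_form=True)
        print("leaf certificate:", len(der), "bytes")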

So given that the clients establish approximately two billion connections per day, we’d expect to save four terabytes of outgoing data every day. The actual savings were closer to three terabytes, but this still reduced data transfer costs for a typical day by almost $200.
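Again, a quick check of the arithmetic, using the numbers from above:

old_chain_bytes = 4920
new_chain_bytes = 2767
connections_per_day = 2_000_000_000

saved_bytes = (old_chain_bytes - new_chain_bytes) * connections_per_day
print(saved_bytes / 1e12)  # ≈ 4.3 terabytes of outgoing data per day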

Further cost reduction opportunities

We’re probably in diminishing returns territory already, but there are some things we haven’t mentioned above:

  • If the clients use HTTP/2, data transfer decreases further, as response headers are compressed. Approximately 4% of our incoming requests are made using HTTP/2, but we don’t really have any way of increasing this percentage. In AWS, Application Load Balancers (ALBs) support HTTP/2 without any configuration required, whereas “classic” load balancers don’t support it at all.
  • We’re currently using an RSA certificate with a 2048-bit public key. We could try switching to an ECC certificate with a 256-bit key instead — presumably most or all clients are compatible already.
  • There is room for decreasing certificate size further. We currently use a wildcard certificate with two subject alternative names; we could save a few bytes by using a dedicated certificate for the one domain name this service uses.
  • Some of the clients use more than one of our APIs. Currently, they are served under different domain names, but by serving them under the same domain name and using ALB listener rules to route requests, the client would only need to establish one TCP connection and TLS session instead of two, thereby reducing the number of TLS handshakes required.
  • If we’re prepared to introduce an incompatible API change, we could start returning “204 No Content” responses to clients (sketched below). A “204 No Content” response by definition has no response body, so we could drop the {"status":"ok"} body as well as the Content-Type and Content-Length headers, saving an additional 70 bytes per response, or approximately $17 per day.
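For illustration, such a trimmed response might look like this (hypothetical; we don’t currently serve it):

HTTP/1.1 204 No Content
Connection: Keep-Alive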

Also, the certificate contains lengthy URLs for CRL download locations and OCSP responders, 164 bytes in total. While these are required security features, it could be a selling point for a Certificate Authority to use URLs that are as short as possible. Starfield Technologies is setting a good example here: it uses the host names x.ss2.us and o.ss2.us for these purposes.

Here are some upcoming TLS extensions that would also reduce the size of handshakes:

  • There is an RFC draft for TLS certificate compression.
  • There is also RFC 7924, “Cached Information Extension”, which means that the server doesn’t need to send its certificate chain if the client has seen it earlier. However, this doesn’t seem to have been implemented in any TLS client library, and it is most likely not supported by AWS load balancers.

So that’s what we’ve learned so far. Remember to check your certificates as they might be bigger than required, increase the idle connection timeout as it’s cheaper to keep an established connection open, and trim your HTTP response headers.

Do you have any thoughts or insights? Let us know on Twitter.

P.S. We’re hiring!

Are you a savvy developer looking to work at the cutting edge of the tech industry? Brilliant – we’re on the lookout for ambitious, bright, and enthusiastic minds to join our growing engineering team. Visit the GameAnalytics careers page to see the benefits we offer and all available roles. And if you don’t see an open position that you’re interested in, drop us an email with your details – we’re more than happy to chat!

Three Creatives on How They Translated the Personality of Their Work into Their Website

A website can say a lot about who you are as a creative. More than any other medium, web design and coding offer flexibility, diversity and variety; the possibilities really are endless. Take, for example, a photographer whose portfolio consists of large-format images full of details you want to zoom in on. A pared-down site with little-to-no text and a simple image slider will let the work speak for itself and really pull focus. For an illustrator who undertakes several editorial commissions a week, on the other hand, a homepage populated with rows and rows of images, all explicitly labelled, will convey the volume of work they produce and the calibre of the client for whom they make it.

But it’s not just design you have to think about. The way someone navigates a site can say as much about you as the way they see your work on the page. While this plethora of choice is exciting, it can also be a little paralysing, especially if your experience with web design is limited. If the thought of starting with a blank canvas and translating the personality and passion that goes into your work through code is a little daunting, we’ve spoken to three creatives who’ve wholeheartedly achieved it. Pitch Studios, Dot Pigeon and Calvin Pausania each built their website using Wix, and the results are as varied and idiosyncratic as their portfolios.

Pitch Studios is a Melbourne-based creative studio and a name that those who came down to Nicer Tuesdays in September will be familiar with. Known for its digital-first projects and future-facing aesthetics, the studio needed something equally slick to present its work on. Pitch, therefore, opted for simplicity in both design and interaction. “With a lot of our work, because it’s often surrealist and colourful, we didn’t want a complicated site with lots of distractions. So we opted for something more simple, where the work could really speak for itself (as you can see on the project gallery page),” the studio explains.

There is little-to-no information about the studio on its homepage; instead, two icons in the top right-hand corner hint at how users can find projects or contact Pitch. This ensures the first thing anyone notices when arriving at www.pitch-studios.com is the studio’s creative output. Ironically, by stripping back any elements which might reflect the aesthetic of the studio’s work, its website allows the work to take centre stage through juxtaposition. A ticker with links to social accounts injects a sense of fun into the site but, ultimately, “it’s really about showing off the work in the best way possible,” the studio tells us. “As long as it’s functional and workable for the user, your own design flair can follow!”

When designing his website, Milan-based artist Dot Pigeon also opted for simplicity. “The idea behind my artworks is to create simple and comprehensible messages realised in a clean and visually satisfying way,” he tells us. For his website, therefore, he wanted the same approach: “a simple but bold website able to grab attention and entertain people who visit it.” Dot Pigeon started out sharing his work on Instagram, posting works inspired by pop culture, politics, icons and trends of the moment, and so the layout of his website reflects this, mimicking the square grid of the photographic platform. “It is a sort of homage and at the same time a warning to myself to always remember where I come from,” he adds.

This idea is furthered by Dot Pigeon’s choice to show nothing but imagery, a header, a link to his Instagram, and a potentially infinite scroll of images. Combined, these design decisions make for a smart website that gives Dot Pigeon’s work a home on its own domain, but which still reflects the platform it originated on.

“In my opinion,” begins Calvin Pausania, “my website is my gallery, even my house. It’s the space that shows me and my profession in a unique way.” As a result, the Dutch director-photographer-designer took great care when designing his portfolio to make sure it wholly embodied the look and feel of his work, as well as the ethos behind it. “The first thing you notice is the overall dark vibe with the moving background and the interactive elements through visuals, animation and text. It’s a way of communicating that I don’t have on social media,” he tells us. While Calvin’s work is often shown on other platforms on a white, clean background, he wanted to take this opportunity to present his imagery exactly as he wanted to.

He continues: “You go outside to meet people and create relationships, but people will get to know more about you when they enter your house. This is what a portfolio website is to me. On social media people get introduced to your work and on your website everything aligns and you can feel what the person is doing.” Calvin’s website is, therefore, a space where he can fully control a visitor’s experience of his work. Finally, he tells us: “I chose to not work with an existing template and did the spacing and sizing of all the work in a very specific way. It’s, in a way, tailor-made to the pages.”

What seems to unite Pitch Studios, Dot Pigeon and Calvin is the notion that, no matter what, you should start by looking at your work. It can definitely be tempting to start playing around with different themes and layouts, but the answer to how best to present your portfolio will most likely be staring you straight in the face. What is it that makes your work unique or distinctive? Once you’ve figured that out, designing a website which represents you and which is imbued with your personality will be easy.

Wix is a world-class website building platform with 150 million users in 190 countries, enabling you to create a professional website and manage your business online. With advanced design capabilities and specialised features, Wix gives you the freedom to design and customise a website that expresses your vision, no matter your brand or business.

CCPA Will Take Effect in About Three Months, but Will Consumers Really Care?

The California Consumer Privacy Act (CCPA) goes into effect on January 1, 2020, but it won’t be enforced until July. There has been much hand-wringing and anticipation, as well as industry attempts to dilute the legislation.

Anticipating a major change

The consensus among marketers and experts is that CCPA represents a major shift for U.S. digital privacy law that will make it tougher for companies to obtain and use data – especially third-party data brokers and programmatic networks. There’s also an expectation that consumers will exercise their new privacy rights under CCPA.

A brief summary of those rights is as follows:

  • The right to obtain disclosure of the categories and specifics of any personal information collected by the site
  • The right to an explanation of how the consumer’s data is used and whether it is being sold
  • The right to opt out of the sale of personal data to third parties
  • The right to request that a business delete any personal information
  • The right to not be discriminated against because the consumer has exercised his/her rights under CCPA

The GDPR experience: a preview of burdensome forms

CCPA is often discussed in the same breath as Europe’s GDPR. At the highest level, the key difference between the two is that GDPR requires an opt-in for data collection and usage, while CCPA is an opt-out framework. I was recently in Europe and saw the many and varied GDPR opt-in, cookie-permission forms that one encounters upon every visit to a new website.

They’re confusing, sometimes painful and, according to my informal discussions with people in Europe, often met by indifference from consumers simply seeking to get to a desired piece of content. This is not to say that people don’t care about privacy – it’s just that many publishers are putting a significant burden on consumers who don’t fully understand all the cookie categories and their functions.

[Image: cookie choice screens under GDPR]
