Lately, several large brands have gone through very big rebrands and logo redesigns. Two of the most interesting are those of Volkswagen and Warner Bros. Both of these brands have existed since the early 1900s and have made many major changes to their logo designs. Both started out with a simplistic, flat logo. This style was also common due to the technical limitations of the time; over time, their logos became more and more detailed and ‘3D’. When flat, more simplistic logos became popular again in recent years, most brands changed with the trends. However, in some industries, like the car industry, 3D logos are still widely used, mainly because that’s the way they are presented on the cars, making them more recognizable. This is why these two redesigns are such a big change in terms of logo design.
Volkswagen Logo Redesign
Volkswagen was founded in 1937 in Berlin, Germany. Volkswagen’s history, just like that of many German companies operating at that time, is very dark. The Volkswagen brand was founded by a Nazi labor organization, and its first logo looked just like the current logo in combination with spinning wings in the shape of a swastika. They used this logo until after the war, when the swastika shape was removed. This is roughly when they began using the logo that looks most like the one used today (first logo in the image above). This form of the logo was used for most of Volkswagen’s existence. When car design trends went from boxy and simplistic to more rounded and luxurious, they slowly updated their logo to be more 3D and fitting with the rest of the car. During this time it was common for most brands to use a 3D logo, because modern computers made it a lot easier to create one.
Besides Volkswagen, we also saw some other car brands dropping their shiny 3D logos for a flatter design. Some examples are Audi (owned by Volkswagen) and Toyota. It certainly seems like it’s becoming more normal to use a flat logo design in the car industry.
Warner Bros. Logo Redesign
Warner Bros. was founded in 1923 in Burbank, California. Warner Bros. started out as a movie theater business, showing films using a projector. These projectors weren’t as advanced as they are now, so the logo was mostly seen in black-and-white, just like the films they showed. When they started producing more films, the logo became more well-known. They didn’t keep their flat, stretched-out logo (first logo in the image above) for very long, showing it before only four films before they updated it to look more like the logo that most people know today (second logo in the image above). From 1937 until 2019, the logo didn’t change much. It had some minor tweaks and improvements over the years, and other logos were used for some years after the company was sold twice, but they ultimately stuck with the same logo. This changed when Warner Bros. hired the famous design firm Pentagram to redesign it, mostly because the old logo was hard to customize and scale, which was important for Warner Bros.’ digital platforms.
My attempt to modernise some learnings from CSS architecture and how to apply them in the context of modern JavaScript applications.
Posted by Mike on (about a 20 minute read).
I went to a JavaScript meet-up in Sydney the other night and asked the question “How many people know what CSS Architecture is?”. The room was full of different people with varying backgrounds and experience. I thought at minimum around a quarter might know what I was talking about. I was hoping more would. To my surprise, only a handful of people raised their hand.
I then asked, “How many people feel the difficulties they have writing CSS at scale have been largely solved by CSS-in-JS?”. They weren’t stupid, they knew this question was a set-up, nonetheless, many of them obliged me and put their hand up.
From the looks in the room, I think many more people felt this way than were willing to admit. At the very least, I think a lot of people believe CSS architecture is no longer relevant in the context of modern JavaScript applications.
That perspective was very kindly put to me by Alex Louden on Twitter:
“Styled Components etc has removed the need for class names completely for me. Now since styles are included in my presentation components (to me at least) it doesn’t feel like a style architecture issue any more?”
This is not to criticise that perspective; it’s completely valid! Alex is saying tooling helped make some of the challenges he had dealing with CSS (in particular specificity) easier. The challenge now is dealing with components in an application.
I understand this perspective, but find it interesting when people see classnames and components as completely different concerns. Classnames and components are just different ways of composing user interfaces. There are still many challenges involved in making good re-useable and scalable front-end systems, no matter how you put them together.
These challenges are not new. In fact, there are some well established solutions in CSS architecture that can be easily transferred to component-based style systems. So why are so few people talking about this? And why are many more people completely unaware of the topic?
I believe there is a lot that has contributed to this and it’s worth reflecting on how we got here…
I believe the initial response to CSS-in-JS, from many leaders in the CSS community hasn’t helped with understanding and knowledge sharing. I’ve often heard comments like “People (i.e. younger JavaScript developers) just need to learn CSS better.” People who have knowledge about CSS architecture need to do a better job of articulating this experience in a way that is accessible and relevant to new developers. If I’m honest about it, the CSS community has failed at this.
But it’s not that simple. There are some non-human factors and context we need to consider as well.
Before the rise of JavaScript components, the strong and obvious naming conventions of BEM gave developers a system to follow that helped avoid specificity clashes in the “terrifying global scope” of CSS. This alone was reason enough for many people to use BEM. You could get good enough results without necessarily understanding the nuanced reasons behind the conventions.
When JavaScript tooling provided a better solution than humans following naming conventions, it opened up UI development to a wider spectrum of developers who previously had less interest in, or reason to focus on style architecture.
Businesses jumped on the situation. They reasoned it would be cheaper to employ developers who could “do everything” and got what they considered to be adequate results by under-investing in UI specialists. Some developers who’d spent half a career perfecting skills in this area felt threatened. Perhaps some were defensive.
At the same time, developers working in spaces of growth and opportunity could sometimes be dismissive of skills that were not flavour of the month. There was pride, and hype, and reluctance to admit that new tooling and approaches were not always producing better, more re-useable, front-end architecture.
I’ve been consulting in this space for the last 5 years and I’ve seen many different systems for building UIs with component-based architecture. The reality is, whilst some aspects of building large scale JavaScript applications are easier, the promise of better, more re-usable UI components hasn’t been delivered. Front-end architecture is more varied in approach and the results are less re-useable than they were before the rise of JavaScript tooling.
Some people might challenge this, but I’ve seen enough examples to consider it an objective truth. What I have seen is:
almost every team has a different approach,
90% of components get zero re-use,
visual changes are often difficult, slow and expensive,
it’s harder than ever to read and understand styles.
Somewhere in the turbulence, we lost the more nuanced reasons behind the naming conventions.
The aim of this was to give context, not blame (it’s a little bit of everybody’s fault). So let’s draw a line, and look at how to apply some lessons from CSS architecture to modern JavaScript applications.
First of all, we need to consider what makes sensible abstractions in UI development. OOCSS, SMACSS and BEM all have a common language when they talk about the different parts of a UI component. I can summarise these as:
Layout,
Design,
UI states, and
Application logic
If re-use or long-term maintainability is important, keeping these concerns separate is beneficial. Yet this is not typically how teams approach the design of a component library.
Components can do many things, they might fetch data, they might render HTML, they might call functions to execute business logic and manage application state. Sometimes a single component does all these things. There is usually little distinction around what the responsibility of a component should be. People draw boxes around the visual boundaries of a design and then mix this with application logic. That’s how most components are built. We can do better than that.
BEM gave semantic meaning to classnames and one of the biggest unseen values in this was we could immediately transfer our intentions to other developers across teams and even projects. If you know BEM you can look at a classname, e.g. button--state-success and immediately recognise this as a modifier for a button class.
This kind of semantic meaning is sorely needed for components.
With that in mind, let’s look at different parts of a UI component, identified in CSS architecture methodologies and redefine them in terms of component architecture.
Find a way to distinguish layout components in your application. It might be a comment in the file, a naming convention or the organisation of components in folders… it doesn’t matter. What matters is we need a way to convey our intentions quickly to other developers.
When we have common understanding of what a layout component is we can enforce expectations in code-reviews or linting.
Layout components:
own responsibility for layout and nothing more
should never apply any presentational styles
should not contain any application logic
own responsibility for the layout of child items.
That last point might be a little confusing at first. Why children and not themselves?
In modern CSS there are two parts that contribute to layout:
A parent item, which sets the display mode
A child item, which participates in the parent item’s display mode.
In other words, we have a grid-container and grid-items, or a flex-container and flex-items. There is always a parent/child relationship.
To get the intended layout, we need the parent item and the child item to work together. Updating one of them independently of the other will result in a broken layout. We have a word for this: it’s called a dependency.
Despite this dependency, we continue to make these separate components with no direct link. We simply hope that people put them together in the right way and don’t change them. We call that an unmanaged dependency.
The solution is to co-locate the layout concerns with the parent item. There are a number of ways this can happen…
Use the cascade to your advantage and apply a * selector to target all immediate children of a layout component.
For example:
.layout { display: flex; }
.layout > * { flex-basis: 50%; }
This works, even with CSS-in-JS and you might be interested to know the * selector doesn’t increase specificity, so it’s easy to override with classic CSS, should you need.
This might seem simple, but it works in most cases.
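The original example isn’t reproduced here, but a minimal sketch of the idea might look like this (component and prop names are illustrative). The point is that the layout component renders both the parent (ul) and the child (li) wrappers, so the layout dependency is managed in one place:

import React from 'react';

// A hypothetical List layout component: it owns the display mode on the
// parent and the layout participation of every child item.
const List = ({ children }) => (
  <ul style={{ display: 'flex', listStyle: 'none', margin: 0, padding: 0 }}>
    {React.Children.map(children, (child) => (
      <li style={{ flexBasis: '50%' }}>{child}</li>
    ))}
  </ul>
);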
In the example above I’m ensuring the children of a ul will always be an li. At the same time I’m applying the layout styles to both the parent and child items. Both are managed in one place, inside the layout component.
The biggest downside of rendering mark-up that wraps child items is you need to pass a list of items that get rendered into specific slots. That’s ok for a simple list, but not ideal for more complicated layouts. As a final escape hatch for complicated components, you can export styles from the parent to be used by a child item.
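The original code isn’t reproduced here, but a sketch of the idea, assuming styled-components (the Page and Heading names come from the discussion below; the specific styles are illustrative):

// Page.js — Page owns the layout and exports *only* the layout styles
// its children need to participate in it.
import styled, { css } from 'styled-components';

export const pageChildLayout = css`
  grid-column: 2;
`;

export const Page = styled.main`
  display: grid;
  grid-template-columns: 1fr minmax(auto, 60ch) 1fr;
`;

// Heading.js — consumes the exported layout styles, but keeps
// responsibility for its own presentation.
import styled from 'styled-components';
import { pageChildLayout } from './Page';

export const Heading = styled.h1`
  ${pageChildLayout}
  font-size: 2.5rem;
`;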
This allows us to co-locate layout concerns for a particular component.
In the example above Heading still needs to be a child of Page but the dependency between these components is no longer hidden.
By passing just the layout styles between components (not presentation) we’re being explicit about what the dependency is. The Heading component is still responsible for any presentational styles applied to the h1.
Once again we need a way to convey intentions and set expectations about what aspects of UI development presentational components are responsible for. It should be immediately apparent that a component is a presentational component.
Presentational components:
deal with visual design,
contain no application logic,
set no display or positioning properties,
are (mostly) just decorated HTML elements,
should be size agnostic.
Once again, the last point is the least intuitive. Size agnostic means presentational components should fill the space available. Trust in the layout components to set the constraints.
In practical terms this means most UI components have no display, width, height or margin properties.
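As a rough sketch (the property values are illustrative, not prescriptive):

.button {
  /* presentation only: typography, colour, borders */
  font: inherit;
  color: #fff;
  background: rebeccapurple;
  border: 0;
  border-radius: 0.25em;
  padding: 0.5em 1em;
  /* note: no display, width, height or margin — the parent layout owns those */
}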
This is sometimes hard to achieve. Working on presentational components is going to reveal problems or oversights in the layout (or even missing layout components). It feels easier to quickly add a margin to ‘fix’ the presentational component, but by keeping the responsibility for layout with the parent item, presentational components can be re-used in any part of the application.
By adding CSS to presentational components to ‘fix’ layout issues, we are adding hidden dependencies between components. For long term maintainability, it’s far better to fix these problems at the layout level.
I know this is not always simple, so I’m going to give you an escape hatch. However, consider this the !important of CSS-in-JS. Use it when you absolutely must, and use it right. For certain types of components (usually inline-block elements where content is dependent on dynamic data and there is no obvious parent layout component) it sometimes makes sense to add a utility class, or a prop to set a single CSS property. If possible, these should still remain separate from the presentational component and be imported from a utility file. I suggest naming this liabilities.js or debt.js.
Always try to avoid hard coding width and height in presentational components.
Both layout and presentational components have different types of UI state. UI state is different from application state, but the two are often conflated in modern web applications.
From a UI development perspective, the state of a component refers to different display or layout variations that might occur when a user interacts with it.
As a UI developer, knowing the number of variations and what styles are applied in each case is not only critical, it’s the job description. So why has this become so hard to know in modern JavaScript applications?
When props are passed to a function that resolves styles, the number of variations can be hard or impossible to verify. This is an example from a real-world application I worked on:
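(The original snippet isn’t reproduced here; this illustrative version, with made-up prop names, captures the shape of the problem — application state is interrogated property by property:)

const buttonStyles = (props) => `
  background: ${props.loading ? '#ccc' : props.error ? 'red' : props.theme.primary};
  opacity: ${props.disabled ? 0.5 : 1};
  border-width: ${props.isSelected && !props.compact ? '2px' : '1px'};
  padding: ${props.size === 'large' ? '16px 32px' : '8px 16px'};
`;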
These styles are so difficult to reason about because you have to consider how the application state (props) affects each individual CSS property.
Not only does this make it hard to read, it makes it difficult to test. If the props passed to a style function don’t represent a finite set of UI states, how do we know the current set of resolved values is something intended in the design?
Once again CSS architecture taught us some things about how to manage UI state. SMACSS in particular talked about state and identified three different types of UI state:
Modifier states
Behavioural states
Pseudo states
I’m paraphrasing, because SMACSS was not thinking about components, so let’s revise some of these ideas for modern front-end architecture.
Behavioural states are dependent on application logic. They communicate something about the state of a component to the user. Examples might include success, failure and loading indicators, or the current item in a navigation menu.
Pseudo states are more temporary. They usually map directly to persistent states in the browser, rather than application logic. Typical examples include hover, focus and active but this might also include disabled or selected.
The solution to verifying UI states is to resolve styles down to a set of finite states that can be easily understood.
To ensure this, I start by mapping out UI states into a table:
Modifier state | Behavioural state | Pseudo state
-------------- | ----------------- | ------------
Large          | Loading           | Hover
Medium         | Success           | Focus
Small          | Error             | Disabled
Next consider how these states combine. Typically you only have one modifier and one behavioural state active at any one time.
You can visualise this as a tree:
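(The original diagram isn’t reproduced here, but the shape is roughly this, with one modifier branch shown:)

large
 ├─ loading  → hover / focus / disabled
 ├─ success  → hover / focus / disabled
 └─ error    → hover / focus / disabled
(medium and small repeat the same subtree)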
If you find it’s possible to have two behavioural states active at the same time, split them into different categories.
Modifier state | Network state | Todo state | Pseudo state
-------------- | ------------- | ---------- | ------------
Large          | Loading       | To Do      | Hover
Medium         | Success       | Doing      | Focus
Small          | Error         | Done       | Disabled
Warning: If you find you do need this, consider carefully, as it’s often a sign that you have two components pretending to be one.
Because state can be additive, (i.e. behavioural states can change depending on the modifier state), to work out the total number of variations, we multiply the possibilities. With 3 types of state and 3 possibilities for each, there are (3 x 3 x 3), 27 possible variations.
Obviously not every combination matters. The disabled state might look the same for every type of modifier and maybe the pseudo states are the same for all modifiers. We can eliminate duplicate states:
CSS forced us to flatten the state tree and have a single selector for each possible combination of state. Although sometimes tedious, this made us acutely aware of how many variations there were.
JavaScript doesn’t force us to flatten the state tree in any way. The concept of different types of UI state is often lost, and at worst the value of individual CSS properties depends on the resolution of business logic and data within the “style sheet”.
It doesn’t matter whether you’re composing styles with classnames, template strings or objects in JavaScript. It remains important to have a single representation for each of the possible UI states.
My current favoured approach is to resolve application logic outside the style function, and then pass keys for the modifier and behavioural state. Where a behavioural state changes depending on the modifier, I use CSS custom properties to set values that are later applied by the behavioural styles.
This allows me to have additive states with a fairly flat and readable representation of each of the styles applied to each variation of a UI component.
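A sketch of that approach, assuming styled-components (the state names match the tables above; the styles themselves are illustrative):

import styled, { css } from 'styled-components';

// Modifier styles set custom properties…
const modifiers = {
  large:  css`--padding: 1rem 2rem;    --spinner-size: 2rem;`,
  medium: css`--padding: 0.5rem 1rem;  --spinner-size: 1.5rem;`,
  small:  css`--padding: 0.25rem 0.5rem; --spinner-size: 1rem;`,
};

// …which the behavioural styles consume.
const behaviours = {
  loading: css`
    opacity: 0.6;
    &::after { width: var(--spinner-size); }
  `,
  success: css`border-color: green;`,
  error:   css`border-color: red;`,
};

const Button = styled.button`
  padding: var(--padding);
  ${(p) => modifiers[p.modifier]}
  ${(p) => behaviours[p.behaviour]}
`;

// Application logic is resolved in the container, not the style function:
// <Button modifier="large" behaviour={hasError ? 'error' : 'success'} />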
The final type of component we need to distinguish is a container component. Many people might already have an understanding of what this means, but from a UI perspective, I’m not referring to any particular design pattern.
From a UI perspective a container component is simply where the application logic is resolved down to a set of modifiers and behavioural keys that are passed to presentational and layout components.
As the responsibilities of front-end developers have become broader, some might consider the conventions outlined here not worth following. I’ve seen teams spend weeks planning the right combination of framework, build tools, workflows and patterns only to give zero consideration to the way they architect UI components. It’s often considered the last step in the process and not worthy of the same level of consideration.
It’s important! I’ve seen well-planned projects fail or go well over budget because the UI architecture was poorly planned and became un-maintainable as the project grew.
This disappoints me because the problems are hard and my colleagues and friends who helped establish best practices in CSS are serious engineers, with broad skills, who applied knowledge across disciplines.
Many of the ideas in CSS architecture predate CSS itself and have strong foundations in computer science and software architecture. I know developers who can understand complex architectural problems but fail to see the similarities, or worse yet, choose not to apply this knowledge to declarative front-end code.
I know you can do it. To help, I’ve got some questions to ask yourself when planning UI components.
Questions to ask when planning UI components
How many UI states does this component have?
Can I easily validate the current UI state?
How ergonomic are my styles? (i.e. how quickly can someone else read them)
Am I creating sensible UI abstractions? Or am I drawing boxes around the visual design?
Am I separating layout from visual design where possible?
Am I separating application logic from UI components?
Am I communicating my intentions to other developers and giving semantic meaning to components?
Keep practising this and you will build better, longer-lasting UIs.
When you develop a website, there are many things you need to consider. Security is also one of them. With cyberattacks on websites rising every year, it has become an additional responsibility of web developers to build a solid foundation for the security of their websites. SSL is also a part of that foundation. So, if you’re a web developer, there are certain things you must know about it in order to ensure the security of your websites. Here we’re going to cover those things for you, so you can develop websites that are SSL-protected.
SSL and TLS: What’s the difference?
First, an important fact: what we’re talking about is NOT SSL! That’s right, it’s the Transport Layer Security (TLS) protocol that secures our webpages today. TLS is the successor to SSL 3.0, which was abandoned after the POODLE attack (disclosed in 2014) showed it to be vulnerable. However, SSL continued to be the mainstream term for the Web’s secure data transfer protocol, which is why we still talk of SSL instead of TLS. But as a developer you should keep in mind that although we call it SSL, we’re referring to TLS.
How SSL works: The TLS Handshake
Now, the next important thing for a developer to know about SSL is how it works. Basically, it works by establishing a secure connection known as a “session” between the web browser of a user (known as the “client”) and the host of a website (known as the “server”). This secure connection is established over the HTTPS protocol, and here’s how it’s done:
1. Client Hello: Whenever a user opens an HTTPS URL in their web browser, the browser sends a “hello” message to the web server hosting that URL. This message includes crucial details about the browser’s capabilities, like the highest TLS version it supports (i.e. TLS 1.0, TLS 1.2, TLS 1.3, etc.), the cipher suites it supports, and a random byte string called the client random.
2. Server Hello: Upon receiving this message, the server sends its own “hello” message to the client. This message includes the cipher suite chosen by the server from the list provided by the client, a session ID, its SSL certificate, the certificate’s public key, and another random string of bytes called the server random.
3. Authentication: The client then verifies the SSL certificate sent by the server against the Certificate Authority (CA) that issued it, to ensure that the communication is happening with the actual owner of the domain name.
4. Sending of the premaster secret: Upon successful verification of the SSL certificate, the client sends another random string of bytes, called the premaster secret, to the server. The premaster secret is encrypted with the public key of the SSL certificate that was sent along with the Server Hello message.
5. Decryption of the premaster secret: The server decrypts the message received from the client with its private key and extracts the premaster secret.
6. Creation of session keys: Now the server and the client both create session keys from the client random, the server random and the premaster secret. The computation results in the same output (i.e. the same keys) on both sides.
7. Client ready: The client sends a “finished” message to the server once the session keys have been created.
8. Server ready: The server sends a “finished” message to the client once the session keys have been created.
9. Encryption achieved: That’s it. A secure connection has now been established, and encrypted webpages can be transferred between client and server.
So that’s how a TLS handshake creates a secure connection between client and server over the HTTPS protocol. Once this connection has been established, both the client and the server encrypt every file and webpage before sending it, and the receiving party (i.e. server or client) decrypts it using the session key.
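You can watch the result of this handshake yourself. Here’s a minimal sketch using Node.js’s built-in tls module (example.com stands in for any HTTPS host):

const tls = require('tls');

// Open a TLS connection and report what the handshake negotiated.
const socket = tls.connect(443, 'example.com', { servername: 'example.com' }, () => {
  console.log('Protocol:', socket.getProtocol());   // e.g. 'TLSv1.3'
  console.log('Cipher:', socket.getCipher().name);  // the negotiated cipher suite
  console.log('Certificate verified:', socket.authorized);
  socket.end();
});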
The right time to install an SSL certificate
It’s also important for every developer to know the right time to install an SSL certificate. The SSL certificate should be installed early in the life of a website, usually before you start uploading your webpages to the server. There’s a solid reason behind this. If you upload all your webpages and files to the server but install the SSL certificate later, some of your files (i.e. images, JavaScript files, etc.) may load through the insecure HTTP protocol despite your webpages loading on HTTPS.
Do you know why?
Because your webpages will have HTTP links referring to those files. And when a webpage requests a file over HTTP, it will by default load over HTTP instead of HTTPS. This is called a mixed content error. In such a situation, your website may not show the green padlock of SSL, because some elements will be loading over the insecure HTTP protocol. When that happens, browsers may mark your website as ‘Not secure’ despite your website having an SSL certificate installed properly. To fix this you’ll have to do the additional work of changing all HTTP references to HTTPS references in the code of every single webpage that you developed.
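For example (the URL is purely illustrative):

<!-- Before: requested over HTTP, triggers a mixed content warning -->
<img src="http://example.com/images/logo.png">

<!-- After: requested over HTTPS -->
<img src="https://example.com/images/logo.png">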
Now, if you want to avoid this problem, the solution is to install your SSL certificate right at the beginning of site development. If you install it before uploading your webpages and files to the web server, the references to all files in your webpages will have HTTPS links only, thus eliminating the possibility of a ‘no padlock’ situation on your webpages.
How to Force SSL: Creating Redirects
Finally, every web developer should also know how to force a website to load over HTTPS even if someone tries to access it through the insecure HTTP protocol. That is achieved by creating redirects, and you should learn how to create the necessary redirects in your web server to force the loading of your website over the SSL-protected HTTPS protocol. Generally, the process looks something like this (for example, in cPanel):
1. Log in to the cPanel (or any other control panel, depending on your host) of your website;
2. Navigate to the Redirects section, either by clicking an icon or searching for it;
3. Create a permanent (301) redirect from your HTTP homepage (i.e. http://xyz.com/) to your HTTPS homepage (i.e. https://xyz.com/);
4. Save the changes.
That’s it. The process may vary slightly between control panels, but it’s roughly the same. You may have to do a little more research to find a guide for your particular control panel.
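If you run your own server instead of using a control panel, you can create the same redirect in code. Here’s a minimal sketch in Node.js (the setup assumes HTTPS is already served elsewhere, and the port is illustrative):

const http = require('http');

// Redirect every plain-HTTP request to the HTTPS version of the same URL.
http.createServer((req, res) => {
  res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
  res.end();
}).listen(80);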
The pricing of SSL certificates varies
If you’re going to purchase the certificate yourself, it’s also crucial to know that the pricing of SSL certificates varies between CAs and vendors. For example, a RapidSSL Wildcard certificate may cost more or less than a DigiCert Wildcard SSL certificate. Therefore, you should also compare pricing before choosing an SSL certificate for the site which you’re developing.
Conclusion
So that’s what every web developer should know about SSL certificates. This information can come in handy when you develop your next site, as it can save you from making mistakes while deploying an SSL certificate. And when you make fewer mistakes, you’ll save both time and money. So, keep all these things in the back of your mind, and keep developing!
If you’re old enough to remember the early days of web design, you know the aesthetics of our digital experiences have changed considerably. What was once a ‘wild west’ of icons, buttons, images, and navs is now a space of mostly organized, functional, and familiar websites. By and large this has benefited businesses and users alike; a more efficient, elegant, more predictable digital landscape makes it easier for companies and people to accomplish their goals. But with that evolution of digital design has come a new quandary: why does everything look the same and what does that say about the changing role of designers?
In this episode of Wireframe, host and Adobe Principal Designer Khoi Vinh, along with podcast producer Amy Standen, took a trip back in time to see just how we got here. What have we gained as modern digital design principles have evolved, and what have we lost in the process? At first, it might seem strange to explore the visual topic of ‘aesthetic monoculture’ in the audio format, but as Khoi reveals, it provided a rare opportunity to ask some of the big questions in design.
“Podcasting may seem like exactly the exact wrong medium to discuss this, because you can’t show anything. But at the same time, this gave us a good opportunity to not just focus on the aesthetics of the things that we’re talking about and ask the bigger questions…It’s not about saying that I want things to look a certain way, but rather that we should think about the way things look,” he said. Listen to the episode to hear more about the emergence of aesthetic monoculture, and read on for a behind-the-scenes look at why this is an especially important conversation right now.
We’ve learned to speak the same design language, so it’s no wonder we’re using the same words
Cliff Kuang, UX design lead at Google. Image source: Medium/Cliff Kuang.
In the early days of the web as we know it, web designers were very much ‘making it up as they went along.’ They turned to the offline world for inspiration (clicking a loudspeaker icon to play a sound file, for example). In evolving our design practices, designers created a digital visual language of their own.
“[Early web designers] were bringing an unconscious culture of aesthetics, reference points, influences, and all this kind of stuff that they were carrying with them, right? Which I think is radically different today,” Cliff Kuang, UX design lead at Google and the author of the upcoming book User Friendly, told Khoi and Amy.
Jessica Helfand, designer, writer, Yale University professor, and managing editor of Design Observer.
“We’re all more visually sophisticated. Our appetite for beautiful things or better things or more well-functioning things is better,” added Jessica Helfand, a designer and writer who teaches at Yale University, stressing that the public, and their perceptions, have changed alongside designers themselves.
“It’s a fantastic time to be a designer, but then you ask the question that brings us here today: why does everything look the same? I think it’s related.”
For producer Amy, this struck a particular nerve — that this same kind of design trend is visible offline too. “I think the aesthetic monoculture of the web mirrors what we see on the streets. Quirky, site-specific, very local-feeling businesses are being replaced by the stylized, minimalist coffee shops and chain stores that look the same no matter what city you’re in. Personally, I’d prefer a world with more weirdness and less slick consumerism. Ditto the internet,” she said.
While Khoi stresses this aesthetic monoculture is not necessarily a bad thing (after all, it has made many corners of the web more functional and accessible), he says there is a risk to designers themselves in becoming purveyors of this ‘sameness’ online.
“I think the risk is that design becomes this discipline that only supplies the parts, so to speak. And doesn’t become a discipline that truly takes into account the wellbeing of users, truly thinks about the full journey of these customers, and really takes responsibility for what it does. So in short, I think the risk is that we sell ourselves short in terms of how we can influence outcomes.”
But in order to move forward, it’s not just about replicating the past, he said. Rather, it’s about designers pushing the boundaries of some of what is currently deemed ‘best practices’ in design.
In defense of aesthetic monoculture: Is it just about approachability and simplicity?
Emily Heyward is co-founder of design and branding agency Red Antler.
Khoi and Amy also spoke with Emily Heyward, one of the founders of design and branding agency Red Antler, which has created many web presences for clients that clearly harness the visual and UX conventions that make up the current aesthetic monoculture. For her, these trends are rooted in her client demands, and a perception that these design principles will make for better experiences for everyone.
“A lot of the design choices that accompanied this wave are about approachability, simplicity, just making it easy for people. We’re just going to tell you what you need to know, and we’re not going to have a bunch of bursts and swirls and all sorts of ways to try to catch your attention with a shiny object. You know, we’re just here to serve you,” she said.
A screenshot of Red Antler’s website. The company has done branding and created web experiences for many popular brands. Most now aim to be as approachable and simple as possible, to make it easy for potential customers to access information on products.
Amy, on the other hand, sees this as a corporatization of web design. “There are consequences, both social and aesthetic, of a web that’s increasingly dominated by big corporations trying to sell us stuff. Aesthetically, it means less individualism, more of whatever the current visual trend may be,” she said.
But Emily and the team at Red Antler also see the value in pushing back against this ‘sameness’ that many of their clients crave — searching for new approaches to design instead and gently pushing their clients in these directions. Red Antler is deeply invested in recruiting young designers, fresh out of school, because they haven’t yet “learned the rules” of the modern design aesthetic, and this makes them much better at pushing boundaries.
App UIs across the digital world are favoring similar design aesthetics, optimized to be as easy-to-use as possible. Image credit: A Fast Company article exploring the ‘sameness’ popping up in apps (and why that’s not necessarily a bad thing).
“We’ve taken our eye off the ball”
Reflecting on the episode, Khoi said he has gained a better understanding of how we got to this point in design. Meanwhile, over the past decades, designers have gotten better and better at their craft; they’ve mastered new technologies, platforms, and have become better than ever at doing business. Designers have gained a lot, but what is being lost?
“We’ve gotten better at all of these complementary aspects of the core act of designing,” he said.
“But we’ve taken our eye off the ball in terms of making sure that the good design that we’re doing is actually good, and it’s uniquely contributing to the overall world that we’re creating…It’s just so much sameness out there, and so little questioning of what the impacts of that might be. The silver lining though is that it’s a tremendous opportunity for new, unique voices to break through and reset the table when it comes to what the internet can look like.”
Behind Wireframe is a blog series taking you behind the scenes of Wireframe, Adobe’s design podcast, hosted by Principal Designer Khoi Vinh. Click here to listen or subscribe to Wireframe, and follow along every week as we uncover more.
I think about and enjoy very boring CSS stuff—probably much more than I should do, to be honest. One thing that I’ve probably spent too much time thinking about over the years, is CSS resets.
In this modern era of web development, we don’t really need a heavy-handed reset, or even a reset at all, because CSS browser compatibility issues are much less likely than they were in the old IE 6 days. That era was when resets such as normalize.css came about and saved us all heaps of hell. Those days are gone now and we can trust our browsers to behave more, so I think resets like that are probably mostly redundant.
I still like to reset stuff, so I’ve been slowly and continually tinkering with a reset myself over the years in an obsessive code golf manner. I’ll explain what’s in there and why, but before I do that, here it is in its entirety:
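(The original listing didn’t survive extraction here, so this is the reset reassembled from the rules discussed in the rest of this post:)

*,
*::before,
*::after {
  box-sizing: border-box;
}

/* Remove default margin */
body,
h1, h2, h3, h4,
p,
ul[class],
ol[class],
li,
figure,
figcaption,
blockquote,
dl,
dd {
  margin: 0;
}

/* Remove list styles (and padding) on lists with a class attribute */
ul[class],
ol[class] {
  padding: 0;
  list-style: none;
}

/* Core body defaults */
body {
  min-height: 100vh;
  scroll-behavior: smooth;
  text-rendering: optimizeSpeed;
  line-height: 1.5;
}

/* Links without a class get a more readable underline */
a:not([class]) {
  text-decoration-skip-ink: auto;
}

/* Fluid images */
img {
  max-width: 100%;
  display: block;
}

/* Natural flow and rhythm in articles */
article > * + * {
  margin-top: 1em;
}

/* Inherit fonts for form controls */
input,
button,
textarea,
select {
  font: inherit;
}

/* Remove animations and transitions for people who prefer reduced motion */
@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}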
We start with box-sizing. I just flat out reset all elements and pseudo-elements to use box-sizing: border-box.
*, *::before, *::after{ box-sizing: border-box; }
Some people think that pseudo-elements should inherit box sizing, but I think that’s silly. If you want to use a different box-sizing value, set it explicitly—at least that’s what I do, anyway. I wrote about box-sizing more over on CSS From Scratch.
After box-sizing, I do a blanket reset of margin and padding, where it gets set by the browser styles. This is all pretty self-explanatory, so I won’t get into it too much.
I will mention the situation with lists, though. I select only lists that do have a class attribute, because if a plain ol’ ul or ol gets used, I want it to look like a list. A lot of resets, including my previous ones, aggressively remove that.
Next up: body styles. I keep this really simple. It’s useful for the body to fill the viewport, even when empty, so I do that by setting the min-height to 100vh. I also like smooth anchor scrolling, so I set scroll-behavior: smooth, too.
I only set two text styles. I set the line-height to be 1.5 because the default 1.2 just isn’t big enough to have accessible, readable text. I also set text-rendering to optimizeSpeed. Using optimizeLegibility makes your text look nicer, but can have serious performance issues such as 30 second loading delays, so I try to avoid that now. I do sometimes add it to sections of microcopy though.
ul[class], ol[class]{ list-style: none; }
Just like the margin and padding rules, I only reset list-style where a list element has a class attribute.
a:not([class]){ text-decoration-skip-ink: auto; }
For links without a class attribute, I set text-decoration-skip-ink: auto so that the underline renders in a much more readable fashion. This could be set on links globally, but it’s caused one or two conflicts in the past for me, so I keep it like this.
img{ max-width: 100%; display: block; }
Good ol’ fluid image styles come next. I set images to be a block element because frankly, life is too short for that weird gap you get at the bottom, and realistically, images—especially with work I do—tend to behave like blocks.
article > * + *{ margin-top: 1em; }
I really like this CSS trick and I’ve finally been brave enough to add it to the reset. The lobotomised owl selector targets every direct descendant of an article that follows another element, and adds 1em of top margin to it. This gives a solid rhythm to flow content. I actually use a .flow utility in every project now. You can read more about it on 24 Ways. In fact, I reckon it’s my most used CSS these days.
input, button, textarea, select{ font: inherit; }
Another thing I’ve finally been brave enough to set as default is font: inherit on input elements, which as a shorthand, does exactly what it says on the tin. No more tiny (mono, in some cases) text!
Last, and by no means least, is a single @media query that resets animations, transitions and scroll behaviour if the user prefers reduced motion. I like this in the reset, with its specificity-trumping !important rules, because most likely now, if a user doesn’t want motion, they won’t get it, regardless of the CSS that follows this reset.
ℹ️ Update: Thanks to @atomiks, this has been updated so it doesn’t break JavaScript events watching for animationend and transitionend.
Serving the right code to the right browsers can be tricky. Here are some options.
Serving modern code to modern browsers can be great for performance. Your JavaScript bundles can contain more compact or optimized modern syntax, while still supporting older browsers.
The tooling ecosystem has consolidated on using the module/nomodule pattern for declaratively loading modern vs legacy code, which provides browsers with both sources and lets them decide which to use:
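(The snippet itself didn't survive extraction; this is the canonical pattern, with illustrative file names:)

<script type="module" src="/modern.js"></script>
<script nomodule src="/legacy.js"></script>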
What can we do? We want to deliver two compile targets depending on the browser, but a couple of older browsers don't quite support the nice clean syntax for doing so.
First, there’s the Safari fix. Safari 10.1 supports JS Modules but not the nomodule attribute on scripts, which causes it to execute both the modern and legacy code (yikes!). Thankfully, Sam found a way to use a non-standard beforeload event, supported in Safari 10 & 11, to polyfill nomodule.
Option 1: Load Dynamically
We can circumvent these issues by implementing a tiny script loader, similar to how LoadCSS works. Instead of relying on browsers to implement both ES Modules and the nomodule attribute, we can attempt to execute a Module script as a “litmus test”, then use the result of that test to choose whether to load modern or legacy code.
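A sketch of what that litmus test might look like (the inline module script sets a flag; the file names are illustrative):

<script type="module">self.modern = true</script>
<script>
  addEventListener('load', function () {
    // If the module script above ran, this browser supports ES Modules.
    var s = document.createElement('script');
    if (self.modern) {
      s.src = '/modern.js';
      s.type = 'module';
    } else {
      s.src = '/legacy.js';
    }
    document.head.appendChild(s);
  });
</script>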
However, this solution requires waiting until our first “litmus test” module script has run before it can inject the correct script. This is because module scripts are always asynchronous. There is a better way!
A standalone variant of this can be implemented by checking if the browser supports nomodule. This would mean browsers like Safari 10.1 are treated as legacy even though they support Modules, but that might be a good thing. Here's the code for that:
var s = document.createElement('script')
if ('noModule' in s) { // notice the casing
  s.type = 'module'
  s.src = '/modern.js'
} else {
  s.src = '/legacy.js'
}
document.head.appendChild(s)
This can be quickly rolled into a function that loads modern or legacy code, and also ensures both are loaded asynchronously:
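A sketch of that helper (assuming the same /modern.js and /legacy.js bundles as above):

function loadScript(src, isModule) {
  var s = document.createElement('script');
  if (isModule) s.type = 'module';
  s.src = src;
  // dynamically injected scripts don't block parsing
  document.head.appendChild(s);
}

var supportsModules = 'noModule' in document.createElement('script');
if (supportsModules) {
  loadScript('/modern.js', true);
} else {
  loadScript('/legacy.js', false);
}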
What's the trade-off? Preloading.
The trouble with this solution is that, because it's completely dynamic, the browser won't be able to discover our JavaScript resources until it runs the bootstrapping code we wrote to inject the modern vs legacy scripts. Normally, browsers scan HTML as it is being streamed to look for resources they can preload. There's a solution, though it's not perfect: we can use <link rel=modulepreload> to preload the modern version of a bundle in modern browsers. Unfortunately, only Chrome supports modulepreload so far.
Whether this technique works for you can come down to the size of the HTML document you're embedding those scripts into. If your HTML payload is as small as a splash screen or just enough to bootstrap a client-side application, giving up the preload scanner is less likely to impact performance. If you are server-rendering a lot of meaningful HTML for the browser to stream, the preload scanner is your friend and this might not be the best approach for you.
Here's what this solution might look like in prod:
It should also be pointed out that the set of browsers supporting JS Modules is quite similar to those that support <link rel=preload>. For some websites, it might make sense to use <link rel=preload> rather than relying on modulepreload. This may have performance drawbacks, since classic script preloading doesn't spread parsing work out over time as well as modulepreload does.
Option 2: User Agent Sniffing
I don't have a concise code sample for this since User Agent detection is nontrivial, but there's a great Smashing Magazine article about it.
Essentially, this technique starts with the same <script src=bundle.js> in the HTML for all browsers. When bundle.js is requested, the server parses the requesting browser's User Agent string and chooses whether to return modern or legacy JavaScript, depending on whether that browser is recognized as modern or not.
While this approach is versatile, it comes with some severe implications:
since server smarts are required, this doesn't work for static deployment (static site generators, Netlify, etc)
caching for those JavaScript URLs now varies based on User Agent, which is highly volatile
UA detection is difficult and can be prone to false classification
the User Agent string is easily spoofable and new UA's arrive daily
One way to address these limitations is to combine the module/nomodule pattern with User Agent differentiation in order to avoid sending multiple bundle versions in the first place. This approach still reduces cacheability of the page, but allows for effective preloading, since the server generating our HTML knows whether to use modulepreload or preload.
function renderPage(request, response) {
  let html = `<!doctype html><html><head>...`;
  const agent = request.headers['user-agent'];  // header names are lowercase in Node
  const isModern = userAgent.isModern(agent);   // userAgent: whatever UA-detection helper you use
  if (isModern) {
    html += `
      <link rel="modulepreload" href="/modern.mjs">
      <script type="module" src="/modern.mjs"></script>
    `;
  } else {
    html += `
      <link rel="preload" as="script" href="/legacy.js">
      <script src="/legacy.js"></script>
    `;
  }
  response.end(html);
}
For websites already generating HTML on the server in response to each request, this can be an effective solution for modern script loading.
Option 3: Penalize older browsers
The ill-effects of the module/nomodule pattern are seen in old versions of Chrome, Firefox and Safari - browser versions with very limited usage, since users are automatically updated to the latest version. This doesn't hold true for Edge 16-18, but there is hope: new versions of Edge will use a Chromium-based renderer that doesn't suffer from this issue.
It might be perfectly reasonable for some applications to accept this as a trade-off: you get to deliver modern code to 90% of browsers, at the expense of some extra bandwidth on older browsers. Notably, none of the User Agents suffering from this over-fetching issue have significant mobile market share - so those bytes are less likely to be coming from an expensive mobile plan or through a device with a slow processor.
If you're building a site where your users are primarily on mobile or recent browsers, the simplest form of the module/nomodule pattern will work for the vast majority of your users. Just be sure to include the Safari 10.1 fix if you have usage from slightly older iOS devices.
Option 4: Use conditional bundles
One clever approach here is to use nomodule to conditionally load bundles containing code that isn't needed in modern browsers, such as polyfills. With this approach, the worst-case is that the polyfills are loaded or possibly even executed (in Safari 10.1), but the effect is limited to "over-polyfilling". Given that the current prevailing approach is to load and execute polyfills in all browsers, this can be a net improvement.
Angular CLI can be configured to use this approach for polyfills, as demonstrated by Minko Gechev. After reading about this approach, I realized we could switch the automatic polyfill injection in preact-cli to use it - this PR shows how easy it can be to adopt the technique.
For those using Webpack, there's a handy plugin for html-webpack-plugin that makes it easy to add nomodule to polyfill bundles.
What should you do?
The answer depends on your use-case. If you're building a client-side application and your app's HTML payload is little more than a <script> tag, you might find Option 1 compelling. If you're building a server-rendered website and can afford the caching impact, Option 2 could be for you. If you're using universal rendering, the performance benefits offered by preload scanning might be very important, and you should look to Option 3 or Option 4. Choose what fits your architecture.
Personally, I tend to make the decision to optimize for faster parse times on mobile rather than the download cost on some desktop browsers. Mobile users experience parsing and data costs as actual expenses - battery drain and data fees - whereas desktop users don't tend to have these constraints. Plus, it's optimizing for the 90% - for the stuff I work on, most users are on modern and/or mobile browsers.
Further Reading
Interested in diving deeper into this space? Here's some places to start digging:
Modern frontend web development is a polarizing experience: many love it, others despise it.
I am a huge fan of modern web development, though I would describe it as “magical” — and magic has its upsides and downsides:
When you understand how to use the magical tools of web development (babel! bundlers! watchers! etc!), your development workflow is speedy, powerful, and delightful
If you don’t understand the magical tools of web development, it’s terribly confusing
… and trying to learn how the magic works is all-too-often miserable, unless you have someone to help guide you through the tangle of jargon, hot takes, and outdated information on the web
Recently I’ve been needing to explain “modern web development workflows” to folks who only have a cursory understanding of vanilla web development workflows, and……
It is a LOT to explain!
Even a hasty explanation ends up being pretty long.
So in the effort of writing more of my explanations down, here is the beginning of a long yet hasty explanation of the evolution of web development:
Part 1: How we got from static websites to babel
– – – – – – – – – – –
Simplest website: Static website
Let’s start from “classic” frontend web development, which I’m going to assume you-the-reader already understand.
In classic frontend web development, we are directly modifying HTML/CSS/JavaScript files. To preview changes, we open the HTML file locally in the browser, and as we develop, we refresh the page for updates.
Development workflow
The development workflow looks like this:
Edit your HTML/CSS/JavaScript files in a text editor like Atom.
Save the file in your text editor.
Open and reload file in the browser.
Edit JavaScript, save file, refresh the page to see updates
Deployment
Then when you want to publish your website to the internet, you simply upload the HTML/CSS/JavaScript files to the internet somewhere.
With a service like Netlify, you can just drag-and-drop the folder containing your files to publish the page to the web.
That’s so simple! Why did we make things complicated?!
So if you understand how the “classic” web development workflow works, you might ask: Gee, that’s really simple and convenient. Why did we ever deviate from that?! Why are modern web development flows so complicated?
The short answer: …Ok maybe I have two short answers.
Two short answers:
You don’t have to make it more complicated. The “classic” web development workflow is great! And is perfectly sufficient for plenty of needs! You should never add superfluous tooling, or tools whose purpose you don’t understand.
But for certain projects you’ll benefit from a more sophisticated workflow. Every tool that you add to your workflow is meant to solve a problem.
In order to understand the tooling for modern web development, we have to understand the problems of web development.
In this long-but-hasty journey, we’ll address each problem individually, starting with an old web dev problem that has existed for decades.
An old problem: Limitations in JavaScript
Up until fairly recently, JavaScript and the Web APIs had a lot of limitations (for a myriad of reasons that will not be covered in this long ‘n’ hasty post).
To name a few of these limitations:
No modules
No constants
No Promises / async
No Array.includes() (!!)
Clunky or missing syntax for a lot of common primitives (no for-of, template literals, arrow function syntax, template unpacking…)
(Web APIs) Countless DOM operations were needlessly complex (like adding/removing class names, hiding elements, selecting elements, removing elements…)
Browsers are only capable of executing JavaScript, so when there are limitations in the JavaScript language, it’s not like you can just use a different language; you have to work with what you have.
Aside: Difference between JavaScript and Web APIs?
You may have noticed I said “JavaScript and the Web APIs” above. These are two different things!
When you write JavaScript for a web page, any API call that interacts with the web page itself is a Web API (which happens to be written in JavaScript), and not part of JavaScript the language.
Some examples:
JavaScript (the language itself): Promise, const, Array.includes(), async/await, etc.
Web APIs: document and every method on document; window and every method on window; Event, XMLHttpRequest, fetch, etc.
So for instance, if you’re writing a Node.js server, you’ll be writing in JavaScript, so that means you can use e.g. Promises but you can’t use document.querySelector (nor would it make sense to do that).
An old solution: jQuery & friends
Back in 2006, jQuery was released: it’s a library that helped work around a lot of the shortcomings of JavaScript and the Web APIs.
jQuery includes APIs that help dramatically with common web tasks, like DOM manipulations, async processing, cross-browser discrepancies and resource-fetching.
So basically: all these things were technically possible using old JavaScript and old Web APIs, but they were super annoying, tedious, and often tricky to code. So instead of having every web developer write the same tedious code to e.g. download and process a JSON file, you could instead download the jQuery library and use jQuery’s nice APIs.
A new solution: Let’s improve JavaScript itself
A lot of time has passed since 2006, though!
Since 2006, JavaScript and the Web APIs have improved tremendously. (With a lot of help from jQuery and others in paving the way!)
JavaScript is an ever-evolving language. Similar to how software is updated, the JavaScript language itself is updated to different versions.
You may have heard the term “ES6.” ES6 stands for “ECMAScript 6,” and refers to the 6th iteration of ECMAScript. ECMAScript is just another word for JavaScript — the only difference is a colloquial one, in that people usually use “ECMAScript” to refer to the specification itself, and “JavaScript” to refer to the language people code in.
(Btw, that’s another confusing aside and pet peeve of mine: JavaScript is not an implementation/flavor/dialect of ECMAScript; that’s like calling “HTML” an implementation/flavor/dialect of the “HTML,” or, if you’re generous, the “HTML spec.” Either way, it’s wrong! Wikipedia, you’re wrong! JavaScript and ECMAScript are one and the same.)
Anyway! ES6 (released in 2015) is notable because it adds a lot of really nice language features to JavaScript, like const, modules, and Promises. (And ES8 introduced maybe my favorite language feature ever, async.)
In parallel, the Web APIs have also improved tremendously since 2006, like with the addition of document.querySelector, fetch, and little things like classList and hidden.
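To get a feel for the difference, here’s a rough side-by-side (the selectors and URLs are purely illustrative):

// jQuery (2006-era)
$('.menu').addClass('open');
$.getJSON('/data.json', function (data) { console.log(data); });

// Plain modern JavaScript + Web APIs
document.querySelector('.menu').classList.add('open');
fetch('/data.json')
  .then(function (r) { return r.json(); })
  .then(function (data) { console.log(data); });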
So instead of using jQuery or other similar libraries, in 2019 we can, for the most part, just use JavaScript and the Web APIs directly.
… sort of!
A new-old problem: Cross-browser support
When there’s an update to the JavaScript language, browsers will also need to be updated to support the new language features. (Same is true for the Web APIs, but we’ll stick to just JavaScript for simplicity for now.)
However, there’s a delay between:
When the language feature is defined in JavaScript
When the browsers have all implemented and shipped support for that feature
When users have all upgraded to the latest version of their browser, usually via auto-updating/restarting your browser (and this sometimes doesn’t happen!).
The dilemma: Do we write using older JavaScript or the latest JavaScript? Both have pros and cons. (This particular code example lifted from here!)
This causes a dilemma for JavaScript developers: We want to use modern JavaScript language features, because these improvements often make it much easier to code certain things. But we also want our websites to work for all users, regardless of when’s the last time they’ve restarted their browser to get the latest updates.
This specific dilemma is commonly solved by Babel.
Babel is a JavaScript compiler that transforms JavaScript code into… different JavaScript code! Specifically, it transforms JavaScript code written using the latest version of JavaScript into equivalent code written using an older version of JavaScript that’s supported in far more browsers.
With Babel, we can enjoy the benefits of writing in the latest JavaScript without having to worry about browser compatibility.
Web developers incorporate Babel into their workflow so that they can write the code using the latest JavaScript features without having to worry about browser compatibility.
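For example, given modern arrow-function syntax, Babel produces roughly this (a simple illustration, not the exact output of any particular preset):

// What you write (modern JavaScript):
const add = (a, b) => a + b;

// Roughly what Babel outputs (browser-compatible JavaScript):
var add = function add(a, b) {
  return a + b;
};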
Aside: Babel doesn’t include Web APIs
For example, if you use fetch in your JavaScript, Babel will not provide fallback support (this is called “polyfill”-ing), because fetch is a Web API and not part of JavaScript proper. (This decision is being reconsidered.)
So you’ll need a separate solution for polyfilling Web APIs! But we’ll get to that in a later post.
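(As a rough sketch of what polyfilling a Web API means, with a hypothetical /fetch-polyfill.js file standing in for a real polyfill bundle:)

// Feature-detect: only load the polyfill where fetch is missing.
if (!window.fetch) {
  var s = document.createElement('script');
  s.src = '/fetch-polyfill.js'; // hypothetical polyfill bundle
  document.head.appendChild(s);
}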
* * *
Back to workflows: A static website with Babel
OK, so we’ve now motivated why one might want to use Babel. What does a web development workflow with Babel look like?
The following is the simplest Babel workflow, which people don’t usually use in practice. (That’s because a bundler like Parcel or webpack is more convenient, but we’ll get there next time!)
Setup
Install* Babel (*You can follow the CLI instructions here, though they assume you understand how npm works. And they recommend that you install Babel locally as an npm dev dependency for each project, vs globally on your machine.)
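In concrete terms (a sketch assuming Babel 7’s scoped packages, which are current as of this writing), the install looks something like:

npm install --save-dev @babel/core @babel/cli @babel/preset-env

And a minimal config file tells Babel to compile for older browsers:

// babel.config.js — use preset-env so Babel targets older browsers
module.exports = {
  presets: ['@babel/preset-env'],
};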
Example: The src directory is where your vanilla JavaScript lives
Deployment
When you’re ready to publish your website to the internet, you do NOT want to upload your vanilla JavaScript files to the web, because you’ve been using JavaScript features that are not supported by all browsers.
Instead, you want to:
1. Compile your JavaScript using babel, to get browser-compatible code:
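(A sketch of the command, assuming your source lives in a src folder and you want the compiled output in a separate dist folder:)

npx babel src --out-dir dist

If src/script.js is your source file, this produces a compiled dist/script.js.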
This will create the new, compiled JavaScript file in a separate folder:
Example: Babel will generate a second “script.js”, and this one has cross-browser-compatible code
2. Upload the compiled JavaScript to the internet, along with your HTML and CSS.
Your website will* look and behave the same as in development mode, but users will be served the compiled, babel-fied JavaScript.
(*hopefully! Sometimes there are differences in Debug vs Release builds, but those are bugs!)
Pause to point out: Development vs Release code!
Notice that we now have a separation between “development” code and “release” code:
Development code: The code that you write while developing your web app.
Release code: The code that you want to run when users visit your web app.
We purposely want to keep these things separate, because:
The development code is good for developers, but bad for users
The release code is good for users, but bad for developers
In frontend web development, not everyone uses or needs to use Babel.
However! The general pattern of:
Writing development code that does not get shown to users
and is instead compiled into different release code, which is what gets shown to users
…is not just common, but is often expected in modern frontend web development.
(Note that having a separate “Debug” vs “Release” build is a general pattern in software engineering, not something new with web development. But it’s especially pertinent to frontend web development, both because of how commonplace it is, and because of how big the difference can be between Debug/Release for frontend web development in particular.)
A short list of frontend technologies that expect this separation between debug and release:
npm modules
Any CSS preprocessor
React/Vue/Angular/any web framework
This is going to be a recurring pattern, so make note of it now!
Next time: npm and bundling
In the next part of our journey, we’ll explore npm modules (what are they and why) and bundling (what is it and why), and how that complicates the workflow.
Marketing has historically focused on what the company needs—like better sales numbers or add-on sales. While sales are important and necessary, your customers don’t care. And if they sense at all that your messaging isn’t genuine and authentic, they’ll find someone else to buy from.
Many companies are turning to agile marketing because decisions on campaigns and messaging are influenced by customer feedback instead of executives.
By focusing our marketing strategy on real reactions from customers, we create campaigns that better resonate with them.
Turning your marketing ‘outside in’
When marketers are really far removed from their customers, their messages tend to feel corporate and stuffy, and they don’t resonate with potential buyers.
A few months ago, I was working with a marketing team whose company was struggling financially. When I asked about their marketing goals, they couldn’t get past ‘Sell X number of widgets.’
The sell-at-all-costs mindset was so ingrained in their minds that it was extremely difficult to turn it around and look at marketing through a customer-centric lens.
So instead of being able to really get creative and listen to what their customers wanted to hear, they were stuck in the old way of working, which I call “inside out” marketing versus “outside in” marketing.
To begin changing this mindset, we began changing the conversation from “We need to sell to you” to “What customer problem are we trying to solve?”
Writing customer stories to change perspective
After working with this team on changing their goals from “inside out” to “outside in,” we began breaking down their work in a similar way with customer stories, often referred to as user stories in traditional agile practices. I like to call them “customer stories” in agile marketing, since we are looking at potential customers rather than users of a system or tool.
Their work was very task-based in the beginning, and it looked something like this:
Write press release
Send out email
Write blog post
To become more customer centric, we began looking at the work by asking three questions:
Who is the customer?
What are you delivering?
Why does the customer care?
So we went from a task of “send out email” to a customer-centric viewpoint like this:
“As a restaurant owner, I want to understand how a point-of-sale system can save time so that my customers don’t have to wait in line as long.”
This simple technique can help marketers revolve their message around their customer, rather than themselves.
Creating feedback loops
Agile marketing is all about quickly getting customer feedback to inform your marketing decisions. This isn’t saying that you don’t have a plan – in fact, there’s a ton of planning that happens with agile marketing. The difference is having a flexible plan to make adjustments as you learn from your customers.
In traditional marketing, we get into detailed planning assuming we are right. With agile marketing, the goal is to learn from our customers and to make adjustments so we can get better and better at meeting their needs.
With the old way of marketing, we don’t have feedback loops anywhere. It isn’t until the end of the campaign that we learn if we hit the mark, and by then it’s too late!
A content marketing team wanted to produce a downloadable online training course as a lead magnet for their training company.
They used to plan out every module of the course and launch it with a big bang. But they were missing a key ingredient – customer feedback!
When they went to an agile approach, they began testing each module with their customers. In the first week, they put together a YouTube video on the topic and gauged interest in it. They then used the comments to surface other topics they hadn’t thought about.
They used this cycle of producing bite-sized chunks, learning from customers’ feedback and building on that to create a really awesome lead magnet.
In a world where customers have a lot of choices on where to buy products and services, it’s more important than ever that our marketing messages are authentic and add value. With agile marketing, you can become more customer-centric in your approach by changing your focus outward and getting frequent customer feedback.