Why I Quit Using Google

So I was recently asked why I prefer to use free and open source software over more conventional and popular proprietary software and services.

A few years ago I was an avid Google user. I was deeply embedded in the Google ecosystem and used their products everywhere. I used Gmail for email, Google Calendar and Contacts for PIM, YouTube for entertainment, Google Newsstand for news, Android for mobile, and Chrome as my web browser.

I would upload all of my family photos to Google Photos and all of my personal documents to Google Drive (which were all in Google Docs format). I used Google Domains to register my domain names for websites where I would keep track of my users using Google Analytics and monetize them using Google AdSense.

I used Google Hangouts (one of Google’s previous messaging plays) to communicate with friends and family and Google Wallet (with debit card) to buy things online and in-store.

My home is covered with Google Homes (one in my office, one in my bedroom, and one in the main living area), which I would use to play music via my Google Play Music subscription and podcasts from Google Podcasts.

I have easily invested thousands of dollars into my Google account to buy movies, TV shows, apps, and Google hardware devices. This was truly the Google life.

Then one day, I received an email from Google that changed everything.

“Your account has been suspended”

Just the thing you want to wake up to in the morning. An email from Google saying that your account has been suspended due to a perceived Terms of Use violation. No prior warning. No appeals process. No number to call. Trying to sign in to your Google account yields an error and all of your connected devices are signed out. All of your Google data, your photos, emails, contacts, calendars, purchased movies and TV shows. All gone.

I nearly had a heart attack, until I saw that the Google account that had been suspended was in fact not my main personal Google account, but a throwaway Gmail account I had created years prior for a project. I hadn’t touched the account since creating it and had forgotten it existed. Apparently my personal Gmail was listed as the recovery address for the throwaway account, and that’s why I received the termination email.

Although I was able to breathe a sigh of relief this time, the email was a wake-up call. I was forced to critically reevaluate my dependence on a single company for all the tech products and services in my life.

I found myself to be a frog in a heating pot of water and I made the decision that I was going to jump out.

Leaving Google

Today there are plenty of lists on the internet providing alternatives to Google services, although the “DeGoogle” movement was still in its infancy when I was making the move.

The first Google service I decided to drop was Gmail, the heart of my online identity. I migrated to Fastmail with my own domain in case I needed to move again (hint: glad I did, as I now self-host my email). Fastmail also provided calendar and contacts solutions, so that took care of leaving Google Calendar and Contacts.

Here are some other alternatives that I moved to:

Migrating away from Google was not a fast or easy process. It took years to get where I am now, and there are still a couple of Google services that I depend on: YouTube and Google Home.

Eventually, my Google Homes will grow old and become unsupported, at which point hopefully Mycroft devices will have matured and become available for purchase. YouTube may never be replaced (although I do hope for projects like PeerTube to succeed), but I find the compromise of using only one or two Google services to be acceptable.

At this point losing my Google account due to a mistake in their machine learning would largely be inconsequential and my focus has shifted to leaving Amazon which I use for most of my shopping and cloud services.

The reason I moved to mostly FOSS applications is that it seems to be the only software ecosystem where everything works seamlessly together and I don’t have to cede control to any single company. Alternatively, I could have simply split my service usage evenly across Google, Microsoft, Amazon, and Apple, but I don’t feel they would have worked as nicely together.

Overall I’m very happy with the open source ecosystem. I use Ubuntu with KDE on all of my computers and Android (no GApps) on my mobile phone. I’ve ordered the PinePhone “Brave Heart” and hope to one day be able to use it or one of its successors as a daily driver with Ubuntu Touch or Plasma Mobile.

I don’t want to give the impression that I exclusively use open source software either; I still use a number of proprietary apps, including Sublime Text, Typora, and Cloudron.

How to Start a Blog Using Hugo

Hugo is a great tool to start a blog.

I’ve been using Hugo on this blog for more than two years, and I have a few reasons for loving it.

It is simple, boring, flexible, fast.

The main reason is that it is simple. There’s not much you have to learn to get started.

You write content in Markdown, a format that lets me use my favorite editor (Bear) to write posts.

Hugo is boring. Don’t get me wrong, this is a very positive thing. As a developer I am tempted to tweak things here and there all the time. There’s no fancy technology underlying Hugo. It’s built using Go, one of the languages I love the most, but that does not mean I want to dive into the internals of Hugo and change how it works.

And it does not surface any cool or next-generation stuff like many JavaScript frameworks tend to do.

Hence it is boring, which gives me a lot of time to do what is really useful when working on a blog: writing content. I focus on the content, not on the content container.

That said, Hugo is pretty darn flexible. I started my own blog with an open source theme, then changed it completely over time. Sometimes I want to do things in my website that are out of the scope of a simple blog, and Hugo allows me to create those things.

Finally, another reason I love Hugo is that it is fast. Why? First, it has Go at the core, which is known to be a very fast language. And in the Go ecosystem, there’s no concept of 100-megabyte dependencies. Things are made to be as fast as possible. Plus, Hugo does not need to do some of the fancy stuff that’s required when using fancy technology. This is a by-product of being boring.

Anyway, enough with words.

Hugo is amazing, especially if you are a developer and you’re willing to write in Markdown. Non-tech people might just refuse to use Markdown, and it’s perfectly understandable.

Also, you have to be prepared for a Git-centric workflow to make things really click.

What you do is write a post in Markdown, commit your changes to a Git repository (most commonly on GitHub), and some glue technology deploys the changes to the server that hosts the site.
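
In day-to-day use, the loop looks something like this (the file name and commit message are illustrative, and your default branch may be named differently):

git add content/post/my-new-post.md
git commit -m "Add new post"
git push origin master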

Hosting a Hugo website

A Hugo blog is completely static. This means you don’t need to run your own server, or use a special service for it.

Netlify, Now and GitHub Pages are 3 great places where you can host a Hugo blog, for free.

The only cost is the one you have to sustain for the domain name. I can’t stress enough the importance of having your own domain name. No .github.io or .netlify.com or .now.sh sites, please.

My own Hugo sites are hosted on Netlify.

Choose a domain

Put your blog under your own domain. Pick one. Use your own name. And use .com or .blog. Don’t try to be clever by using a localized domain, and don’t use .io. A .com just gives a better impression, and it’s reusable for all your future projects, not just for hosting your blog. That’s what I picked.

Oh and if you have an old domain lying around, just use that. Why? The older your domain is, the better.

Note on subdomains: every subdomain, to Google, is a different website. So if your domain is flaviocopes.com, and you create your blog in blog.flaviocopes.com, then that’s a completely new website to Google, and it will have its own ranking separate from the main domain.

My suggestion is to avoid subdomains completely.

Install Hugo

To install Hugo on macOS, from your terminal run

brew install hugo

The brew command does not exist on your Mac? Check the Homebrew guide.

For Windows and Linux, check the official installation guide.

Create a Hugo site

Once Hugo is installed, you can create a Hugo site by running

hugo new site myblog

I suggest running it inside a www folder in your home directory, because the command will create a new myblog folder wherever you run it.
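
For example (the www folder is just a convention; any directory works):

mkdir -p ~/www
cd ~/www
hugo new site myblog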

Pick a theme

Now, before you can start, you need to pick a theme. I wish Hugo included a default theme to make things straightforward, but it does not.

There’s a lot of choice on https://themes.gohugo.io. My personal recommendation is to start with https://themes.gohugo.io/ghostwriter/ and tweak it later.

I also recommend avoiding the git clone workflow they suggest on that page: you’ll surely be tweaking the theme in the future, and I find it best to have a single repository for both content and theme. It simplifies deployment.

So, go to https://github.com/jbub/ghostwriter/archive/master.zip to download the current version of the theme.

Then unpackage it in the themes/ghostwriter folder in your newly created Hugo website:
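
From inside the myblog folder, assuming master.zip was downloaded there, something like this should do it (GitHub’s zip typically extracts to a ghostwriter-master folder):

unzip master.zip
mkdir -p themes
mv ghostwriter-master themes/ghostwriter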

Notice there is an exampleSite folder inside themes/ghostwriter. Open it, then open its content subfolder. In there, you can see the page, post and project subfolders.

Copy the page and post folders into the content folder of the site:
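
From the site root, that amounts to something like:

cp -r themes/ghostwriter/exampleSite/content/page content/
cp -r themes/ghostwriter/exampleSite/content/post content/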

The configuration

The theme also provides a sample config.toml file in themes/ghostwriter/exampleSite/config.toml. This is the Hugo configuration file, which tells Hugo some details of the configuration without you having to hardcode information in the theme.

I recommend not copying that file, because it contains too many settings; use this instead:

baseurl = "/"
title = "My blog"
theme = "ghostwriter"

[Params]
    mainSections = ["post"]
    intro = true
    headline = "My headline"
    description = "My description"
    github = "https://github.com/XXX"
    twitter = "https://twitter.com/XXX"
    email = "XXX@example.com"
    opengraph = true
    shareTwitter = true
    dateFormat = "Mon, Jan 2, 2006"

[Permalinks]
    post = "/:filename/"

You can freely customize the information in this file, later.

Now from the command line, run:

hugo serve

Open http://localhost:1313 in your browser, and you should be able to see the site live!

This is the site home page.

There is a list of posts that is taken from the content/post folder of your website:

Click the first, called “Creating a New Theme”:

You can open the file content/post/creating-a-new-theme.md to change anything in the post.

If you save, the website will automatically update with the new content.

This is pretty awesome, right?

You can create a new post by adding a new .md file to content/post, named however you like. You can use incremental numbers if you prefer, or a date.
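
Alternatively, you can let Hugo scaffold the file for you; assuming the default archetype, this creates the post with its front matter already filled in:

hugo new post/my-first-post.md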

If something is not looking like you want, you can open the themes/ghostwriter/layouts folder and tweak it.

The “post” template is defined in themes/ghostwriter/layouts/post/single.html:

Hugo uses Go templates. The syntax can be pretty unfamiliar, but the Hugo website does a very good job of explaining them in its Go templates introduction.

However, try not to dive into customizing your template now; even small tweaks like colors can wait.

Focus on the content instead.

Remove the existing files, and write 2-3 posts to start with.

It’s too easy to get trapped in making things look exactly the way you want, but the important thing is the content.

And the cleaner your site is, the better for your readers.

Let me now write a little about deployment.

Deploy the Hugo site to Netlify

I want to show how to deploy a Hugo site on two of the services I enjoy the most: Netlify and Now.

First, I’m going to create a GitHub repository to host the site.

I opened GitHub Desktop, an app I use every day as part of my workflow; it’s the simplest way to use Git.

From the File menu, I pressed the “New Repository” option:

The same screen can be generated by simply dragging the myblog folder into the app.

I gave the repository the name myblog and picked the correct path for it.

The process automatically makes the first commit:

Now we can click the “Publish repository” button to push the repo to GitHub:

You can keep the repo private, of course.

Once the repo is in GitHub:

we can move to Netlify.

From my Netlify dashboard I pressed the “New site from Git” button:

Pressed GitHub, authorized Netlify to access my private repositories, then I picked the repo I just created:

Netlify automatically identified it as a Hugo repo, and entered the build command automatically:

Clicking “Deploy site” starts the deploy process:

On a real site, I would set up a custom domain. Netlify has the option to purchase a domain through them, and it’s a very (VERY) straightforward process. I highly recommend it. The site can be live in just a few minutes from purchasing the domain.

A random .netlify.com subdomain is attached to the site, in this case pedantic-engelbart-500c9a.netlify.com, and HTTPS is automatically enabled.

We can therefore immediately see the site live:

Now if you try to edit something in your local version, you just push the changes to GitHub, and Netlify will automatically update the site. You can see it building the site in the “Overview” panel of the site:

To know more about Netlify I recommend you check out my Netlify tutorial.

Deploy the Hugo site to Zeit Now

Another awesome platform you can use for your Hugo blog is Zeit Now.

Once you sign up, from the dashboard you press the New Project button.

The first time you deploy from GitHub, you have to install the GitHub app by clicking “Install Now For GitHub”:

This brings you to the GitHub page for the app, where you can authorize it for all your repos, or just for some:

Once you get back, click the “New Project From GitHub” button:

Select the project and click “Import”:

In the meantime, go into the main folder of myblog and add a package.json file with this content:

{
  "scripts": {
    "build": "hugo"
  }
}

This tells Now how to deploy the site.

When you get back to the dashboard, the new deploy should start soon, and you will see the site working live:

Note that in Now you have 3 URLs you can use to access the site:

  • myblog.flaviocopes.now.sh
  • myblog-alpha-swart.now.sh
  • myblog-git-master.flaviocopes.now.sh

You can choose the one you prefer.

Plus, each deployment has its own URL, too. In this case I had myblog-h8xks5jhn.now.sh but it changes with every deployment.

And of course you can add your domain, too. Zeit has a great service to purchase your domain directly from them, available at https://zeit.co/domains.

And if you prefer working with the command line, the now command lets you purchase domains from there, as well.

I highly recommend you check my Zeit Now tutorial.


I am writing a book on how to create a successful blog as a developer. It will be released in late February.

Sign up to my newsletter to get 2-3 free chapters in your inbox in the next few weeks!

Using Contact-Level Intent Monitoring to Close 3 Critical Performance Gaps in ABM

At the same time that the concept of customer experience seems to be attracting more and more thought leadership press, the stark reality is that in many B2B companies, three critical revenue-maximization efforts remain demonstrably sub-optimized. For ABM practitioners, closing these performance gaps promises to deliver additional improvements to the bottom line.

Part of the problem is that we’re mostly using systems conceived a long time ago. Part of it is due to processes that have been slow to evolve. And part is caused by organizational constructs that have not yet been able to take advantage of clear performance improvement opportunities.

As we head into 2020, I’m super excited by the innovative companies bringing together newly available data insights with the willingness to do things differently. They are leveraging insights from their own first-party data as well as data from companies like ours (full disclosure) and other third-party providers that bring buyer signals and new levels of insight into opportunities and activity, like Clari, Idio, G2, HG Insights, LinkedIn and more. They’re addressing long-standing issues in both lead and opportunity management. And they’re beginning to innovate in post-sales areas as well, thereby unlocking potential for critical improvements in cross-sell and up-sell yields. Each of these three areas is benefitting from important improvements in demand visibility made possible with new sources of behavioral insight.

Closing the lead-to-opportunity gap

There are two huge pieces missing from our legacy lead-and-opportunity management models – both with multiple components. The first is that leads, as people, are incredibly poor stand-ins for what is actually going on in an account. If you have a known active account but no leads – a “false negative” – instead of de-prioritizing the account, you should work even harder to engage it. If you have lots of activity from a single individual, that certainly tells you something about their particular interests in your content, but it conveys very little information about the account as a whole. More often than anyone wants to admit, such leads are “false positives,” the kind of activity that makes salespeople wary of marketing-generated leads altogether. We need to change the criteria used for alerting sales and make them more buying-team based. By showing the actual people engaged on topics relevant to what you sell, high-quality behavioral data can unmask what’s really going on in an account. Leading purchase intent data practitioners are doing exactly this today to prioritize accounts and the readily identifiable buying teams within them.

Yet even when a buying team can be clearly identified inside an account, our legacy lead management processes provide no simple way of retaining that knowledge and passing it along to sales in a timely fashion. If these people aren’t engaged, they can’t “qualify” as leads in the legacy process. And so, at best, they start receiving nurture emails via marketing automation – leaving sales still uninformed of a clear opportunity that’s aging with every passing day. To solve this problem, innovative practitioners are starting to port contact-level intent data right into conversational marketing tools like Conversica, to fill the “non-lead” process gap, obtain engagement and deliver faster clarity to sales. And it’s working! But capturing a single opportunity in a single account is only the tip of the iceberg.

Achieving real account potential demands parallel processing

While the quota system in sales is useful for focusing sellers’ energies on where they’ve already seen success, most companies have yet to balance this against what an account’s potential really is. Admittedly, some accounts are too small to buy more than one of what you sell. And some are in no position to buy an add-on or more of the same any time soon. But then there are a whole slew of accounts where there really are a lot of other potential buying centers for either the same product or a different one your company also offers. (Think about different divisions, different geographies, maybe even entirely different use-cases.)

In most companies, “opportunity creation” in the CRM system is entirely in the hands of the individual sellers. There is no mechanism for documenting the presence of an opportunity without seller involvement. Marketing and sales teams are now using intent and behavioral data to highlight additional active buying centers – for more of the same product or to cross-sell other offerings – inside accounts. They are then using this information to collaborate more closely with sales management to make sure that these potential “at bats” don’t get lost in gaps between how marketing sees demand and how sales teams are deployed and incentivized. 

Accelerating opportunities all the way to renewal

In many B2B industries, sellers and buyers alike have been moving steadily towards subscription models. And while this has led to a wonderful rise in customer success practices, little change has filtered back into the opportunity management process itself. Exactly how your deals tend to progress is a critical input for sales managers working out how to optimize deal velocity and closure rates. But the rich information about how sellers win and why they might be losing remains fragmented, whether within the CRM system or across multiple systems. The problem is especially obvious if we look at the “opportunity object” within CRM.

According to industry experts, the typical “opportunity” in a CRM system references only one person’s name at the prospect account. If captured at all, the actual working status of the “opportunity” is often delegated entirely to the salesperson. It’s their word as to whether it’s at 20% or 60% or somewhere else in between. (For an in-depth look at better ways of doing this, check out GSP).

Leading practitioners have begun to use contact information embedded in third-party purchase intent data to populate key players into the opportunity itself. In this way, sales management and sellers are able to have richer coaching discussions about the actual people on whom a deal’s outcome is likely to depend. This is a great start, but there’s much more to do.

In a technology business, if that first deal process closes with a win, all eyes will immediately move on to the renewal. Despite this shift in focus, and especially if the customer success team is on its own system, much of the insight gained in the original deal process will not be transferred to the customer success team. An antidote to this information gap is to feed account and contact information from the CRM and the third-party purchase intent monitoring source directly into the customer success system. In this way, customer success will be able to both look back at what went on with key players in the deal and stay abreast of ongoing activity in the account that has bearing on how they should manage the relationship going forward.

Kick-starting account revenue maximization

When a single deal flows into customer success, most of the account handling energy will be dedicated to driving renewal with that team of buyers. At the same time, some significant portion should be reserved for managing the experience of those who were neutral or even detractors in that initial deal. High-quality purchase intent data can contain competitive information critical to this kind of perception management effort. And that’s not only important for the immediate deal in-hand, but this approach also offers significant potential for improving progress with the account as a whole. Here’s how:

In a revenue-maximization strategy, even as customer success is working through onboarding a new set of users, the energized sales team is re-focusing its own attention onto the next opportunity (and the one after that) at the account. Their success at maximizing your company’s share of additional deals will partly depend on how well the combined team succeeds in understanding and managing the needs and sentiments of those who are already getting to know you first hand.

A truly virtuous ABM circle

We’re now truly on the cusp of completing a data-driven virtuous circle that rolls out continuously, starting from better identifying demand, to better managing it, to better leveraging it going forward. New high-quality sources of real purchase intent data deliver an important piece of the solution. The remaining elements involve changes to how we manage and communicate behavioral information across our organizations, processes and actions. I look forward to seeing huge progress in closing these performance gaps across the coming months.





About The Author

John Steinert is the CMO of TechTarget, where he helps bring the power of purchase intent-driven marketing and sales services to technology companies. Having spent most of his career in B2B and tech, John has earned a notable reputation by helping build business for global leaders like Dell, IBM, Pitney Bowes and SAP – as well as for fast-growth, emerging players. He’s passionate about quality content, continuously improving processes and driving meaningful business results.



Understanding and Using rem Units in CSS

CSS units have been the subject of several articles here on SitePoint (such as A Look at Length Units in CSS, The New CSS3 Relative Font Sizing Units, and The Power of em Units in CSS). In this article, we increase the count by having an in-depth look at rem units, which have excellent browser support and a polyfill if you need support for old IE.

This article was updated in December, 2019 to reflect the current state of rem unit sizing with CSS. For more on CSS font and text properties, read our book, CSS Master, 2nd Edition.

What Are rem Units?

You might have encountered the term “R.E.M.” before while listening to the radio or your music player. Unlike their musical counterparts, named for the “Rapid Eye Movement” during deep sleep, in CSS rem stands for “root em”. They won’t make you lose your religion nor believe in a man on the moon. What they can do is help you achieve a harmonious and balanced design.

According to the W3C spec the definition for one rem unit is:

Equal to the computed value of font-size on the root element. When specified on the font-size property of the root element, the rem units refer to the property’s initial value.

This means that 1rem equals the font size of the html element (which for most browsers has a default value of 16px).
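
A quick illustration of that relationship, assuming the 16px browser default:

html { font-size: 16px; } /* 1rem = 16px */
h1 { font-size: 2rem; } /* 2 * 16px = 32px */
p { font-size: 0.875rem; } /* 0.875 * 16px = 14px */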

Rem Units vs. Em Units

The main problem with em units is that they are relative to the font size of their own element. As such they can cascade and cause unexpected results. Let’s consider the following example, where we want lists to have a font size of 12px, in the case where the root font size is the default 16px:

html {
  font-size: 100%;
}

ul {
  font-size: 0.75em;
}

If we have a list nested inside another list, the font size of the inner list will be 75% of the size of its parent (in this case 9px). We can still overcome this problem by using something along these lines:

ul ul {
  font-size: 1em;
}

This does the trick; however, we still have to pay close attention to situations where nesting gets even deeper.

With rem units, things are simpler:

html {
  font-size: 100%;
}

ul {
  font-size: 0.75rem;
}

As all the sizes are referenced from the root font size, there is no more need to cover the nesting cases in separate declarations.

Font Sizing with Rem Units

One of the pioneers of using rem units for font sizing is Jonathan Snook with his Font sizing with REM article, back in May, 2011. Like many other CSS developers, he had to face the problems that em units bring in complex layouts.

At that time, older versions of IE still had a large market share, and they were unable to zoom text that was sized in pixels, so relative units were a practical necessity. However, as we saw earlier, it is very easy to lose track of nesting and get unexpected results with em units.

The main issue with using rem for font sizing is that the values are somewhat difficult to use. Let’s see an example of some common font sizes expressed in rem units, assuming, of course, that the base size is 16px:

  • 10px = 0.625rem
  • 12px = 0.75rem
  • 14px = 0.875rem
  • 16px = 1rem (base)
  • 18px = 1.125rem
  • 20px = 1.25rem
  • 24px = 1.5rem
  • 30px = 1.875rem
  • 32px = 2rem

As we can see, these values are not very convenient for making calculations. For this reason, Snook used a trick called “62.5%”. It was not a new discovery, by any means, as it was already used with em units:

body { font-size:62.5%; } /* =10px */
h1 { font-size: 2.4em; } /* =24px */
p { font-size: 1.4em; } /* =14px */
li { font-size: 1.4em; } /* =14px? */

As rem units are relative to the root element, Snook’s variant of the solution becomes:

html { font-size: 62.5%; } /* =10px */
body { font-size: 1.4rem; } /* =14px */
h1 { font-size: 2.4rem; } /* =24px */

One also had to take into account browsers that didn’t support rem. Thus, the code above would actually have been written this way:

html {
  font-size: 62.5%;
}

body {
  font-size: 14px;
  font-size: 1.4rem;
}

h1 {
  font-size: 24px;
  font-size: 2.4rem;
}

While this solution seems to be close to the status of a “golden rule”, there are people who advise against using it blindly. Harry Roberts writes his own take on the use of rem units. In his opinion, while the 62.5% solution makes calculation easier (as the font sizes in px are 10 times their rem values), it ends up forcing developers to explicitly rewrite all the font sizes in their website.

A third view comes from Chris Coyier of CSS-Tricks. His solution makes use of all three units we’ve encountered so far: he keeps the root size defined in px, modules defined with rem units, and elements inside modules sized with em. This approach makes it easier to manipulate the global size, which scales the type in the modules, while the module content is scaled based on the module font size itself. Louis Lazaris discussed that latter concept in The Power of em Units in CSS.

In the example below you can see how Chris’s approach would look:

See the Pen One Method for Using ems and rems in CSS by SitePoint (@SitePoint) on CodePen.

In practice, there are major frameworks such as Bootstrap 4 and the Material Design guidelines that use rem units for sizing text content.

A special mention goes to Material-UI, a very popular collection of React components. Not only does it size text the same way, but it also offers a mechanism to implement the “10px simplification” we mentioned above.

Another recent project, Every Layout, combines em and rem units in a very inspired way. It comes closest to Chris Coyier’s model outlined earlier, and it uses em units for inline elements like SVG icons, spans, and other similar elements.

As you can see, there is no “silver bullet” solution. The combinations possible are limited only by the imagination of the developers.

The use of em or rem units inside media queries is closely related to the notion of “optimal line length” and how it influences the reading experience. In September 2014, Smashing Magazine published a comprehensive study on web typography called Size Matters: Balancing Line Length And Font Size In Responsive Web Design. Among many other interesting things, the article gives an estimate for optimal line length: between 45 and 75-85 characters (including spaces and punctuation), with 65 as the “ideal” target value.

Using a rough estimate of 1rem ≈ 1 character, we can control the flow of text for a single column of content, in a mobile-first approach:

.container {
  width: 100%;
}

@media (min-width: 85rem) {
  .container {
    width: 65rem;
  }
}

There is, however, one interesting detail about rem and em units when used as units for media queries: they always keep the same value of 1rem = 1em = browser-set font size. The reason for this behavior is explained in the media query spec (emphasis added):

Relative units in media queries are based on the initial value, which means that units are never based on results of declarations. For example, in HTML, the em unit is relative to the initial value of font-size, defined by the user agent or the user’s preferences, not any styling on the page.

Let’s see a quick example of this behavior:

View Media Query Demo on CodePen

First, in our HTML, we have a div element containing a span where we will write the width of the viewport:

<div>Document width: <span></span>px</div>

Next we have two media queries, one with rem units and the other with em units (this uses Sass for simplicity):

html {
  font-size: 62.5%; /* 62.5% of 16px = 10px */

  @media (min-width: 20rem) {
    /* 20 * 16px = 320px */
    background-color: lemonchiffon;
    font-size: 200%; /* 200% of 16px = 32px */
  }

  @media (min-width: 30em) {
    /* 30 * 16px = 480px */
    background-color: lightblue;
    font-size: 300%; /* 300% of 16px = 48px */
  }
}

Finally, we use a bit of jQuery to display the viewport width on the page, updating the value when the window size changes:

$('span').text($(window).width());

$(window).on('resize', function(e) {
  $('span').text($(window).width());
});

We begin with the 62.5% trick to show that the modified root font size does not have any effect on the values used for the media queries. As we change the width of the browser window we can see that the first media query kicks in at 320px (20 × 16px) while the second one becomes active at 480px (30 × 16px). None of the font-size changes we declared had any effect on the breakpoints. The only way to change the media query breakpoint values is to modify the default font size in the browser settings.

For this reason, it doesn’t really matter if we use em or rem units for media query breakpoints. Zurb Foundation (at v6.5.3 at the time of writing) makes use of em units in its media queries.

The Quest for Accessibility

We’ve seen above that the ability to scale based on the root font size makes rem units very useful for accessibility. Google developers recommend using relative units for text sizing.

There is an empirical study run by the people behind the Internet Archive showing that a significant number of users change their default font size in the browser settings. By using rem and other relative units, you respect the users’ decisions about the way they want to browse the web.

Using rem Units for Scaling Documents

A third use we can find for rem units is to build scalable components. By expressing widths, margins, and padding in rem units, it becomes possible to create an interface that grows or shrinks in tune with the root font size. Let’s see how this thing works using a couple of examples.

Using rem Units for Scaling Documents Demo #1

In this first example, we change the root font size using media queries. Just like in the previous section, the purpose is to customize the reading experience for the device used. As element padding values and margins are expressed using rem, the entire component scales with the device size.
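
The embedded demo isn’t reproduced here, but the core idea looks roughly like this (the selector, values and breakpoint are illustrative):

.card {
  width: 20rem;
  padding: 1rem;
  margin-bottom: 1.5rem;
}

@media (min-width: 48rem) {
  html {
    font-size: 112.5%; /* 18px: every rem-based dimension above scales up */
  }
}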

Let’s see another:

See the Pen Dynamic Sizing of Modules with Rem Units by SitePoint (@SitePoint) on CodePen.

In the second example, we do the same alteration using JavaScript. This time the user has control over the size of the interface, adjusting it to fit their needs. Add a way to store these custom values (using either a database, cookies or local storage) and you have the basis of a personalization system built on user preferences.
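
A minimal sketch of the JavaScript side might look like this (using jQuery, as in the earlier demo; the button ids are assumptions, since the embedded code isn’t reproduced here):

$('#bigger').on('click', function() {
  var size = parseFloat($('html').css('font-size'));
  $('html').css('font-size', (size + 1) + 'px'); // every rem-based size follows
});

$('#smaller').on('click', function() {
  var size = parseFloat($('html').css('font-size'));
  $('html').css('font-size', (size - 1) + 'px');
});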

Conclusion

We end here our encounter with CSS rem units. It’s clear that there are many advantages in using these units in our code, like responsiveness, scalability, an improved reading experience, and greater flexibility in defining components. Rem units are not a universal silver bullet, but with careful deployment they can solve many problems that have irked developers for years. It’s up to each one of us to unlock the full potential of rems. Start your editors, experiment, and share your results with the rest of us.

Make the most of your text by understanding what more you could do with text CSS values. In our screencast AtoZ CSS: Text and Typography, we’ll explore text properties, font properties, and more.

It’s not The End of the World as We Know It, rather it’s yet another path on the journey that makes front-end developers “Shiny Happy People“.

Adrian is a front-end developer and UX enthusiast based in Dublin, Ireland. He’s focused on CSS architecture, responsive web design and website performance. He believes in giving back to the community: he’s involved in running three local meetups and has started speaking at international events. In his off time, he loves playing video games and tinkering with custom PC builds.

Pro Tip: Why You Should Stop Using Last-Click Attribution in Google Ads

When was the last time you searched for something, clicked an ad and purchased immediately?

Probably never. That’s why it’s time for marketers to stop using last-click attribution for measuring success in Google Ads.

People often search on multiple devices and do extensive browsing and research before making a purchase. Understanding the impact that higher-funnel keywords have on conversions can help you better utilize spend, cut out waste, and inform other digital marketing channels such as SEO and social media.

As Google continues to push automated bidding strategies like maximize conversions, maximize conversion value and target ROAS (just to name a few), using non-last click (NLC) attribution becomes even more important.

These algorithms are designed to optimize ad spend based on specific criteria, but if they’re only seeing a small slice of the pie, you could be missing out on valuable traffic and giving too much credit to lower-funnel searches, like brand terms.

The Model Comparison Tool report in Google Analytics looks at historical data and gives estimates for how many conversions you would have had if you leveraged a different model. This tool can help you decide which model (Position-Based, Linear, or Time Decay) aligns best with your user flow.

Making the change to NLC is simple:

  1. Log into Google Ads
  2. Navigate to the Conversions tab, then click on the conversion action you want to change
  3. Click “edit settings” and then choose whichever attribution model is right for you

If you have multiple conversions being counted, make sure to change all conversions you’re tracking in the account.

So, how much does NLC improve performance? For one lead generation client in a competitive industry, we saw conversions increase 16% and CPA decrease by 12% just one month after switching to linear attribution. On the e-commerce side, one account saw a 5% increase in ROAS just two weeks after switching from last-click to position-based.

Note that if you’re using smart bidding strategies, moving to NLC could disrupt your campaigns for a few weeks while the algorithm adjusts, so be patient if things don’t improve immediately.

You want data that paints a full picture, and last-click attribution simply won’t cut it anymore. By opting for the right non-last click model instead, you can set yourself up for a huge performance spike.

Pro Tip is a special feature for marketers in our community to share a specific tactic others can use to elevate their performance.





About The Author

Justine is a Lead Strategist at HawkSEM and has experience managing PPC and social media campaigns as well as executing SEO strategies.



Using Bootstrap to Create Material Design Web Apps


Google’s Material Design is ubiquitous in modern mobile apps. Perhaps it’s because most people today have come to love its bold colors, subtle shadows, and minimalist layouts. Wouldn’t it be great if you could easily apply the same design language to your websites and offer visitors a user experience they’re well accustomed to? Well, with MDBootstrap, you can.

MDBootstrap, also known as Material Design for Bootstrap 4, is an open-source UI kit that allows you to use Bootstrap 4, a CSS framework you might already be familiar with, to create fully responsive websites that have a Material Design look and feel. It comes with over 500 components, dozens of animations, and support for several JavaScript frameworks, including jQuery, Vue, and React.

In this step-by-step tutorial, I’ll show you how to add the MDBootstrap UI kit to your web projects and use some of its components.

Or, if you want to get started right away with a professional Bootstrap theme, check out some of our ready-to-go templates.

1. Setup

MDBootstrap is available on cdnjs, and several other CDNs. Therefore, you don’t need to download it to your computer to be able to use it. But adding it to a web page—along with all its dependencies—only takes a few minutes.

Start by creating a new HTML document and opening it using your favorite text editor. Then add the following HTML5 boilerplate code to it:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <title>My Page</title>
  </head>
  <body>
  </body>
</html>

The MDBootstrap UI kit consists of just two minified files: mdb.min.css and mdb.min.js. It does, however, depend on Bootstrap, jQuery, and Font Awesome to provide several features.

So, inside the head tag of the HTML5 document, add the following link tags (the version numbers below are examples; grab the latest URLs from cdnjs):

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.4/css/all.min.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.5.3/css/bootstrap.min.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/mdbootstrap/4.19.2/css/mdb.min.css">

Next, towards the end of the body of the document, add the following script tags (again, the versions are examples):

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.16.1/umd/popper.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.5.3/js/bootstrap.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mdbootstrap/4.19.2/js/mdb.min.js"></script>

At this point, the web page is ready to display Material Design components.

2. Creating a Header

The first component of a Material Design web page is usually a header. It acts as a container for the navigation bar, in which you can not only display your company’s logo and name, but also add links to other important pages of your website. In the Material Design spec, the navigation bar is often referred to as the top app bar.

To create a header, all you need to do is use the header tag. Creating a navigation bar, however, is a little more involved.

First, you must create a nav tag and assign the navbar class to it. This creates a basic navigation bar with a white background. If you want to give it a color from the Material palette, you can use one of the many color classes available. They have intuitive names such as purple, red, and blue-grey.

Inside the nav tag, you can then use the navbar-brand class while specifying your company’s name or logo.

Note that when you’re using dark colors for the navigation bar, you should add the navbar-dark class to it to ensure that the text inside is readable.

Including links to other pages of your website is as easy as creating an unordered list having the navbar-nav class, with its items having the nav-item class.
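
Putting that together, a minimal header might look like this (the brand name and links are placeholders):

<header>
  <nav class="navbar navbar-dark purple">
    <a class="navbar-brand" href="#">My Company</a>
    <ul class="navbar-nav ml-auto">
      <li class="nav-item">
        <a class="nav-link" href="#">Home</a>
      </li>
      <li class="nav-item">
        <a class="nav-link" href="#">About</a>
      </li>
    </ul>
  </nav>
</header>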

In the above code, the ml-auto class pushes the links to the opposite end of the navigation bar.

If you try looking at the web page in a browser now, you should see a header that looks like this:

A purple navigation bar

3. Using the Grid

To add actual content to the web page, you’ll want to use the responsive grid system that Bootstrap offers. For the sake of a realistic example, let’s add two cards to the page, placed in a single row having two columns.

Start by creating a div element with the container class; it will serve as a container for all the rows and columns we add to this document. Inside it, you can create rows and columns using the row and col-md classes. Because all of this is going to be the main content of the page, it’s a good idea to wrap everything in a main tag.
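
As a sketch, the skeleton of the main content looks like this (the comments mark where the cards will go):

<main>
  <div class="container">
    <div class="row">
      <div class="col-md">
        <!-- first card -->
      </div>
      <div class="col-md">
        <!-- second card -->
      </div>
    </div>
  </div>
</main>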

The col-md class ensures that both the columns have the same width and fit inside the row on screens whose width is at least 768 px. To target smaller or larger screens, feel free to experiment with the col-sm and col-lg classes.

You can now create cards inside both the columns using the card class. With MDBootstrap, your cards can have images, titles, buttons, and text. Here’s the code for a sample card that has all of them (the image source and exact markup are illustrative):

<div class="card">
  <img class="card-img-top" src="tomatoes.jpg" alt="Cherry tomatoes">
  <div class="card-body">
    <h4 class="card-title">Cherry tomatoes to get costlier</h4>
    <p class="card-text">With a no-deal Brexit, you're likely to pay 10% more for cherry tomatoes next month.</p>
    <a href="#" class="btn btn-primary">More</a>
  </div>
</div>

Similarly, go ahead and add another card to the page, this time in the second column. You can reuse the same structure with a different image, the title “Raw fruits and vegetables for breakfast?”, and the text “Raw fruits and vegetables that have been thinly sliced are a great way to start your day.” For best results, I suggest you use images that have the same dimensions.

As you may have noticed, the kit has intuitively-named classes, such as card-title and card-text, that help you quickly style the contents of your cards. Similarly, the btn and btn-primary classes help you give Material styles to your buttons.

With all the above changes, your web page should look like this:

Page displaying two cards

4. Creating a Form

Material Design forms have a very distinct look and feel. The design language goes into exhaustive detail about what each form element should look like, when it should be used, and where it should be placed.

MDBootstrap has styles for several HTML5 form elements. By using them, you can be sure that your forms conform to most of the guidelines of Material Design.

Let us now create a simple form your visitors can use to sign up for a newsletter. It will have two text fields, one for a name and one for an email address, plus a submit button.

The form will need its own row and column, so you must create them first. Because it’s alone, the column will stretch to fill the entire row by default. By qualifying the col-md class with a number, and by using the offset-md class, you can control the size and the position of the column in the row.
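
For instance, a centered half-width column with some vertical breathing room could look like this (the class values are one reasonable choice, not the only one):

<div class="row mt-4 mb-4">
  <div class="col-md-6 offset-md-3">
    <!-- the form card goes here -->
  </div>
</div>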

In the above code, the mt-4 and mb-4 classes give the row appropriate top and bottom margins.

Inside the column, create another card. It’ll serve as a container for the form and all the text associated with it. Optionally, you can use the card-header class to give a header to the card, and thus the form too, such as “Subscribe to us”.

To create the form, all you need is the form tag. But you must remember to add the form-control class to each text field you add to the form. If a field has a label associated with it, you must also wrap the two inside a div element whose class is md-form. The card body can also hold some introductory text, such as: “Stay updated and get the latest information about all our offers and discounts right into your inbox.”
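
Here is a minimal sketch of the card body with the form, using the classes just described (the ids and label text are illustrative):

<div class="card-body">
  <p class="card-text">Stay updated and get the latest information about all our offers and discounts right into your inbox.</p>
  <form>
    <div class="md-form">
      <input type="text" id="name" class="form-control">
      <label for="name">Your name</label>
    </div>
    <div class="md-form">
      <input type="email" id="email" class="form-control">
      <label for="email">Your email</label>
    </div>
    <button type="submit" class="btn btn-primary">Subscribe</button>
  </form>
</div>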

Here’s what the form should look like now:

The completed form

Conclusion

You now know how to create simple web pages using the Material Design for Bootstrap 4 UI kit. In this introductory tutorial, you learned how to use several important components offered by the kit, such as navigation bars, cards, and form controls. You also learned the basics of positioning the components using Bootstrap 4’s grid system.

To learn more about MDBootstrap, refer to the official documentation.

Unlike SASS variables, CSS Custom Properties can be modified in class selectors, allowing you to create abstractions and reduce the size of your CSS.

Let me show you an example!

In the CodyHouse Framework, we use the .grid-gap-{size} utility classes to set the spacing among grid items:

.grid {
  display: flex;
  flex-wrap: wrap;

  > * {
    flex-basis: 100%;
  }
}

.grid-gap-xxxs {
  margin-bottom: calc(-1 * var(--space-xxxs));
  margin-left: calc(-1 * var(--space-xxxs));

  > * {
    margin-bottom: var(--space-xxxs);
    margin-left: calc(var(--space-xxxs));
  }
}

.grid-gap-xxs {
  margin-bottom: calc(-1 * var(--space-xxs));
  margin-left: calc(-1 * var(--space-xxs));

  > * {
    margin-bottom: var(--space-xxs);
    margin-left: calc(var(--space-xxs));
  }
}

.grid-gap-xs {
  margin-bottom: calc(-1 * var(--space-xs));
  margin-left: calc(-1 * var(--space-xs));

  > * {
    margin-bottom: var(--space-xs);
    margin-left: calc(var(--space-xs));
  }
}

.grid-gap-sm {
  margin-bottom: calc(-1 * var(--space-sm));
  margin-left: calc(-1 * var(--space-sm));

  > * {
    margin-bottom: var(--space-sm);
    margin-left: calc(var(--space-sm));
  }
}

.grid-gap-md {
  margin-bottom: calc(-1 * var(--space-md));
  margin-left: calc(-1 * var(--space-md));

  > * {
    margin-bottom: var(--space-md);
    margin-left: calc(var(--space-md));
  }
}

.grid-gap-lg {
  margin-bottom: calc(-1 * var(--space-lg));
  margin-left: calc(-1 * var(--space-lg));

  > * {
    margin-bottom: var(--space-lg);
    margin-left: calc(var(--space-lg));
  }
}

.grid-gap-xl {
  margin-bottom: calc(-1 * var(--space-xl));
  margin-left: calc(-1 * var(--space-xl));

  > * {
    margin-bottom: var(--space-xl);
    margin-left: calc(var(--space-xl));
  }
}

.grid-gap-xxl {
  margin-bottom: calc(-1 * var(--space-xxl));
  margin-left: calc(-1 * var(--space-xxl));

  > * {
    margin-bottom: var(--space-xxl);
    margin-left: calc(var(--space-xxl));
  }
}

.grid-gap-xxxl {
  margin-bottom: calc(-1 * var(--space-xxxl));
  margin-left: calc(-1 * var(--space-xxxl));

  > * {
    margin-bottom: var(--space-xxxl);
    margin-left: calc(var(--space-xxxl));
  }
}

Because there’s a lot of repetition, a few weeks ago we decided to use CSS custom properties to simplify these classes.

The first step was creating an abstraction that contains the code we repeat for each utility class:

[class*="grid-gap"] {
  margin-bottom: calc(-1 * var(--grid-gap, 1em));
  margin-left: calc(-1 * var(--grid-gap, 1em));

  > * { 
    margin-bottom: var(--grid-gap, 1em); 
    margin-left: var(--grid-gap, 1em);
  }
}

This attribute selector looks for classes that contain the “grid-gap” string (all the grid-gap utility classes). Note that we’ve replaced the --space-{size} variables with a new --grid-gap variable.

In our .grid class, we set the --grid-gap variable to 0px (the default value).

.grid {
  --grid-gap: 0px;
  display: flex;
  flex-wrap: wrap;

  > * {
    flex-basis: 100%;
  }
}

Now we can modify, for example, the .grid-gap-xxxxs class as follows:

.grid-gap-xxxxs { --grid-gap: var(--space-xxxxs); }

We no longer need that whole chunk of margin code; we can just modify the value of the --grid-gap variable.

If we do the same for all the utility classes, we end up with the following:

.grid {
  --grid-gap: 0px;
  display: flex;
  flex-wrap: wrap;

  > * {
    flex-basis: 100%;
  }
}

[class*="grid-gap"] {
  margin-bottom: calc(-1 * var(--grid-gap, 1em));
  margin-left: calc(-1 * var(--grid-gap, 1em));

  > * { 
    margin-bottom: var(--grid-gap, 1em); 
    margin-left: var(--grid-gap, 1em);
  }
}

.grid-gap-xxxxs { --grid-gap: var(--space-xxxxs); }
.grid-gap-xxxs  { --grid-gap: var(--space-xxxs); }
.grid-gap-xxs   { --grid-gap: var(--space-xxs); }
.grid-gap-xs    { --grid-gap: var(--space-xs); }
.grid-gap-sm    { --grid-gap: var(--space-sm); }
.grid-gap-md    { --grid-gap: var(--space-md); }
.grid-gap-lg    { --grid-gap: var(--space-lg); }
.grid-gap-xl    { --grid-gap: var(--space-xl); }
.grid-gap-xxl   { --grid-gap: var(--space-xxl); }
.grid-gap-xxxl  { --grid-gap: var(--space-xxxl); }
.grid-gap-xxxxl { --grid-gap: var(--space-xxxxl); }
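
For reference, applying these utilities in markup looks something like this (the item markup is illustrative; only the class names come from the framework):

<div class="grid grid-gap-md">
  <div>Item 1</div>
  <div>Item 2</div>
  <div>Item 3</div>
</div>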

This optimization reduces the CSS size of this example by more than half! Obviously, you can apply this technique only when it’s possible to abstract some rules (I’m not suggesting CSS custom properties will cut your CSS in half across the board). Still, this tutorial is an example of the kind of code optimization that CSS variables make possible.

P.S. CSS custom properties are supported in all modern browsers.

If you’re interested in learning more about our grid system, download our framework or check the Grid and Layout page of the docs.


The other day I was listening to the latest Shop Talk Show podcast while out for my morning run. They had Jen Simmons on the show, talking about (among other things) the standards process. It’s a great conversation, as you would expect—Jen’s super smart and has been doing great, meaningful work for a long time.

Among the things she discussed is the care that has to go into new browser features, because once shipped, a feature is there for good. Most of us don’t have to worry about things at that same level because:

…you can always delete all your code later. You can always say, ‘Oh, this thing I shipped quickly for my project, that was a bad idea. Let’s obliterate it and re-do it better later.’ But with the web, there’s two things. One, we don’t get to change it and ship it again later, almost ever. If we hurry up and we ship subgrid, and subgrid is crappy, there’s no, like, fixing it. We’re stuck with it.

This permanence to the web has always been one of the web’s characteristics that astounds me the most. It’s why you can load up sites today on a Newton, and they’ll just work. That’s in such sharp contrast to, well, everything I can think of. Devices aren’t built like that. Products in general, digital or otherwise, are rarely built like that. Native platforms aren’t built like that. That commitment to not breaking what has been created is simply incredible.

Jen’s other point, too, is an important one to remember:

…And the other thing is that we’re not solving for one website. We’re not solving for facebook.com, or youtube.com, or codepen.io or for…whatever. We’re solving for the entire web and every use case ever all at the same time.

She gives an example, later on, discussing how even something seemingly simple, underlines, becomes so much more intense when you need to solve for everyone:

Well, what about these languages that are typeset vertically? What is the typography in Japan? What’s needed for this kind of script that is completely different than the Latin alphabet? And there’s a long conversation about that and then, ‘Wow, we’re shipping something that actually works for all the languages and all the scripts around the world.’ Or it almost does and there’s a few pieces missing but we’re dedicated to going ahead and finishing those pieces as soon as we can.

There’s a lot of thought and consideration that goes into deciding what makes its way into this incredible platform and what doesn’t.

Another person I have a ton of respect for and who has been doing incredibly important work for a long time is Alex Russell. In particular, he’s put an absurd amount of time and energy into advocating for being careful about the overreliance on JavaScript that is leading to much of the web’s current performance issues.

I thought about Jen’s comments when I saw one person stating that Alex was trying to “sell you on fairy tales of Use the Platform”.

I don’t want to single that person out because I’m not here to encourage a pile-on, but also because they’re hardly the first person to express that general sentiment. But, that statement about the “fairy tale of Use the Platform” has really stuck with me, because it feels…wrong.

So much care and planning has gone into creating the web platform, to ensure that even as new features are added, they’re added in a way that doesn’t break the web for anyone using an older device or browser. Can you say the same for any framework out there? I don’t mean that to be perceived as throwing shade (as the kids say). Building the actual web platform requires a deeper level of commitment to these sorts of things out of necessity.

And while some frameworks are just now considering how they scale and grow to different geographies with different constraints and languages, the web platform has been built with that in mind for years. The standards process feels so difficult to many of us because of the incredible amount of minutiae that becomes critical. That security issue that might be a problem? Maybe you feel comfortable taking that risk, but when you’re creating something that everyone, everywhere is going to use, it becomes a valid reason for not shipping.

People talk a lot about the web being accessible or performant by default, and while it’s not perfect, it’s also not that far from being true. Creating the platform means you have to prioritize these things.

If you care at all about reaching people outside of the little bubbles we all live in, using the platform can’t be a fairy tale: it has to be the foundation for everything that we build.

That doesn’t mean that foundation is enough, or always right.

Are there limitations? Absolutely! There’s a reason why we still have a standards body, 26 years or so after HTML was first specified: because the work isn’t done and never (knock on wood) will be. (It’s also why I find it very encouraging that folks like Nicole Sullivan are hard at work identifying some of the things we need frameworks for that should probably be in the browser instead.)

The web thrives on a healthy tension between stability and the chaos of experimentation. It’s perfectly fine, and necessary at times, to use tools to work around issues and limitations we have on the web. I have no problem with that at all.

But it’s important that we do so very carefully because there are definite trade-offs.

To create the standards that make it into the platform, careful consideration is given to each and every feature to minimize the security risks. Every new feature has to be carefully considered from an accessibility perspective to make sure that not only does it not cause harm, but that assistive technology has all the information it needs to provide people with a usable experience. Performance has to be top of mind for each new standard, to ensure that shipping it won’t cause undue bloat or other performance issues.

And each of these things must not be considered simply in one single context, but for all sites and across geographies, languages, devices, and browsing clients.

Can you say, with confidence, that the same level of care is given in the tools and frameworks we use or build?

Use the platform until you can’t, then augment what’s missing. And when you augment, do so with care because the responsibility of ensuring the security, accessibility, and performance that the platform tries to give you by default now falls entirely on you.

the-benefits-of-using-marketing-automation

The proliferation of digital channels and devices has made it more difficult for B2B marketers to accurately target prospects with the right messages, on the right devices, at the right times. Prospects are managing more of the buying process themselves, creating decision short lists by researching brand websites and social channels without ever speaking to a sales rep. To be effective, B2B marketers must be creative, targeted and aligned with sales goals; they must also have greater visibility into buyer attributes and behaviors.

Faced with these challenging market dynamics and increasing ROI pressure, B2B marketers at companies of all sizes can gain these benefits from a marketing automation platform:

  • Increased marketing efficiency. Automating time-consuming, manual tasks around content creation, management and personalization; campaign scheduling and execution; data hygiene; communication with sales; and lead nurturing saves time and improves productivity.
  • Enhanced ability to generate more and better qualified leads. Marketing automation can combine multiple criteria, including demographic, firmographic and behavioral data with a lead scoring system to generate and identify sales-qualified leads.
  • A multichannel view of prospect behavior. Today’s marketing automation platforms are integrating multiple channels and devices, including social media and mobile, to create more comprehensive prospect profiles and holistic views of prospect behavior.
  • Better alignment of sales and marketing goals. Marketing automation software can help align sales and marketing efforts to ensure that sales reps are working with sales-ready leads. By working cooperatively to set scoring parameters and define qualified leads, sales and marketing become one team.
  • Improved lead conversion and ROI. Numerous studies have found that using a marketing automation system can increase conversions. Forrester found that B2B marketers implementing marketing automation experience a 10% increase in their sales-pipeline contribution.

If you’re considering a marketing automation platform, this report can help you decide if you need one. “B2B Marketing Automation Platforms: A Marketer’s Guide” examines the market for marketing automation platforms and the considerations involved in implementing this software in your business.

Included in this 48-page report are profiles of 14 leading marketing automation vendors, pricing charts, capabilities comparisons and recommended steps for evaluating and purchasing.

Visit Digital Marketing Depot to download your copy.



About The Author

Digital Marketing Depot is a resource center for digital marketing strategies and tactics. We feature hosted white papers and E-Books, original research, and webcasts on digital marketing topics — from advertising to analytics, SEO and PPC campaign management tools to social media management software, e-commerce to e-mail marketing, and much more about internet marketing. Digital Marketing Depot is a division of Third Door Media, publisher of Search Engine Land and Marketing Land, and producer of the conference series Search Marketing Expo and MarTech. Visit us at http://digitalmarketingdepot.com.



using-github-template-repos-to-jump-start-static-site-projects

If you’re getting started with static site generators, did you know you can use GitHub template repositories to quickly start new projects and reduce your setup time?

Most static site generators make installation easy, but each project still requires configuration after installation. When you build a lot of similar projects, you may duplicate effort during the setup phase. GitHub template repositories may save you a lot of time if you find yourself:

  • creating the same folder structures from previous projects,
  • copying and pasting config files from previous projects, and
  • copying and pasting boilerplate code from previous projects.

Unlike forking a repository, which allows you to use someone else’s code as a starting point, template repositories allow you to use your own code as a starting point, where each new project gets its own, independent Git history. Check it out!

Let’s take a look at how we can set up a convenient workflow. We’ll set up a boilerplate Eleventy project, turn it into a Git repository, host the repository on GitHub, and then configure that repository to be a template. Then, next time you have a static site project, you’ll be able to come back to the repository, click a button, and start working from an exact copy of your boilerplate.

Are you ready to try it out? Let’s set up our own static site using GitHub templates to see just how much templates can help streamline a static site project.

I’m using Eleventy as an example of a static site generator because it’s my personal go-to, but this process will work for Hugo, Jekyll, Nuxt, or any other flavor of static site generator you prefer.

If you want to see the finished product, check out my static site template repository.

First off, let’s create a template folder

We’re going to kick things off by running each of these in the command line:

cd ~
mkdir static-site-template
cd static-site-template

These three commands change directory into your home directory (~ in Unix-based systems), make a new directory called static-site-template, and then change directory into the static-site-template directory.

Next, we’ll initialize the Node project

To work with Eleventy, we need to install Node.js, which allows your computer to run JavaScript code outside of a web browser.

Node.js comes with node package manager, or npm, which downloads node packages to your computer. Eleventy is a node package, so we can use npm to fetch it.
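Not sure whether Node.js is already on your machine? You can check from the command line; both of these should print a version number if everything is installed:

node -v    # prints the installed Node.js version
npm -v     # prints the installed npm version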

Assuming Node.js is installed, let’s head back to the command line and run:

npm init

This creates a file called package.json in the directory. npm will prompt you for a series of questions to fill out the metadata in your package.json. After answering the questions, the Node.js project is initialized.
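Side note: if you’d rather skip the questionnaire and accept the defaults, npm has a flag for that:

npm init -y    # creates package.json with default answers, no prompts

You end up with the same package.json either way, and you can always edit it later.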

Now we can install Eleventy

Initializing the project gave us a package.json file which lets npm install packages, run scripts, and do other tasks for us inside that project. npm uses package.json as an entry point in the project to figure out precisely how and what it should do when we give it commands.

We can tell npm to install Eleventy as a development dependency by running:

npm install -D @11ty/eleventy

This will add a devDependency entry to the package.json file and install the Eleventy package to a node_modules folder in the project.

The cool thing about the package.json file is that any other computer with Node.js and npm can read it and know to install Eleventy in the project node_modules directory without having to install it manually. See, we’re already streamlining things!
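If you’re curious, the new entry in package.json looks something like this (the other fields generated by npm init are omitted here, and your version number will vary):

{
  "devDependencies": {
    "@11ty/eleventy": "^0.9.0"
  }
}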

Configuring Eleventy

There are tons of ways to configure an Eleventy project. Flexibility is Eleventy’s strength. For the purposes of this tutorial, I’m going to demonstrate a configuration that provides:

  • A folder to cleanly separate website source code from overall project files
  • An HTML document for a single page website
  • CSS to style the document
  • JavaScript to add functionality to the document

Hop back in the command line. Inside the static-site-template folder, run these commands one by one (excluding the comments that appear after each # symbol):

mkdir src               # creates a directory for your website source code
mkdir src/css           # creates a directory for the website styles
mkdir src/js            # creates a directory for the website JavaScript
touch src/index.html    # creates the website HTML document
touch src/css/style.css # creates the website styles
touch src/js/main.js    # creates the website JavaScript

This creates the basic file structure that will inform the Eleventy build. However, if we run Eleventy right now, it won’t generate the website we want. We still have to configure Eleventy to understand that it should only use files in the src folder for building, and that the css and js folders should be processed with passthrough file copy.

You can give this information to Eleventy through a file called .eleventy.js in the root of the static-site-template folder. You can create that file by running this command inside the static-site-template folder:

touch .eleventy.js

Edit the file in your favorite text editor so that it contains this:

module.exports = function(eleventyConfig) {
  eleventyConfig.addPassthroughCopy("src/css");
  eleventyConfig.addPassthroughCopy("src/js");
  return {
    dir: {
      input: "src"
    }
  };
};

Lines 2 and 3 tell Eleventy to use passthrough file copy for CSS and JavaScript. Line 6 tells Eleventy to use only the src directory to build its output.

Eleventy will now give us the output we expect. Let’s put that to the test by running this in the command line:

npx @11ty/eleventy

The npx command allows npm to execute code from the project’s node_modules directory without touching the global environment. You’ll see output like this:

Writing _site/index.html from ./src/index.html.
Copied 2 items and Processed 1 file in 0.04 seconds (v0.9.0)

The static-site-template folder should now have a new directory in it called _site. If you dig into that folder, you’ll find the css and js directories, along with the index.html file.

This _site folder is the final output from Eleventy. It is the entirety of the website, and you can host it on any static web host.
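As an optional convenience, you can record these commands as npm scripts so future-you doesn’t have to remember them. This isn’t required, and the script names are just a suggestion (--serve comes up later in this post); add something like this to package.json:

"scripts": {
  "build": "eleventy",
  "serve": "eleventy --serve"
}

With those in place, npm run build and npm run serve invoke the locally installed Eleventy, just like npx does.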

Without any content, styles, or scripts, the generated site isn’t very interesting.

Let’s create a boilerplate website

Next up, we’re going to put together the baseline for a super simple website we can use as the starting point for all projects moving forward.

It’s worth mentioning that Eleventy has a ton of boilerplate files for different types of projects. It’s totally fine to go with one of these, though I often find I wind up needing to roll my own. So that’s what we’re doing here.

Open src/index.html in your favorite text editor and fill it with this basic markup:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Static site template</title>
    <meta name="description" content="A static site template">
    <link rel="stylesheet" href="css/style.css">
    <script src="js/main.js" defer></script>
  </head>
  <body>
    <h1>Great job making your website template!</h1>
  </body>
</html>

We may as well style things a tiny bit, so let’s add this to src/css/style.css:

body {
  font-family: sans-serif;
}

And we can confirm JavaScript is hooked up by adding this to src/js/main.js:

(function() {
  console.log('Invoke the static site template JavaScript!');
})();

Want to see what we’ve got? Run npx @11ty/eleventy --serve in the command line. Eleventy will spin up a server with Browsersync and provide the local URL, which is probably something like localhost:8080.

Even the console tells us things are ready to go!

Let’s move this over to a GitHub repo

Git is the most commonly used version control system in software development. Most Unix-based computers come with it installed, and you can turn any directory into a Git repository by running this command:

git init

We should get a message like this:

Initialized empty Git repository in /path/to/static-site-template/.git/

That means a hidden .git folder was added inside the project directory, which allows the Git program to run commands against the project.

Before we start running a bunch of Git commands on the project, we need to tell Git about files we don’t want it to touch.

Inside the static-site-template directory, run:

touch .gitignore

Then open up that file in your favorite text editor. Add this content to the file:

_site/
node_modules/

This tells Git to ignore the node_modules directory and the _site directory. Committing every single Node.js module to the repo could make things really messy and tough to manage. All that information is already in package.json anyway.

Similarly, there’s no need to version control _site. Eleventy can generate it from the files in src, so no need to take up space in GitHub. It’s also possible that if we were to:

  • version control _site,
  • change files in src, or
  • forget to run Eleventy again,

then _site would reflect an older build of the website, and future developers (or a future version of yourself) might accidentally use an outdated version of the site.

Git is version control software, and GitHub is a Git repository host. There are other Git hosts, like Bitbucket or GitLab, but since we’re talking about a GitHub-specific feature (template repositories), we’ll push our work up to GitHub. If you don’t already have an account, go ahead and join GitHub. Once you have an account, create a GitHub repository and name it static-site-template.

GitHub will ask a few questions when setting up a new repository. One of those is whether we want to create a new repository on the command line or push an existing repository from the command line. Neither of these choices is exactly what we need, since they assume we either have nothing at all or have already been using Git locally. The static-site-template project already exists and has a Git repository initialized, but doesn’t yet have any commits.

So let’s ignore the prompts and instead run the following commands in the command line. Make sure to have the URL GitHub provides in the command from line 3 handy:

git add .
git commit -m "first commit"
git remote add origin https://github.com/your-username/static-site-template.git
git push -u origin master

This adds the entire static-site-template folder to the Git staging area, commits it with the message “first commit,” adds the GitHub repository as a remote named origin, and pushes the master branch up to that remote.

Let’s template-ize this thing

OK, this is the crux of what we have been working toward. GitHub template repositories allow us to use the repository we’ve just created as the foundation for other projects in the future — without having to redo all the work we’ve done to get here!

Click Settings on the GitHub landing page of the repository to get started. On the settings page, check the Template repository checkbox.

Now when we go back to the repository page, we’ll get a big green button that says Use this template. Click it and GitHub will create a new repository that’s a mirror of our new template. The new repository will start with the same files and folders as static-site-template. From there, download or clone that new repository to start a new project with all the base files and configuration we set up in the template project.
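If you prefer working from the terminal, the GitHub CLI (gh) can create a repository from a template as well. Assuming you have gh installed and authenticated, a run might look like this (the repository names are placeholders):

gh repo create my-new-site --template your-username/static-site-template --public --clone
cd my-new-site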

We can extend the template for future projects

Now that we have a template repository, we can use it for any new static site project that comes up. However, you may find that a new project has needs beyond what’s been set up in the template. For example, let’s say you need to tap into Eleventy’s templating engine or data processing power.
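To give a small taste of the data side: Eleventy picks up files in a _data directory inside the input folder (so src/_data in our setup) and exposes them to every template. The file name and fields below are purely illustrative:

// src/_data/site.js: a global data file.
// Eleventy makes this object available to every template as `site`.
module.exports = {
  title: "My new project",
  author: "Your Name"
};

A template could then output {{ site.title }} anywhere it needs the project name.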

Go ahead and build on top of the template as you work on the new project. When you finish that project, identify pieces you want to reuse in future projects. Perhaps you figured out a cool hover effect on buttons. Or you built your own JavaScript carousel element. Or maybe you’re really proud of the document design and hierarchy of information.

If you think anything you did on a project might come up again on your next run, remove the project-specific details and add the new stuff to your template project. Push those changes up to GitHub, and the next time you use static-site-template to kick off a project, your reusable code will be available to you.

There are some limitations to this, of course

GitHub template repositories are a useful tool for avoiding repetitive setup on new web development projects. I find this especially useful for static site projects. These template repositories might not be as appropriate for more complex projects that require external services like databases with configuration that cannot be version-controlled in a single directory.

Template repositories allow you to ship reusable code you have written so you can solve a problem once and use that solution over and over again. But while your new solutions will carry over to future projects, they won’t be ported backwards to old projects.

This is a useful process for sites with very similar structure, styles, and functionality. Projects with wildly varied requirements may not benefit from this code-sharing, and you could end up bloating your project with unnecessary code.

Wrapping up

There you have it! You now have everything you need to start a static site project with Eleventy and to repurpose that setup on future projects. GitHub templates are so handy for kicking off projects quickly where we otherwise would have to rebuild the same wheel over and over. Use them to your advantage and enjoy a jump start on your projects moving forward!