Organizations often encourage collaboration and transparency by promoting day-to-day activities like pairing, standups, and code reviews. While these activities are important, we also find that a solid build and deployment pipeline can be the difference between a well-meaning team hitting the target and just missing it. That’s because working software provides a tangible focal point for collaboration and transparency across skill sets.
The best build and deployment pipelines allow stakeholders and the project team to engage together early and often to focus on the product more and the mechanics of the process less. Let’s look at what makes a solid build and deployment pipeline regardless of the technology stack.
No Thought Deployment
The best pipelines don’t require our teams to think about deploying. Instead, the team just decides if code is ready to be shared via a push to GitHub (or your favorite Git hosting service). The pipeline takes care of the rest.
Heroku and Netlify do this really well. When a Heroku or Netlify app is connected to a GitHub repository, you can configure it to deploy on each commit to any branch.
In some situations, purpose-built services like Heroku or Netlify aren’t an option. Clients may have infrastructure constraints or other requirements that call for different tooling. That’s OK. The same experience can be created with tools like Jenkins, Bamboo, and Azure DevOps Services.
If you’re not able to automatically deploy to production at the moment, targeting a pre-production environment is a great first step. It allows teams to see early the effects of changes integrated with those of their teammates. Product teams can interact and provide feedback on features they had otherwise been describing using lower fidelity techniques like wireframes, designs, or descriptions. Collaboration for the win!
Automated Data and Schema Transformations
You might ask how automated deployment is possible when it comes to data transformations, schema changes, or other stateful application data changes. These can all be worked out, I assure you. Schema migrations are a common and reliable mechanism built into most modern platforms, including .NET, PHP, Ruby, and Node.
For example, Knex.js allows you to craft migrations to create and alter tables for popular relational databases like MySQL, PostgreSQL, and Microsoft SQL Server. When included in the Git repository, the commits provide a history of schema changes and data transformations that correspond to dependent application code changes. These can follow the same pipeline to production as any other change to your system.
Of course, you might be concerned about tight dependencies between application code and data. Or maybe you’re worried about the performance impact of data transformations on large data sets. At times, it can be helpful to execute migrations and transformations offline or in the background. In order to do this, we need to distinguish between deploying code to an environment and releasing the functionality that code enables to users.
Decoupling deployment from release enables high-frequency deployments. This allows you to make features available (released) to some user groups before they are available across your user base. By building your application in this manner and moving data transformations offline, you can deploy and enable features early for users as their data is transformed. This brings to mind data partitioning techniques aligned with feature flags for progressive feature enablement.
For example, you may deploy new functionality and its corresponding data changes to production while disabling those features via feature flags. Via a scheduled task, background worker, or queuing, data and schema changes can be executed while normal activity continues on the system. Once the changes are available, the feature flag can be enabled for a set of users, and they see the new behavior. At no point was the system unavailable for them.
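The flag check itself can be tiny. Here’s a minimal sketch of the idea; the flag name, user IDs, and read paths are all invented for illustration:

```javascript
// Minimal per-user feature flag — a sketch, not a real flag service.
const enabledFor = {
  'new-schema': new Set(['alice', 'bob']), // pilot group for the new behavior
};

function isEnabled(flag, userId) {
  const users = enabledFor[flag];
  return users ? users.has(userId) : false;
}

// Deployed code carries both read paths; the flag decides which one runs.
function readUserData(userId) {
  return isEnabled('new-schema', userId)
    ? 'read from the transformed table'
    : 'read from the legacy table';
}
```

Real systems usually back this with a flag service or a database table so the enabled group can change without a deploy.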
Collaboration Enabled
We’re always looking for ways to tighten feedback loops. Once code is merged, it’s too late to undo. While we agree with the authors of the book Accelerate that teams should roll forward with fixes rather than revert changes, we’d love to avoid the need for fixes in the first place.
To do this, we look for pipelines that allow our teams to deploy (again, without thought) potential changes to the product. In development terms, this means deploying a pull request. By integrating automatic deployments into the natural source code collaboration tools, we make high fidelity collaboration and first-class confirmation of changes a natural part of development.
Out of the box, Heroku and Netlify both support this model of deploying pull requests automatically, which is wonderful! Continuous integration tools have long had the ability to build and, should you choose, deploy branch code. On the web, this becomes a little trickier because you need to provide a unique URL for each instance. It can be done, but we like to rely on purpose-built services like Heroku and Netlify where we can.
Precooked and Ready to Run
We want the result of a build to create a deployable artifact. The form that the artifact takes isn’t really important. We have some projects for which CircleCI simply creates and stores a tarball or zip file after successfully building. For some, a Docker image is created and made available. For you, it may be an installer, such as Apple’s DMG format, Microsoft’s MSI, or one of the popular Linux formats. The goal is the same: produce a runnable artifact that can be deployed many times without change.
By building once and storing the deployable results, our deployments are faster, more reliable, and we can have more confidence in the artifact working in each environment. Speed and reliability create tight feedback loops and drive up confidence—two things we’re always trying to achieve.
Sparkbox works with organizations using varying processes, tools, and techniques. Over time, we have honed a set of tools that work exceptionally well. Above all, our goal is to be flexible while advocating for the practices that we see bringing success to projects. A build and deployment pipeline that encourages collaboration and transparency, without getting in the way, brings an irreplaceable heartbeat to successful projects.
More and more companies are starting to realize the benefits remote work brings and are considering hiring their first remote team members.
A lot of companies are already working remotely from the office
Communication between team members is often already happening in a virtual office even if people are sitting in the physical office.
Sometimes team members might be sitting next to each other but still communicate through text messages or a chat channel so as not to disturb each other’s workflow. This way, they can work through the messages when they have the time.
If your company is already mostly communicating virtually, switching to remote can be quite easy. You just stop coming to the office!
This is exactly what Marketgoo did – they started building their do-it-yourself SEO tool in the office and gradually moved remote. First the marketing manager, and then the whole development team, started working remotely. When there were no people left in the office, they did the only reasonable thing – closed the office for good! “Co-working spaces, coffee shops (our biz dev guy is a regular at Tim Horton’s!), working from the beach, home office, etc.,” answered Marketgoo’s marketing manager Larissa when I asked her where the team mostly works now that they don’t have an office anymore.
Marketgoo’s CEO shares some great feedback on how he feels about going fully remote: “4 months after closing the office and I don’t miss it at all! I am way more productive when working on the go. Before, I was tied to do big stuff only at the desk. I adapt much better now to any situation or venue. Only con is that I feel like a hermit sometimes.”
Feeling like a hermit is quite common as loneliness has been reported as one of the biggest struggles that comes with remote work.
Advice from remote companies
Remote companies that are the frontrunners of the shift to remote work are usually very open and friendly about sharing their experiences.
Some leading remote companies have shared their advice on RemoteHub for companies planning to start working remotely.
GitLab, currently the #1 most distributed team on RemoteHub with team members in more than 250 cities across 51 countries and 60 time zones, says they have learned a lot about how to collaborate effectively and strengthen their culture while growing their remote team.
One of the leading remote companies Doist puts the emphasis on building a strong set of core values and uses them as a foundation to develop their remote culture.
They also recommend choosing tools that will help your remote team stay connected and productive.
Doist is well-known in remote work circles for sharing its experiences as a team working remotely from 25 countries, and for building productivity tools like Todoist and Twist that are widely used by other remote companies.
As people are working from different places and often on different schedules, it’s even more important to be responsible for your work.
“First, we make sure people can and do take ownership of their work,” says Teamweek, a project timeline planning tool built by a distributed company working across 10 countries.
They also emphasize the importance of honest communication and open discussions by expecting team members to share their plans, successes, and failures, while keeping the culture fun by sharing memes.
Services to help you build a remote team
As remote work becomes more and more popular, it is now possible to get some professional help to build a successful remote team.
Remote-how helps companies get the most out of remote work by training their teams to work remotely. They’ve built a 6-week online program where industry-leading experts teach how to build and lead effective distributed teams. There’s also a live conference to help you build and scale a remote team.
If you’d like to meet some remote team leaders in person, there’s Running Remote conference that helps you to build and scale a remote team. The conference is packed with speakers from leading remote companies. They started the conference in “remote work paradise”, Bali in 2018 and their next conference is taking place in Austin, TX in the spring of 2019.
Start slowly
It’s probably not a good idea to close your office next Monday, have everyone work from wherever, and see what happens. But if your company works in a field where people do most of their jobs on laptops and phones, and you’re interested in the flexibility and benefits remote work offers, you can gradually start converting your company to remote.
For a start, introduce a few days a week where your team works from home, and add more remote days over time.
Have you ever wondered how real-time apps like chat apps and online games are built? If you’ve never built one before, read this tutorial as I’m going to show you how to build a simple one using socket.io.
What are we going to build?
It’s a simple app with one button and a label below it. The label displays “Likes: X” (where X is the current number of likes). When the user clicks the button, the number of likes increases by one.
We’re going to make it real time by showing users on the app how the number of likes increases as other users are clicking on the button. So you don’t need to reload the page to see the latest value.
Here’s how the app will look:
You can get the source code of this project on GitHub.
Creating a new project
In a new folder, add package.json using npm init -y, and then install these three packages:
npm install express ejs socket.io
We’ll use ejs as the templating engine, and socket.io for making our app a real-time app.
Displaying a hello world page
As mentioned above, we’ll use ejs for rendering our views. So create index.ejs with a page titled “Realtime like app” and a body containing a heading that reads “Hello World!”.
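A minimal index.ejs for this step might look like the following; beyond the title and heading, the exact markup is an assumption:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Realtime like app</title>
  </head>
  <body>
    <h1>Hello World!</h1>
  </body>
</html>
```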
Now let’s create our node server and serve the above file as the homepage.
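A minimal index.js consistent with that description might look like this. It assumes express and ejs are installed (from the npm install step) and that index.ejs lives in the views/ folder, which is Express’s default:

```javascript
// index.js — a sketch of the server for this step.
const express = require('express');
const app = express();

app.set('view engine', 'ejs');      // render .ejs templates from views/
app.use(express.static(__dirname)); // serve static files like style.css

app.get('/', (req, res) => {
  res.render('index');              // renders views/index.ejs
});

app.listen(3000, () => console.log('the app is running on localhost:3000'));
```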
So we created a new server that runs on port 3000. When the user hits http://localhost:3000/ in the browser, we’ll render index.ejs and display it.
If you run the app using node index.js (or using nodemon if you want the app to restart automatically on changes) and open http://localhost:3000/, you should see “Hello World!” displayed.
Adding style.css
This isn’t a CSS tutorial, so let’s quickly add style.css in the root directory and fill it with this:
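The stylesheet’s contents are entirely up to you; here’s a hypothetical example:

```css
/* style.css — hypothetical styles, not prescribed by the tutorial */
body {
  font-family: sans-serif;
  text-align: center;
  margin-top: 15vh;
}

button {
  padding: 0.75rem 2rem;
  font-size: 1.25rem;
  cursor: pointer;
}

.likes {
  display: block;
  margin-top: 1rem;
  font-size: 1.5rem;
}
```

For the browser to fetch it, the server needs static file middleware (for example, app.use(express.static(__dirname))) and index.ejs needs a stylesheet link pointing at /style.css.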
To keep this example simple, we defined likes in memory, which means its value will go back to 0 when the server restarts. In real-world apps, you’ll typically have your data stored in a database.
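In code, that idea might look like this in index.js, with index.ejs rendering the value. The variable name and markup are assumptions:

```javascript
// index.js — keep the count in memory and pass it to the template.
let likes = 0;

app.get('/', (req, res) => {
  res.render('index', { likes });
});

// index.ejs can then render it, for example:
//   <button>Like</button>
//   <span class="likes">Likes: <%= likes %></span>
```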
Incrementing likes by clicking on the button
To do so, we need to add a new route that increments likes and returns the new value. And then we’ll make a request to this endpoint from index.ejs, when the user clicks on the button.
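That route might look like the following; the /likes path, POST method, and JSON response shape are assumptions:

```javascript
// index.js — increment the in-memory counter and return the new value.
app.post('/likes', (req, res) => {
  likes++;
  res.json({ likes });
});
```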
Now let’s listen for the button’s click event and send this request using the Fetch API.
Add the following script to index.ejs:
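The handler might look like this, assuming index.ejs has a button and a .likes label, and a POST /likes endpoint that responds with { likes } as JSON:

```javascript
// Browser-side script in index.ejs.
const btn = document.querySelector('button');
const label = document.querySelector('.likes');

btn.addEventListener('click', async () => {
  const res = await fetch('/likes', { method: 'POST' });
  const { likes } = await res.json(); // new value from the server
  label.textContent = `Likes: ${likes}`;
});
```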
The app now works, but it doesn’t show the updated value in real time. If you open the app in multiple browser windows and update one, the other windows won’t show the new value until you reload them.
Making it a real-time app
Socket.io isn’t the only way to build real-time apps but it’s the most popular one. Not only that, but it’s also very good and easy to use.
We’ve already installed socket.io, so let’s initialize it.
But before I show you how, note that socket.io is composed of two parts:
The server that we integrate with node http server.
The client library that we use on the browser to communicate with the server part.
To initialize the server part, open index.js and update the top part like this:
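The updated top of index.js might look like this:

```javascript
// index.js — create a plain http server around the express app
// and hand it to socket.io.
const express = require('express');
const app = express();
const http = require('http').createServer(app);
const io = require('socket.io')(http);
```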
So we imported socket.io and passed it the http server object.
Now let’s use the http object to run the server instead of app.
http.listen(3000, () => console.log('the app is running on localhost:3000'))
With that, socket.io is initialized on the server!
When socket.io is initialized it exposes /socket.io/socket.io.js endpoint. This endpoint contains the JS file that we’ll use in the browser to connect with socket.io. (So it’s the socket.io’s client library.)
Go to index.ejs and include that file with a script tag:
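For example:

```html
<script src="/socket.io/socket.io.js"></script>
```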
After you add this, you should have the io object exposed globally on the browser (check that from the console).
To connect the browser to the server, just call io() and store the returned socket in a variable.
So put the following at the top of your script (above const btn = document.querySelector('button')).
const socket = io()
Next, let's make the server listen for connections from clients.
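A sketch of that listener, continuing in index.js where io is the socket.io server created earlier:

```javascript
// Log each client as it connects and disconnects.
io.on('connection', (socket) => {
  console.log('a user connected!');

  socket.on('disconnect', () => {
    console.log('user disconnected!');
  });
});
```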
To make sure it's working, open up the browser and check your terminal. When you open a new tab, you'll see "a user connected!", and when you close it, you'll see "user disconnected!".
The socket object is related to the connected user. Note how we listen for a disconnect event on that specific socket — not on all sockets.
Three steps left to complete this app:
When the user clicks on the like button, we should emit an event to the server about that.
When the server is notified about that, it re-emits that event with the new value of likes to all other users (excluding the user who clicked the button).
When other clients receive that event, they update the likes label with the new value.
So step 1, let the server know that we updated the number of likes. To do this, open index.ejs and add this at the bottom of the button's click handler:
socket.emit('likes:updated')
likes:updated is the name of the event — you can name it whatever you want.
Step 2, the server should listen to this event and re-emit it to other clients. So in index.js update your connection listener like this:
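A sketch of the updated listener, assuming likes is the server’s in-memory counter:

```javascript
io.on('connection', (socket) => {
  socket.on('likes:updated', () => {
    // send the new value to every connected client except the sender
    socket.broadcast.emit('likes:updated', likes);
  });
});
```

For step 3, each client would listen for the same event and update its label, for example: socket.on('likes:updated', (likes) => { label.textContent = `Likes: ${likes}`; }).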
If you’ve had an idea for a promising business, chances are you’ve resorted to some sort of freelance site or had a friend with Illustrator draft up a couple of concepts for you.
This would’ve been your first mistake. I want you to think of building a brand like going on a first date. If you’re desperate to make a good first impression (as you should be) then you’ve probably put your best outfit on your bed and swapped out your articles of clothing 10 different times to find the best combination.
Aha! You’ve found the perfect one! But it doesn’t stop there. You still have to be friendly, have good manners, LISTEN TO YOUR DATE, and tell a great story yourself!
A logo is only as good as the shirt you decided to wear.
For the purpose of this article, I’m going to walk you through how to build a lean startup brand from scratch by actually building one. The company I’m starting today makes all-natural, vegan, organic CBD dog treats in the USA.
CBD pet products are popping up everywhere, becoming a highly competitive space, and that means branding is everything.
Let’s go ahead and get started.
1. Go Back to Your Story
When you first thought about building a brand around your venture, your first ideation session was likely focused around what you offer as a product or service. For example, if you’ve started a landscaping business, you’ve probably thought of a minimal grass logo with a green color palette. This is a really good way of blending in with every other landscaper in your area. Not exactly what we want to do here.
So what should you focus on instead of your product/service? Go back to the very beginning! Your personal story as a startup founder.
Where were you born?
How did your parents raise you and how did that impact your reason behind starting this venture?
Are you starting right out of school or have you been working at a corporate job for the past 25 years? How does that affect your reason behind starting this venture?
As you dig deeper to understand your personal life story, you will recognize the truth behind your motivation to start this business.
As much as I’d love to take a deep dive into my personal story behind starting a CBD dog treat company, it would take this article way over the word limit I’ve planned, so I’m going to leave this part up to you.
2. Set Your Brand Voice
Hopefully you’ve thought a lot about your personal story because now we need to take that story and highlight some keywords that will act as your brand voice. For my CBD dog treat company, I’ve come to the words: Loyal, Warm, and Confident.
CBD is still a pretty regulated industry, so using CBD in pet products has stirred up a bit of controversy. For these reasons, the brand needed to convey a warm homeyness in order to have consumers feel relaxed when interacting with the brand. The brand also needs to focus on doggy confidence as the end result, rather than the state of relaxation, in order to differentiate. Finally, loyalty leads our brand voice because I want the company to convey the reason that we have such an immense love for dogs in the first place: loyalty. I want our company to be loyal to our customers, I want our customers to be loyal to our company, and I want the dogs that we help to be loyal to their owners.
The easiest way to choose your brand voice is to just pull out a piece of scrap paper and write down every word that comes to mind when thinking about your story and your business. Honesty. Quality. Loyalty. Money. People. Family. Respect. Growth. Nothing is wrong here, but think about how those words are going to make your end consumers feel.
Once you have a solid list of words, go through them and pick three that best describe the vibe that you want your business to have.
3. Decide on a Company Name
The 5 biggest factors to consider when deciding on a name for your new company are how easy your name is to spell, how easy it is to remember, if it has room for expansion, if it’s available, and how it fits in with your brand voice.
Pick a name that is easy to spell. There are countless stories of founders who chose a name for their business and struggled to get traffic because consumers couldn’t remember how to spell it, not to mention the effect misspellings have on search engine rankings.
Pick a name that is easy to remember. Similarly to spelling, you don’t want to lose out on customers because they can’t remember what your name was. Make your name short and sweet, but unique.
Don’t limit your business from day one by picking a restrictive name. Naming your business “Lug Nuts of Athens, Ohio” might be the perfect descriptor for your business now, but what happens when you realize there’s an opportunity for you to change brake pads too? Or to change brake pads in Cleveland, Ohio? Imagine if Jeff Bezos had named his business “BooksOnline” instead of “Amazon”.
Is your name even available? The easiest way to check is a quick Google search followed by a domain name search. If you can’t get the “.com” domain for the name you have in mind, you should think about going back to the drawing board.
Based on my brand voice: Loyal, Warm, and Confident, a name that immediately came to mind was “Feel”. I thought it matched my brand voice pretty well and thought that there could be some clever marketing designed around pups that were “in their feels”.
Upon further research, there was already a CBD company called “Feals”, and I surely wasn’t going to get the domain name “feel.com” without reaching deep into my own pockets.
After going back to the drawing board, I realized I was missing out on a huge opportunity. People love puppies. Having my name involve the word “pup” would be easy to build a brand around and still evoke warm, friendly, and inviting emotions. Since we’re playing in the cannabis industry, I came up with the name “PupLeaf”. Unfortunately, the .com domain was already taken, so I tried “LeafPup”.
It was available! And I liked the sound of it better anyway.
4. Write Your Mission Statement
A mission statement is a staple for building internal culture and driving your future marketing materials. Having your brand voice in place should make the process of coming up with the perfect mission statement a lot easier.
My favorite mission statement example is “We believe in a world where people belong anywhere”. Can you guess the company behind it?
Airbnb.
Traveling can be intimidating, especially when traveling to places with radically different cultures. Airbnb wants to make us feel like we belong anywhere. It’s a powerful message that will appeal to a lot of people and the statement doubles as a backbone for their brand.
Everything you create for your business should tie into your mission statement and communicate your brand voice. Here are some other examples of world famous mission statements used to build powerful brands.
Amazon – “Our vision is to be Earth’s most customer-centric company; to build a place where people can come to find and discover anything they might want to buy online.”
Facebook – “To give people the power to build community and bring the world closer together.”
Google – “To organize the world’s information and make it universally accessible and useful.”
Sony – “To be a company that inspires and fulfills your curiosity.”
Notice how every mission statement is reduced to the bare minimum required to describe the impact of the business. Your statement should follow this as well.
To create a mission statement for LeafPup, I went back to the word “confidence” from my brand voice. I wanted LeafPup to help dogs of all ages manage anxiety, discomfort, and fear in order to see more confident pups at the end of the day.
And so it became, LeafPup – “to help dogs of all ages manage anxiety, fear, and discomfort in order to see more confident pups at the end of the day.”
5. Put Together a Mood Board
Here’s where you get to see your brand start coming together and it’s probably the easiest part to do. Pinterest is your best friend here if you aren’t design savvy.
All you have to do is create a Pinterest board for your brand and start searching for images using the keywords you chose for your brand voice. For LeafPup, “loyal,” “warm,” and “confident” became the filter for pins that fit the brand.
After pinning about 50 different images to our company board, we had a pretty good idea what our brand was going to look like!
We picked a few of our favorite images and made a collage to narrow down the overall “vibe”. Here’s what it looks like.
6. Find Your Colors
It might be tempting to just pick your favorite color as your primary brand color, but it surely isn’t the most logical thing to do. We had you make a mood board for a reason!
Your mood board should tell you exactly what colors you are going to add to your palette! If you found a lot of images for your mood board that have a similar color blue, then it makes a lot of sense for that shade of blue to be in your color palette.
To really speed up the process of finding a color palette, we use Adobe Color. Don’t worry, it’s free! You can upload or browse through images that fit the vibe of your brand and Adobe Color will automatically create a color palette for you!
For LeafPup, most of our images had an early-autumn color palette: browns, oranges, reds, and subtle greens. Rather than using the color green, I’m opting to use actual images of house plants and trees to convey our branding. Then, I’m taking a khaki color to use as a background against a dark brown type color. This will create the warm, homey vibe that I’m looking for. I pulled these colors from various images in my mood board and picked a few colors that would soften everything up. Finally, I picked a soft teal accent color to bring out the “confidence” that LeafPup is coming to stand for.
7. Design Your Logo
We realize this is where it gets kind of tricky if you don’t happen to be a graphic design wiz. More than likely you’ll have to outsource to a designer or design team, but at least now you have a solid brand direction to hand off.
You can find quality designers on popular freelance platforms like Fiverr, but it’s really important that you provide them with everything you’ve worked on up until this point. You don’t want to miscommunicate.
Your logo should incorporate your brand voice keywords and be able to blend well with the rest of your mood board. Let’s take a look at how Airbnb used their keywords to craft their logo icon!
Don’t get too caught up with trying to make a logo that describes what you do. You’ll have a million chances to tell consumers what your business does. At the end of the day, your logo is an identity that needs to meet 3 criteria. It has to be simple, appropriate, and unique. Somebody should be able to look at your logo for 5 seconds, turn around, and then draw a logo that could only resemble your logo on a blank piece of paper.
If your logo can pass that test, you probably have something you can move forward with as long as it’s appropriate. By that, I mean that it is appropriate for the rest of your brand. Remember, it has to fit in with the rest of your mood board.
Being a graphic designer with experience in building brands, I wanted to take the image of the puppy holding a leaf from my mood board and create a minimal mascot icon for the LeafPup brand. The cuteness of the mascot would attract attention while the warm, natural color palette tells our story of loyalty and warmth.
With our products being CBD infused, the leaf was made to slightly resemble a cannabis leaf without screaming “cannabis company”. Most importantly, the logo is simple, memorable, and entirely unique. It passes the test.
8. Choose Your Typography
There are thousands of fonts for you to choose from, but you can’t rely on a gut feeling. Making the right decision depends on function, context, and a whole set of other factors.
Think about personality. Think back to your brand voice. If your brand voice is friendly, you might look for rounded fonts that are really easy to read. If your brand voice is confident, you might look for bold, condensed fonts. Maybe all caps? When you look through all your font options, think about the way they make you feel. Do they match your brand voice to a tee, or could it be communicated better?
Think about function. What font are you using for what purpose? At the bare minimum, you will want a font for your headings and a font for paragraphs. A lot of brands choose a third font that acts as a brand element on its own.
In the above example, you can see how fonts are used functionally. There are two fonts at play: Franklin Gothic URW Medium as the font in charge of getting your attention (main headings and buttons) and Adobe Garamond Pro Regular as the secondary font for body content and subheadings. You should take a similar approach when picking the fonts for your brand.
With LeafPup being an all-natural company with a natural color palette, we needed a font that would function perfectly while communicating our brand voice proudly. A mix of serif and sans-serif fonts in neutral colors tells our consumers that we’re not going anywhere. We’re loyal. Using a larger font size for headings communicates confidence, and using a softer font for body paragraphs makes our customers feel the warmth and homeyness we need to convey as well. Here’s what this looks like.
9. Put Everything Together
Now that you have everything that meets the bare minimum to build a solid brand, it’s important that you take the time to present everything that you’ve worked on. Some call this a style guide, some call it brand guidelines, and some call it your brand book. Regardless, it’s a necessary step in building a brand.
Your style guide will be your bible in everything you do from here on out. You should look to it for advice on the best direction to go, not just with design but with internal culture as well. If your brand voice and mission statement involve the word “peaceful,” you should prioritize making your working environment peaceful, putting your customers at ease when they interact with you, and having a tone of voice that is soft and relaxing.
In the above example, everything we talked about in this guide is strategically laid out to clearly communicate all brand elements. When you look at LeafPup’s style guide, you know exactly who they are, what they stand for, and the design elements that help them stand out.
An easy way to make a style guide is to build a PowerPoint or Google Slides deck where each slide explains and depicts a different element of your brand.
10. Be Consistent
Your brand doesn’t stop developing here. It never ends. Fortunately, you’ve set yourself up for success by creating your style guide. The next time you need a flyer designed for an event you’re hosting, you can send your style guide to your designer, and they will have everything they need to keep your brand cohesive throughout every design project.
Everything that involves your business should fit perfectly into your style guide as an example of your design elements. Your company t-shirts, business cards, website, packaging, investor slide deck, and everything else you can think of should follow suit with your style guide.
Of course, I can’t help but promote my real startup here. I started Pence Branding because something felt wrong about hiring a big branding agency (that charges hundreds of thousands of dollars) to create a beautiful brand around your startup idea, only to send you on your way to uphold those brand standards yourself when you need a small flyer designed in the future.
At Pence Branding, we go through this process of designing your brand style guide and then roll into an unlimited graphic design/web design subscription with 48 hour turnaround times to ensure your brand elements are always used to the tee. It’s like having your very own in-house design team.
For LeafPup, the first design project that we tackled was packaging. Since CBD has turned into such a competitive market, conveying LeafPup’s brand identity through well-designed packaging was vital for standing out in the industry. Rather than simply hopping on the CBD train, we needed to create a brand that focused on educating consumers while having an empowering message to connect with and support.
After we finalized our packaging, we were able to move forward with our next project: the LeafPup e-commerce site. As mentioned earlier in this article, although the CBD industry is expected to reach $40 billion within the next 5 years, it is still heavily regulated.
Currently the largest retailers and advertising platforms, like Amazon, Facebook, and Google don’t even support CBD sales. This means one thing: LeafPup’s direct-to-consumer platform (the e-commerce site) has to function beautifully while still communicating all of LeafPup’s creatively designed brand elements.
Here’s the e-commerce site we were able to design within 24 hours of finishing the LeafPup brand style guide.
You can see the full site prototype here.
Focusing your attention on building a style guide will make all future design projects a breeze. The next time you need to design a business card, website, or flyer, you will have your logo, color palette, fonts, and usage examples on hand for you to simply plug-in and go to market.
After all, time is of the essence.
Conclusion
We hope you enjoyed reading about our process of building a lean startup brand from scratch. All-in-all, the process took us 72 hours from idea to e-commerce platform. For those of you who are wondering, we are actually planning on launching the company 2 months from today (after funding is secured for our initial inventory purchase).
A marketing campaign is a project. There is a start and end date, specific deliverables, and measurable results. Building a brand isn’t a project, and you can’t treat it like one. Building a brand is more like growing as a person. You might spend your first 5 years figuring out who you are and adapting your brand elements as you go. Your brand will mature over time, develop more traits, quit bad habits, and might even end up being the opposite of what it was when it first started out.
You have to let the life your business has gone through impact your brand like the life you’ve personally gone through has impacted your current values and personality.
Of course, if you never develop your initial brand identity, there will be nothing to change. Start by looking into your personal story, set your brand voice, put together a mood board, come up with a simple name, write your mission statement, find your colors, design your logo, pick your fonts, and then put everything together into a style guide (a slide deck will work great!). After you get that far, be consistent, but let your brand go through the necessary changes to stay relevant, mature, and last a lifetime.
The best brands mature alongside their customers. While design might matter to their customers now, maybe comfort will matter 10 years from now. Let your brand adapt. Let your brand have its own personality and experiences. At the end of the day, your customers and employees will end up developing the bulk of your brand anyway. Your job is to communicate the values that your customers and employees develop through beautiful design.
At a startup, accessibility can feel daunting because resources are limited, but new technology and smaller scope can be a unique benefit. Making the investment earlier on will save a lot of time and money in the long run.
Although this article is based on my experience as the only UX person working on an earlier stage product with a small team, many of the steps and resources can be used by larger organizations with more mature products.
Accessibility is the practice of making your websites usable by as many people as possible.
Why does accessibility matter?
It’s good for business. The more users that can use the software, the bigger the market. Additionally, if your company is going after any government contracts or putting a consumer facing app into an app store, accessibility will likely be a requirement.
It’s good for users. Globally, 2.2 billion people are blind or have low vision or color blindness, and 466 million people are deaf or have hearing loss. 40% of websites are completely inaccessible to users who are blind. Accessible products avoid alienating these groups and provide a better user experience for everyone.
Stakeholders will probably ask why it’s important to make the investment. Making the case for accessibility to get team buy-in is critical to success.
Design with accessibility in mind
Starting with accessible designs sets the tone for the rest of the product.
Pick colors with high contrast ratios. For users with low vision and colorblindness, high color contrast ratios can make content clearer. Read more about color contrast accessibility.
Use readable, scalable fonts. Users should be able to increase the font size by zooming in and out to more easily read the text. Generally, the body text should be at least 16px.
Don’t depend on visuals or audio alone to convey a message. Provide alternate messaging for blind, low vision, deaf, or hard of hearing users. For example, if a design uses color coding to categorize, consider pairing it with text, a pattern, an icon, etc.
Different types of colorblindness impact the way users see content. Pairing color with a secondary identifier can help users understand the message. Using text also allows a screenreader to pick up the information.
Check your work with tools like Stark
Designers can’t be expected to eyeball whether a font color has a high enough contrast ratio. Plugins like Stark can be used while designing to check contrast ratio and font size, as well as simulate color blindness.
Pick two colors to see if the contrast ratio is high enough for low vision users.
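If you’re curious what tools like Stark are computing, the WCAG contrast math is simple enough to sketch yourself. Below is a minimal TypeScript sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas; the function and type names are my own, for illustration only:

```typescript
// A color as [r, g, b] with each channel in 0-255
type RGB = [number, number, number]

// Linearize one sRGB channel per the WCAG 2.x definition
function linear(channel: number): number {
  const c = channel / 255
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)
}

// Relative luminance: weighted sum of the linearized channels
function luminance([r, g, b]: RGB): number {
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b)
}

// Contrast ratio runs from 1:1 (identical colors) to 21:1 (black on white)
function contrastRatio(a: RGB, b: RGB): number {
  const l1 = luminance(a)
  const l2 = luminance(b)
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05)
}

// WCAG AA asks for at least 4.5:1 for normal body text
const ratio = contrastRatio([0, 0, 0], [255, 255, 255])
console.log(ratio.toFixed(1)) // "21.0"
```

Having the formula handy also makes it easy to go beyond a design-time plugin, for example a small automated check that fails a build when brand colors drop below the 4.5:1 AA threshold.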
Use an existing design system
If you’re at a startup, it’s unlikely you’ll have the resources to build your own design system, let alone one that’s accessible. Leveraging an established design system, like Angular Material or an accessible React component library, helps ensure that every time a component is used, it passes accessibility compliance.
Auditing can be so overwhelming that there are entire companies whose business is auditing other companies’ products. At a startup, that luxury is unlikely, but luckily there are resources to streamline the process.
Start with an understandable checklist
Web Content Accessibility Guidelines (WCAG), created by World Wide Web Consortium (W3C), is a global standard for web accessibility.
“Web Content Accessibility Guidelines 2.0 explain how to make web content accessible to people with disabilities. Conformance to these guidelines will help make the Web more accessible to users with disabilities and will benefit all users.”
Reading through the WCAG requirements can be tedious. WebAIM, an accessibility non-profit, put together a checklist with recommendations on how to adhere to each guideline.
WebAIM also created a Chrome extension, the WAVE Evaluation Tool, that can be turned on for any page. It lists potential accessibility issues and gives suggestions on how to fix them.
Use the WebAIM checklist and WAVE Evaluation tool to fill out an audit. The Voluntary Product Accessibility Template (VPAT) is the most common audit form in the US. The audit provides the status of compliance for the product and helps the team understand how much work is needed.
The final step is the most collaborative. It requires clear communication to make sure everyone’s on the same page.
Review the audit with the team to size the work
Discuss the findings to begin sizing how much work will be necessary. Use the checklist’s recommendations to meet unmet guidelines. This will help the team agree on how and when the work should be done.
Write requirements to chip away at remaining work
Make an epic to “Create an accessible product” to have a bucket for all accessibility work. Take the page-by-page audit and write requirements for each issue. When the work is defined, the goal looks a lot more achievable.
Code with accessibility in mind
Make a plan to maintain an accessible product. Developers need to code with accessibility in mind, just like designers design with accessibility in mind. Use a checklist, just like any other QA checklist, to assure each new feature passes accessibility standards.
Like any other functional requirement, accessibility should be treated as an expectation for each new feature. Enforce that expectation.
Although accessibility is a subsection of inclusive design, I find it’s a great place to start a larger discussion. The standardization and examples around accessibility help teams understand why inclusive design matters.
Priorities change and accessibility is often pushed to the end of the backlog, but tirelessly advocating for accessibility and inclusive design creates the best user experiences for all users.
Don’t let the way your product is built be the reason someone isn’t using it.
The best way to learn something is by doing. This tutorial will help you build your own todo list app with React hooks and TypeScript. Try this easy tutorial, build your own todo list app, and get better at JavaScript, React, and TypeScript.
The goal of this tutorial is to build your own todo list app. About the app in general: it will have a very simple interface and focus on the most important features, i.e. creating, checking off, and deleting todos. About the code: you will use React and React hooks, mostly the useState hook.
There will be one occasion where you will also use the useRef hook. Since this todo list app will use React hooks for managing state, there is no need for class components. So, you will build this app only with functional components. When it comes to styling your todo list app, you will use external CSS stylesheets.
One last thing: every todo item will have a unique id, generated when the todo item is created. You will use this id to mark the todo as complete or to remove it. To make this easier, while following good practices and avoiding the use of array indexes, you will use the shortid package.
Project setup
First, let’s create the base app for your todo list. We can do this very quickly with the help of create-react-app. You can use this package with npm init react-app react-hooks-todo-list-app-ts --typescript, npx create-react-app react-hooks-todo-list-app-ts --typescript, or yarn create react-app react-hooks-todo-list-app-ts --typescript. If you don’t want to use TypeScript, omit the --typescript flag at the end of the command.
These commands will create a starting template for your todo list app, with the workflow set up and almost all necessary dependencies. There is one dependency you will need to install manually: shortid, along with the types for this package. So, use npm i shortid and npm i -D @types/shortid, yarn add shortid and yarn add -D @types/shortid, or pnpm i shortid and pnpm i -D @types/shortid.
There are some assets, such as the React logo, that come with the app template. You can remove them because you will not need them. A very simple version of your package.json should look similar to this:
As the last thing, below is the final structure of this todo list app project. You can use this as you work on this tutorial to orient yourself. With that, you are ready to start working on your todo list app.
The first thing to do is create interfaces for your todo list app. You will use them to define the shape of component props and the todo object, or to type them. If you decided to use pure JavaScript, instead of TypeScript, you can skip this step. You will need to create four interfaces.
One for the todo object, one for the todo form, one for the todo list, and one for the todo item. The todo object will have three properties: id, text, and isCompleted. The TodoForm props contain an array of todo objects and the handleTodoCreate method. The TodoList props will contain the handleTodoUpdate, handleTodoRemove, handleTodoComplete, and handleTodoBlur methods and an array of todo objects.
The TodoItem props will contain handleTodoUpdate, handleTodoRemove, handleTodoComplete, handleTodoBlur and a single todo object.
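The interfaces file itself isn’t shown above, so here is one way interfaces.ts could look based on that description. This is a sketch: the event parameter type is a minimal structural stand-in so the snippet stays self-contained (in the actual project it would be React.ChangeEvent<HTMLInputElement>), and the names follow the prose:

```typescript
// interfaces.ts (sketch) - shapes reconstructed from the description above.
// Stand-in for React.ChangeEvent<HTMLInputElement>, so this file needs no imports:
type InputEvent = { target: { value: string } }

// The todo object: id, text, isCompleted
export interface TodoInterface {
  id: string
  text: string
  isCompleted: boolean
}

// TodoForm props: the array of todos plus the create handler
export interface TodoFormInterface {
  todos: TodoInterface[]
  handleTodoCreate: (todo: TodoInterface) => void
}

// TodoList props: the four handlers plus the array of todos
export interface TodoListInterface {
  todos: TodoInterface[]
  handleTodoUpdate: (event: InputEvent, id: string) => void
  handleTodoRemove: (id: string) => void
  handleTodoComplete: (id: string) => void
  handleTodoBlur: (event: InputEvent) => void
}

// TodoItem props: the same handlers plus a single todo
export interface TodoItemInterface {
  todo: TodoInterface
  handleTodoUpdate: (event: InputEvent, id: string) => void
  handleTodoRemove: (id: string) => void
  handleTodoComplete: (id: string) => void
  handleTodoBlur: (event: InputEvent) => void
}

// A sample object conforming to TodoInterface:
const sample: TodoInterface = { id: 'a1', text: 'Write interfaces', isCompleted: false }
console.log(sample.text) // "Write interfaces"
```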
The first component you will build is the todo item. When you add a new todo to your todo list, this item component will represent it. This component will be composed of a couple of elements. First, there will be a div with span elements for checking off the todo. An unchecked item will contain an empty span, styled into a transparent circle with a border.
A checked-off todo item will contain a span with a check mark HTML entity, inside a green circle. The wrapper div will have an onClick handler to check/uncheck the todo. Next will be another div with an input. You will use this input element to render the title, or text, of the todo. This is the simplest way to make every todo item editable, through input elements.
You will pass the title through the value attribute, from the todo object passed through props. Aside from this, the input will have two handler methods, one for onBlur and one for onChange. The last element will also be a div, now with an “x” entity/icon. You will use this element to remove the todo item.
This div will have one onClick handler. Like all the previous data and handler methods, this too will be passed through props.
If you use TypeScript, import the TodoItemInterface interface from interfaces.ts and use it to type the props of this component. After this, type the onChange handler on the input element with React.ChangeEvent<HTMLInputElement>, because you are attaching an onChange handler to an input element.
// Import dependencies
import * as React from 'react'
// Import interfaces
import { TodoItemInterface } from './../interfaces'
// TodoItem component
const TodoItem = (props: TodoItemInterface) => {
  return (
    <div className='todo-item'>
      <div onClick={() => props.handleTodoComplete(props.todo.id)}>
        {props.todo.isCompleted ? <span className='todo-item-checked'>✔</span> : <span className='todo-item-unchecked' />}
      </div>
      <div className='todo-item-input-wrapper'>
        <input value={props.todo.text} onBlur={props.handleTodoBlur} onChange={(event) => props.handleTodoUpdate(event, props.todo.id)} />
      </div>
      <div className='item-remove' onClick={() => props.handleTodoRemove(props.todo.id)}>x</div>
    </div>
  )
}
export default TodoItem
The todo list will be the second component you create. This component will be very simple. It will accept the handler methods for the TodoItem you’ve just created, and an array of todo objects, through props. The component itself will contain one div as a wrapper element.
Inside this div will be a list, one ul element. Inside it, you will use map() to iterate over the array of todo objects and create one li element with one TodoItem component for every todo object. You will then pass the individual todo objects to the TodoItem component, along with the handler methods.
For TypeScript, remember to import TodoListInterface interface and use it to type the props of the TodoList component.
// Import dependencies
import * as React from 'react'
// Import TodoItem
import TodoItem from './todo-item'
// Import interfaces
import { TodoListInterface } from './../interfaces'
// TodoList component
const TodoList = (props: TodoListInterface) => {
  return (
    <div className='todo-list'>
      <ul>
        {props.todos.map((todo) => (
          <li key={todo.id}>
            <TodoItem
              todo={todo}
              handleTodoUpdate={props.handleTodoUpdate}
              handleTodoRemove={props.handleTodoRemove}
              handleTodoComplete={props.handleTodoComplete}
              handleTodoBlur={props.handleTodoBlur}
            />
          </li>
        ))}
      </ul>
    </div>
  )
}
export default TodoList
Todo form component
The todo “form” is the first component where you will use the useState React hook, and it is also where you will use the useRef React hook. You will use the useState hook to store the text passed to the input element, the text for the todo title, before you create the new todo item.
You will use the useRef hook to store a reference to this input. You create a new todo by pressing the “Enter” key while typing some text inside that input. So, when you press “Enter”, you will use this reference to reset the input by setting its value to an empty string. This input will also have two handler methods, for onChange and onKeyPress.
These two handler methods will be handleInputChange and handleInputEnter. The first, for onChange, will update the form state when you write something into the input, some todo title/text. The second, for onKeyPress, will create a new todo object and reset the input field when it detects the “Enter” key.
Do you remember the shortid package? It is here where you are going to use this dependency. Inside the handleInputEnter function, inside the new todo object, you will use shortid to generate unique id for every new todo. Don’t worry. This will be simple. All you need is to call generate() on shortid and your new id is ready.
Lastly, a few things for TypeScript. First, import the TodoInterface and TodoFormInterface interfaces. Then, use the TodoInterface interface to type the new todo object inside handleInputEnter, and the TodoFormInterface interface to type the props of TodoForm. Then, type the useRef hook with HTMLInputElement and set it to null.
After that, there are also two events. You can type the first with React.ChangeEvent<HTMLInputElement>, because you are attaching an onChange handler to an input element. You can type the second with React.KeyboardEvent, because you are “listening” for a key press.
// Import dependencies
import * as React from 'react'
import shortid from 'shortid'
// Import interfaces
import {TodoInterface, TodoFormInterface} from './../interfaces'
// Todo form component
const TodoForm = (props: TodoFormInterface) => {
// Create ref for form input
const inputRef = React.useRef<HTMLInputElement>(null)
// Create form state
const [formState, setFormState] = React.useState('')
// Handle todo input change
function handleInputChange(event: React.ChangeEvent<HTMLInputElement>) {
// Update form state with the text from input
setFormState(event.target.value)
}
// Handle 'Enter' in todo input
function handleInputEnter(event: React.KeyboardEvent) {
// Check for 'Enter' key
if (event.key === 'Enter') {
// Prepare new todo object
const newTodo: TodoInterface = {
id: shortid.generate(),
text: formState,
isCompleted: false
}
// Create new todo item
props.handleTodoCreate(newTodo)
// Reset the input field
if (inputRef && inputRef.current) {
inputRef.current.value = ''
}
}
}
return (
  <div className='todo-form'>
    <input ref={inputRef} type='text' onChange={handleInputChange} onKeyPress={handleInputEnter} />
  </div>
)
}
export default TodoForm
You are almost done. There is just one component you need to build. This is the main TodoListApp component. This component will implement methods for creating, updating, removing and completing your todos. This will be done via handleTodoCreate, handleTodoUpdate, handleTodoRemove and handleTodoComplete methods.
It is also this component where you will store all existing todos, using the useState React hook. So, let’s build this component, step by step.
Imports
First, as usual, you will need to import the dependencies for React. This time, you will also need to import the render method from react-dom, because you will render the TodoListApp component, your todo list app, into the DOM.
You will also import TodoForm and TodoList components so you can later return, and render, them. When you import these components you should also import the main external CSS stylesheet, so you can later style your todo list app.
For TypeScript, you will need to import the TodoInterface interface. You will use this interface a couple of times, to type todos state and some method parameters.
// Import dependencies
import * as React from 'react'
import { render } from 'react-dom'
// Import components
import TodoForm from './components/todo-form'
import TodoList from './components/todo-list'
// Import interfaces
import { TodoInterface } from './interfaces'
// Import styles
import './styles/styles.css'
Creating todo list app state
The state of your todo list app will be simple. It will be an array of objects, where one object represents one existing todo. In the beginning, you will initialize the todos state as an empty array.
For TypeScript, make sure to use the TodoInterface interface along with []. This will tell TypeScript you are “talking” about an array of todos objects, not just one todo object.
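The two paragraphs above boil down to one typed useState call. Here is a sketch showing why the generic argument matters; a minimal useState stand-in is included only so the snippet runs outside React (in the component you would call React.useState):

```typescript
// Inlined here so the snippet stands alone; normally imported from './interfaces'
interface TodoInterface {
  id: string
  text: string
  isCompleted: boolean
}

// Minimal stand-in for React.useState, just enough to show the typing
function useState<T>(initial: T): [T, (next: T) => void] {
  let value = initial
  return [value, (next: T) => { value = next }]
}

// The generic argument <TodoInterface[]> tells TypeScript the state is an
// array of todo objects, not an array of nothing:
const [todos, setTodos] = useState<TodoInterface[]>([])
console.log(todos.length) // 0
```

Without the generic, useState([]) would infer the state type as never[], and every later attempt to add a TodoInterface object would fail to typecheck.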
Creating new todos
The first method for your todo list app will be the method for creating new todos, the handleTodoCreate method. This method will accept one parameter, a todo object. The way it works is simple. First, it will create a new todo list app state, newTodosState, by copying the current todo list app state.
Next, it will take the todo object, you pass as parameter when you call this method, and add that todo to the new todo list app state, the newTodosState, using push() method. After that, it will update the todo list app state, using setTodos() method.
About TypeScript: you will use the TodoInterface interface to type the todo parameter. You will also use this interface to type the newTodosState variable. In this case, you will again have to specify that you want an array of todo objects, by adding [] after TodoInterface.
// ....
// Creating new todo item
function handleTodoCreate(todo: TodoInterface) {
// Prepare new todos state
const newTodosState: TodoInterface[] = [...todos]
// Update new todos state
newTodosState.push(todo)
// Update todos state
setTodos(newTodosState)
}
// ....
Updating existing todos
Next, you will need a method to update existing todos, the handleTodoUpdate method. This method will accept two parameters: event and id. The id will be the unique id generated for every todo item/object. Similarly to handleTodoCreate, this method will also start by creating a new todo list app state, newTodosState, by copying the current todo list app state.
Next, it will use find() method to iterate over the newTodosState variable and find the correct todo item to update, using the id passed as argument. When it finds the correct todo item/object, it will change the value of its text key. New value will come from the value of the input inside specific todo item.
The last step is updating the todo list app state, using newTodosState and setTodos() method.
For TypeScript, use the TodoInterface interface to type the todo parameter passed to find() method. Use it also for the newTodosState variable, along with [] after the TodoInterface. Lastly, type the id parameter as a string.
// ....
// Update existing todo item
function handleTodoUpdate(event: React.ChangeEvent<HTMLInputElement>, id: string) {
// Prepare new todos state
const newTodosState: TodoInterface[] = [...todos]
// Find correct todo item to update
newTodosState.find((todo: TodoInterface) => todo.id === id)!.text = event.target.value
// Update todos state
setTodos(newTodosState)
}
// ....
Removing existing todos
Removing todos will be done using filter() method. First, you will create new todo list app state, newTodosState, by copying the current todo list app state. During this, you will use the filter() method to remove the todo you want to remove. This will be done by comparing id of all todos with the id of todo you want to remove.
When this is done, you will use this new, filtered, state to update the todos state with the setTodos() method.
For TypeScript, use the TodoInterface interface to type the todo parameter passed to filter() method. Then, use it also for the newTodosState variable, along with [] after the TodoInterface. Finally, type the id parameter as a string.
// ....
// Remove existing todo item
function handleTodoRemove(id: string) {
// Prepare new todos state
const newTodosState: TodoInterface[] = todos.filter((todo: TodoInterface) => todo.id !== id)
// Update todos state
setTodos(newTodosState)
}
// ....
Completing todos
The method for completing todos will look very similar to handleTodoUpdate method. First, it will copy the current todo list app state and store it in newTodosState variable. Then, it will use find() method to find specific todo item/object in todos state.
This time, it will negate the value of isCompleted key of the specific todo item/object. After this, it will use the setTodos method to update todos state.
Now, about TypeScript. First, use the TodoInterface interface to type the todo parameter passed to find() method. Next, use this interface also for the newTodosState variable, again with [] after the TodoInterface. The last type will be for the id. This will be a string.
// ....
// Check existing todo item as completed
function handleTodoComplete(id: string) {
// Copy current todos state
const newTodosState: TodoInterface[] = [...todos]
// Find the correct todo item and update its 'isCompleted' key
newTodosState.find((todo: TodoInterface) => todo.id === id)!.isCompleted = !newTodosState.find((todo: TodoInterface) => todo.id === id)!.isCompleted
// Update todos state
setTodos(newTodosState)
}
// ....
Ensuring every todo has title
The last thing: when you edit an existing todo, there should be some warning if you leave the text/title empty. To get this done, you can watch for changes on the input element inside every todo. Then, you can check that its value is not an empty string, i.e. that the length of the value is bigger than 0.
If there is an empty string, you will add specific CSS class. When you input some text, you will remove that CSS class. This CSS class will mark the input with red border. You will define this class in your CSS stylesheet later.
As usual, the TypeScript. This will be quick. All there is to type is the event passed as a parameter. Since the handler is attached to an input element, you can use React.ChangeEvent<HTMLInputElement>.
// ....
// Check if todo item has title
function handleTodoBlur(event: React.ChangeEvent<HTMLInputElement>) {
if (event.target.value.length === 0) {
event.target.classList.add('todo-input-error')
} else {
event.target.classList.remove('todo-input-error')
}
}
// ....
Returning all components
Your todo list app is almost finished. Now you need to take all the components you’ve built so far, and imported into this component, and return them. Make sure to provide all components with the necessary props. After that, you can use the render() method to render TodoListApp in the DOM.
// ...
return (
{/* Todo form component */}
{/* Todo list component */}
)
}
// Render the App in the DOM
const rootElement = document.getElementById('root')
render(, rootElement)
Putting it all together
You wrote a lot of code in this main component. Let’s put it all together to make it clearer.
// Import dependencies
import * as React from 'react'
import { render } from 'react-dom'
// Import components
import TodoForm from './components/todo-form'
import TodoList from './components/todo-list'
// Import interfaces
import { TodoInterface } from './interfaces'
// Import styles
import './styles/styles.css'
// TodoListApp component
const TodoListApp = () => {
const [todos, setTodos] = React.useState<TodoInterface[]>([])
// Creating new todo item
function handleTodoCreate(todo: TodoInterface) {
// Prepare new todos state
const newTodosState: TodoInterface[] = [...todos]
// Update new todos state
newTodosState.push(todo)
// Update todos state
setTodos(newTodosState)
}
// Update existing todo item
function handleTodoUpdate(event: React.ChangeEvent<HTMLInputElement>, id: string) {
// Prepare new todos state
const newTodosState: TodoInterface[] = [...todos]
// Find correct todo item to update
newTodosState.find((todo: TodoInterface) => todo.id === id)!.text = event.target.value
// Update todos state
setTodos(newTodosState)
}
// Remove existing todo item
function handleTodoRemove(id: string) {
// Prepare new todos state
const newTodosState: TodoInterface[] = todos.filter((todo: TodoInterface) => todo.id !== id)
// Update todos state
setTodos(newTodosState)
}
// Check existing todo item as completed
function handleTodoComplete(id: string) {
// Copy current todos state
const newTodosState: TodoInterface[] = [...todos]
// Find the correct todo item and update its 'isCompleted' key
newTodosState.find((todo: TodoInterface) => todo.id === id)!.isCompleted = !newTodosState.find((todo: TodoInterface) => todo.id === id)!.isCompleted
// Update todos state
setTodos(newTodosState)
}
// Check if todo item has title
function handleTodoBlur(event: React.ChangeEvent<HTMLInputElement>) {
if (event.target.value.length === 0) {
event.target.classList.add('todo-input-error')
} else {
event.target.classList.remove('todo-input-error')
}
}
return (
  <div className='todo-list-app'>
    <TodoForm todos={todos} handleTodoCreate={handleTodoCreate} />
    <TodoList todos={todos} handleTodoUpdate={handleTodoUpdate} handleTodoRemove={handleTodoRemove} handleTodoComplete={handleTodoComplete} handleTodoBlur={handleTodoBlur} />
  </div>
)
}
// Render the App in the DOM
const rootElement = document.getElementById('root')
render(<TodoListApp />, rootElement)
Your todo list app is ready to go. Well, almost. There is a lot of space for some styling. Here are some styles you can use to make your todo list app look better.
Conclusion: How to Build a Todo List App with React Hooks and TypeScript
Congratulations, you’ve just built your own todo list app using React hooks and TypeScript! However, you don’t have to stop here. So, go ahead. Take this todo list app and make it better. Think about what features you would like it to have. Then, don’t wait for anything. Try to implement them by yourself. Have fun!
With SMX East right around the corner, it’s a great time to start the conversation that will shape my talk in November – How to Structure Your Facebook Campaigns for Success. This can surely seem a daunting undertaking. There are so many levers to potentially pull, and so much information available on the topic that it can at times be overwhelming for even the most seasoned digital marketer. However, if you’re able to keep these two things in mind, you’ll be well on your way to establishing a sound foundation to scale your social program.
The first step is to simplify your account structure where possible. We all remember not too long ago when the typical Facebook account looked like this:
Multiple campaigns, dozens of ad sets, with you, the advertiser, struggling to identify the little pockets of performance to scale efficiency. Today, most publishers are recommending a more simplified approach, with Facebook (and Google) taking the lead for that recommendation. At my company, we’ve tested extensively into this simplified structure and found that for most of our clients it performs with better conversion rates and lower costs. This is because when using that simplified structure, you’re reducing the likelihood that you’ll have ad sets with significant overlap (which could cause increased costs if multiple ad sets are bidding for the same user in auction). Simplifying and reducing the number of ad sets per campaign also increases the data density for each ad set, which allows the algorithm to optimize more efficiently. When shifting to this more simplified structure, you’ll essentially be paring down all potential for campaign/ad set overlap, so that you go from a structure that looks like the above, to one that more closely resembles this:
The second item is something that I’m sure you’ve heard before but bears repeating – always be testing. With the numerous levers available in-platform, it’s imperative that you know the strategies that will enable stable performance for your evergreen campaigns. I’ve found that developing a roadmap to outline your test ideas, and (most importantly) to record the results, is the most straightforward way to approach this, and can be as simple as creating a G-Sheet. Some of the things to ask yourself as you’re thinking about testing can be:
Do I know the bidding methodology that provides the best results for my business goals?
Do I know the creative that is most engaging and encourages conversion for new customers? Potential customers who have visited my site but haven’t converted? Existing customers?
How is customer lead quality (or AOV for e-commerce) impacted by serving impressions outside of Facebook and Instagram’s Newsfeed?
Do I know the value of recent website visitors compared to older website visitors? Which are the segments of users I should bid up (or down) on?
Adopting an always-on mindset for testing is even more important considering a major change coming to Facebook in early 2020 – the removal of ad set budget control and the shift to Campaign Budget Optimization (CBO). At my company, we’ve tested into this new feature to better prepare our clients for the change, with relatively positive results. For most clients, testing into CBO and enabling real-time budget distribution based on performance results in increased conversion volume, with similar or more efficient costs.
However, because every business is different, there’s no guarantee that CBO will generate more efficiency or increase conversion volume for your account. If you haven’t already, it’s time to put it as a priority on your testing roadmap – that way you’ll be able to go into the New Year confident that you’ve figured out how to make this new feature work for your account.
I hope these insights give you confidence as you’re reviewing your existing Facebook structure, or considering a structure for a brand new account. If you’d like more information or just want to chat, please check out my session at SMX East this upcoming November!
Opinions expressed in this article are those of the guest author and not necessarily Marketing Land.
About The Author
Zenia is an account lead for 3Q Digital, where she develops strategy and manages paid media for clients in a wide range of verticals. While she is knowledgeable in all aspects of digital marketing, her passion is in paid social marketing. She has contributed to Search Engine Land, Marketing Land, and Marin, and has spoken at Janes of Digital, SMX Advanced and SMX East.
At Figma, we recently tackled one of our biggest engineering challenges yet: supporting plugins. Our plugins API enables third-party developers to run code directly inside our browser-based design tool, so teams can adapt Figma to their own workflows. They can enable accessibility checkers to measure contrast, translation apps to convert language, importers to populate designs with content, and anything else their heart desires.
We knew we needed to design this plugin functionality carefully. Throughout the history of software, there have been many instances where third-party extensions negatively affected a platform. In some cases, they slowed the tool to a crawl. In other cases, the plugins broke whenever new versions of the platform were released. To the extent that we could control it, we wanted users to have a better plugin experience with Figma.
Furthermore, we wanted to make sure plugins would be safe for users to run, so we knew we wouldn’t want to simply eval(PLUGIN_CODE). That is the quintessential definition of insecure! Yet, eval is essentially what running a plugin boils down to.
To add to the challenge, Figma is built on a very unconventional stack with constraints that previous tools haven’t had. Our design editor is powered by WebGL and WebAssembly, with some of the user interface implemented in TypeScript and React. Multiple people can be editing a file at the same time. We are powered by browser technologies, but also limited by them.
This blog post will walk you through our pursuit of the perfect plugin solution. Ultimately, our efforts boiled down to one question: how do you run plugins with security, stability, and performance?
We considered dozens of different approaches branching into all sorts of different paths. We had weeks of discussing, prototyping and brainstorming. This blog post will focus on just three of those attempts that formed the most central path in our exploration.
For me, it has been the most satisfying exercise in first principles thinking. I had fun leveraging all the computer science fundamentals I learned in the classroom (the ones I never thought I’d ever use in the real world). If that idea excites you too, we’re hiring!
During our first few weeks of research, we found many interesting attempts at third-party code sandboxing. Some used techniques such as code-to-code transformation. However, most had not been validated in a production application and as such, carried some risk.
In the end, for our first attempt, we went for the closest thing to a standard sandboxing solution: the iframe tag. It’s used in applications that need to run third-party code, such as CodePen.
The iframe is not your everyday HTML tag. To understand why iframes are secure, it’s necessary to think about what properties they need to guarantee. An iframe is typically used to embed one website into another website. For example, yelp.com embeds google.com/maps to provide map functionality.
Here, you wouldn’t want Yelp to be able to read what’s inside a Google website just by virtue of embedding it. There could be private user information in there. Similarly, you wouldn’t want Google to be able to read what’s inside a Yelp website just by virtue of having been embedded.
This means communication to and from the iframe is heavily restricted by browsers. When an iframe has a different origin than its container (e.g. yelp.com vs. google.com), they are completely isolated. Then, the only way to communicate with the iframe is via message-passing. These messages are, for all intents and purposes, plain strings. Upon receipt, each website is free to take action on those messages, or ignore them.
They’re so isolated, in fact, that the HTML spec allows browsers to implement iframes as separate processes, if they choose to.
Now that we know how iframes work, we can implement plugins by creating a new iframe every time a plugin runs, and sticking the plugin’s code inside the iframe. The plugin can do anything it wants inside the iframe. However, it won’t be able to interact with Figma documents except through explicit, whitelisted messages. The iframe is also given a null origin, which means that any attempt to make a request to figma.com will be denied by the browser’s cross-origin resource sharing policies.
Effectively, the iframe acts as a sandbox for the plugin. Furthermore, the security properties of the sandbox are guaranteed by browser vendors, who have spent years searching for and fixing vulnerabilities in the sandbox.
An actual plugin using this sandbox model would use an API that we add into the sandbox and look roughly like this:
const scene = await figma.loadScene() // gets data from the main thread
scene.selection[0].width *= 2
scene.createNode({ type: 'RECTANGLE', x: 10, y: 20, ... })
await figma.updateScene() // flush changes back to the main thread
The key thing is that plugins initialize by calling loadScene (which sends a message to Figma to get a copy of the document) and finish by calling updateScene (which sends the changes made by the plugin back to Figma). Note that:
We get a copy of the document instead of using message passing for every read and write to a property. Message-passing has overhead on the order of 0.1ms per round-trip, which would only allow for ~1000 messages per second.
We don’t make plugins use postMessage directly, since it would be cumbersome to work with.
We went with this approach and built it for about a month. We even invited some alpha testers. However, it soon became clear that there were two major flaws with this approach.
Problem #1: async/await is not user friendly
The first feedback we got is that people were having trouble with having to use async/await. In this approach, it is unavoidable. Message-passing is fundamentally an asynchronous operation, and there’s no way in JavaScript to make a synchronous, blocking call to an asynchronous operation. At the very least, you need the await keyword which also requires marking all calling functions async. All things considered, async/await is still a fairly new JavaScript feature that requires some non-trivial understanding of concurrency. This is a problem, because we expect many of our plugin developers to be designers who may be comfortable with JavaScript, but may not have formal CS education.
Now, if it were only necessary to use await once at the beginning of the plugin and once at the end, it wouldn’t be so bad. We would just tell developers to always use await with loadScene and updateScene even if they don’t quite understand what it does.
The problem is that some API calls require a lot of complex logic to run. Changing a single property on a layer sometimes causes multiple layers to update. For example, resizing a frame can recursively apply constraints to its children.
These behaviors are often complex and nuanced algorithms. It would be a bad idea to re-implement them again for plugins. That logic is also in our compiled WebAssembly binary, so it’s not easy to reuse. And if we don’t run that logic inside the plugin sandbox, plugins will be reading stale data.
Problem #2: serializing the document is slow
The second problem with the approach is that it requires serializing large parts of the document before sending them to the plugin.
It turns out that people can create very, very large documents in Figma to the point of hitting memory limits. For example, on Microsoft’s design systems file (which we spent a month optimizing last year), it took 14 seconds just to serialize the document and send it to the plugin, before the plugin could even run. Given that most plugins are going to involve quick actions like “swap two items in my selection”, this would make plugins unusable.
Loading the data incrementally or lazily also isn’t really an option, because:
It would involve months of re-architecting the core product.
Any API that may need to wait on a piece of data that hasn’t arrived yet will now be asynchronous.
In summary, because Figma documents can contain really large amounts of data with a lot of interdependencies, the iframe approach wasn’t going to work for us.
With the iframe approach ruled out, we had to backtrack in our research.
We went back to the drawing board and spent two long weeks discussing a variety of approaches. As the simple solution didn’t work out, we had to give serious consideration to more exotic ideas. There were many — too many to fill the margins of this blog post.
But most approaches had one or more major disqualifying flaws:
An API that would be too difficult to use (e.g. accessing the document through a REST API or GraphQL-like queries)
Dependence on browser features that vendors have removed or are trying to remove (e.g. synchronous XHR in a service worker, shared array buffers)
A need for significant research or re-architecting of our application that could take months before we could even validate the approach (e.g. loading a copy of Figma in an iframe synced via CRDTs, hacking green threads into JavaScript with generators by cross-compiling)
At the end of the day, we concluded that we had to find a way to create a model where plugins can directly manipulate the document. Writing a plugin should feel like a designer automating their actions. So we knew we’d have to allow plugins to run on the main thread.
Implications of running on the main thread
Before we dive into Attempt #2, we need to take a step back and re-examine what it means to allow plugins to run on the main thread. After all, we didn’t consider it at first because we knew that it could be dangerous. Running on the main thread sounds an awful lot like eval(UNSAFE_CODE).
The benefits of running on the main thread are that plugins can:
Directly edit the document rather than a copy of it, eliminating loading time issues.
Run our complex component updating and constraints logic without needing to have two copies of that code.
Make synchronous API calls in situations where you’d expect a synchronous API. There would be no confusion with loading or flushing updates.
Be written in a more intuitive way: plugins are just automating actions that the user would otherwise do manually using our UI.
However, now we have these problems:
1. Plugins can hang, and there is no way to interrupt a plugin.
2. Plugins can make network requests as figma.com.
3. Plugins can access and modify global state. This includes modifying our UI, creating dependencies on internal application state outside the API, or doing downright malicious things like changing the value of ({}).__proto__, which poisons every new and existing JavaScript object.
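That __proto__ hazard is easy to demonstrate in a few lines of plain JavaScript (a minimal illustration, not Figma code):

```javascript
// A minimal illustration (not Figma code) of why ({}).__proto__ is
// dangerous: mutating Object.prototype through any object literal
// poisons every object in the program, including pre-existing ones.
const existing = {}; // created before the "plugin" runs

// A malicious plugin could do this:
({}).__proto__.poisoned = "oops";

console.log(existing.poisoned); // "oops": even old objects are affected

// Undo the damage for the rest of the program:
delete Object.prototype.poisoned;
```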
We decided that we could drop the requirement for (1). When plugins freeze, it affects the perceived stability of Figma. However, our plugin model works such that they are only ever run on explicit user action. By changing the UI when a plugin runs, freezes would always be attributed to the plugin. It also means that it is not possible for a plugin to “break” a document.
What does it mean for eval to be dangerous?
To deal with the issue of plugins being able to make network requests and access global state, we must first understand exactly what it means to say that “evaling arbitrary JavaScript code is dangerous”.
If a variant of JavaScript, let’s call it SimpleScript, had only the ability to do arithmetic such as 7 * 24 * 60 * 60, it would be quite safe to eval.
You can add some features to SimpleScript like variable assignment and if statements to make it more like a programming language, and it would still be very safe. At the end of the day, it still essentially boils down to doing arithmetic. Add function evaluation, and now you have lambda calculus and Turing completeness.
In other words, JavaScript doesn’t have to be dangerous. In its most reductionist form, it’s merely an extended way of doing arithmetic. What is dangerous is when it has access to input and output. This includes network access, DOM access, etc. It’s browser APIs that are dangerous.
And APIs are all global variables. So hide the global variables!
Hiding the global variables
Now, hiding the global variables sounds good in theory, but it’s difficult to create secure implementations by merely “hiding” them. You might consider, for example, removing all properties on the window object, or setting them to null, but the code could still get access to global values such as ({}).constructor. It would be very challenging to find all the possible ways in which some global value might leak.
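To make this concrete, here is a sketch in plain JavaScript (assuming a naive sandbox that merely deleted or nulled out window properties) of how the intrinsics leak back out:

```javascript
// Why merely hiding window properties fails: any value the sandboxed
// code can create leads back to powerful intrinsics via its prototype
// chain and constructor, even if window.Object and window.Function
// were deleted.
const hiddenObject = ({}).constructor;                // the real Object
const hiddenFunction = (function () {}).constructor;  // the real Function

// The Function constructor evaluates code in the true global scope,
// so the "sandbox" has fully escaped:
const getGlobal = hiddenFunction("return globalThis;");
console.log(getGlobal() === globalThis); // true
```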
Rather, we need some stronger form of sandboxing where those global values never existed in the first place.
Consider the previous example of a hypothetical SimpleScript that only supports arithmetic. It’s a straightforward CS 101 exercise to write an arithmetic evaluation program. In any reasonable implementation of this program, SimpleScript would simply be unable to do anything other than arithmetic.
Now, expand SimpleScript to support more language features until it becomes JavaScript, and this program is called an interpreter, which is how JavaScript, a dynamic interpreted language, is run.
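The CS 101 exercise mentioned above can be sketched as a small recursive-descent evaluator. This is an illustrative toy supporting only + - * / and parentheses over non-negative numbers; by construction, it simply has no way to touch browser APIs:

```javascript
// A toy "SimpleScript" evaluator: a recursive-descent parser for
// + - * / and parentheses. The only thing it can do is arithmetic.
function evaluate(src) {
  let pos = 0;
  function peek() { return src[pos]; }
  function skipSpaces() { while (src[pos] === " ") pos++; }
  function parseExpr() {            // term (("+" | "-") term)*
    let value = parseTerm();
    skipSpaces();
    while (peek() === "+" || peek() === "-") {
      const op = src[pos++];
      const rhs = parseTerm();
      value = op === "+" ? value + rhs : value - rhs;
      skipSpaces();
    }
    return value;
  }
  function parseTerm() {            // factor (("*" | "/") factor)*
    let value = parseFactor();
    skipSpaces();
    while (peek() === "*" || peek() === "/") {
      const op = src[pos++];
      const rhs = parseFactor();
      value = op === "*" ? value * rhs : value / rhs;
      skipSpaces();
    }
    return value;
  }
  function parseFactor() {          // number | "(" expr ")"
    skipSpaces();
    if (peek() === "(") {
      pos++;                        // consume "("
      const value = parseExpr();
      skipSpaces();
      pos++;                        // consume ")"
      return value;
    }
    const start = pos;
    while (/[0-9.]/.test(src[pos])) pos++;
    return Number(src.slice(start, pos));
  }
  return parseExpr();
}

console.log(evaluate("7 * 24 * 60 * 60")); // 604800
```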
Implementing JavaScript is too much work for a small startup like ours. Instead, to validate this approach, we took Duktape, a lightweight JavaScript interpreter written in C, and compiled it to WebAssembly.
To confirm that it works, we ran test262 on it, the standard JavaScript test suite. It passes all ES5 tests except for a few unimportant test failures. To run plugin code with Duktape, we would call the eval function of the compiled interpreter.
What are the properties of this approach?
This interpreter runs in the main thread, which means we can create a main-thread based API.
It’s secure in a way that’s easy to reason about. Duktape does not support any browser APIs — and that’s a feature! Furthermore, it runs as WebAssembly which itself is a sandboxed environment that has no access to browser APIs. In other words, plugin code can communicate with the outside world only through explicit whitelisted APIs by default.
It’s slower than regular JavaScript since this interpreter is not a JIT, but that’s ok.
It requires the browser to compile a medium-size WASM binary, which has some cost.
Browser debugging tools don’t work by default, but we spent a day implementing a console for the interpreter to validate that it’d be at least possible to debug plugins.
Duktape only supports ES5, but it’s already common practice in the web community to cross-compile newer JavaScript versions using tools such as Babel.
(Aside: a few months later, Fabrice Bellard released QuickJS which supports ES6 natively.)
Now, compiling a JavaScript interpreter! Depending on your inclinations or aesthetics as a programmer, you might either think:
THIS IS AWESOME!
or
…really? A JavaScript engine in a browser that already has a JavaScript engine? What next, an operating system in a browser?
And some amount of suspicion is healthy! It is best to avoid re-implementing the browser unless we absolutely have to. We already spent a lot of effort implementing an entire rendering system. It was necessary for performance and cross-browser support, and we are glad we did it, but we still try not to re-invent the wheel.
This is not the approach we ended up going with. There’s an even better approach. However, it was important to cover as a step towards understanding our final sandboxing model which is more complicated.
While we had a promising approach compiling a JS interpreter, there was one more tool to look at. We found a technology called the Realms shim created by the folks at Agoric.
This technology describes creating a sandbox and supporting plugins as a potential use case. A promising description! The Realms API looks roughly like this:
let g = window; // outer global
let r = new Realm(); // realm object
let f = r.evaluate("(function() { return 17 })");

f() === 17 // true
Reflect.getPrototypeOf(f) === g.Function.prototype // false
Reflect.getPrototypeOf(f) === r.global.Function.prototype // true
This technology can actually be implemented using existing, albeit lesser-known JavaScript features. One aspect of the sandbox is to hide the globals. The shim does this with two key pieces of the puzzle: the with statement and the Proxy object.
The with (obj) statement creates a scope where variable lookups can be resolved using the properties of obj. In this example, we can resolve the variables PI, cos and sin as properties of the Math object. On the other hand, console is not a property of Math and gets resolved from the global scope.
with (Math) {
  a = PI * r * r
  x = r * cos(PI)
  y = r * sin(PI)
  console.log(x, y)
}
Proxy objects are the most dynamic form of JavaScript objects.
The most basic JavaScript object returns a stored value on a property access obj.x.
A more advanced JavaScript object can have getter properties that return the result of evaluating a function. The access obj.x calls the getter for x.
Proxies get to evaluate any property access by running a handler function.
A proxy (simplified for presentation purposes) can return undefined when any property on it is accessed, other than the ones in a whitelist object. When you give this proxy as the argument of the with statement, it captures all variable resolutions and never uses the global scope to resolve a variable.
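A minimal sketch of this with-plus-Proxy technique follows. This is my simplified illustration of the idea, not the actual Realms shim code:

```javascript
// A Proxy claims every variable name, so free variables in sandboxed
// code never reach the real global scope; only whitelisted names
// resolve to real values.
const whitelist = { Math: Math };

const scopeProxy = new Proxy(whitelist, {
  has() {
    return true; // claim every name, blocking lookup in outer scopes
  },
  get(target, key) {
    if (key === Symbol.unscopables) return undefined;
    return key in target ? target[key] : undefined;
  },
});

function runSandboxed(code) {
  // The Function constructor produces non-strict code, so `with` is allowed.
  const fn = new Function("scopeProxy", `with (scopeProxy) { return (${code}); }`);
  return fn(scopeProxy);
}

console.log(runSandboxed("Math.max(1, 2)")); // 2: Math is whitelisted
console.log(runSandboxed("typeof process")); // "undefined": global hidden
```

Note that expressions like ({}).constructor still reach the real intrinsics in this sketch, since the code runs in the same engine; that is precisely the leak the same-origin iframe trick described next is meant to close.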
Well, almost. It’s still possible to access certain globals via expressions such as ({}).constructor. Furthermore, the sandbox does need access to some globals. For example, Object is a global, often used in legitimate JavaScript code (e.g. Object.keys).
To give plugins access to these globals without messing up the window, the Realms sandbox instantiates a new copy of all these globals by creating a same-origin iframe. This iframe is not used as a sandbox the way we did in our first attempt. Same-origin iframes are not subject to CORS restrictions.
Instead, when an iframe is created in the same origin as the parent document:
It comes with a separate copy of all the globals, e.g. Object.prototype
These globals can be accessed from the parent document.
These globals are put into the Proxy object’s whitelist, so that the plugin has access to them. Finally, this new iframe comes with a new copy of the eval function that differs from the existing one in an important way: even built-in values that are only accessible by syntax, such as ({}).constructor, will also resolve to a copy from the iframe.
This sandboxing approach using Realms has many nice properties:
It runs on the main thread.
It is fast, because it still uses the browser’s JavaScript JIT to evaluate the code.
Browser developer tools work.
But one question remains. Is it secure?
Implementing the API using Realms securely
We feel good about the sandboxing capabilities of Realms. Although it involves more subtleties than the JavaScript interpreter approach, it still works as a whitelist rather than a blacklist, which keeps its implementation small and auditable. It is created by respected members of the web community.
However, using Realms isn’t the end of the story, since it’s simply a sandbox in which plugins wouldn’t be able to do anything. We still need to implement APIs that the plugin can use. Those APIs will need to be secure too, as most plugins do need to be able to show some UI and make network requests to be useful (e.g. populate a design with data from Google Sheets).
Consider, for example, that the sandbox does not contain a console object by default. After all, console is a browser API, not a JavaScript feature. It is possible to pass it as a global to the sandbox.
realm.evaluate(USER_CODE, { log: console.log })
Or perhaps hide the original value inside a function so that the sandbox can’t modify it:

realm.evaluate(USER_CODE, { log: function (...args) { console.log(...args) } })
This, unfortunately, is a security hole. Even in the second example, the anonymous function was created outside the realm, but is given directly to the realm. This means that the plugin could reach outside the sandbox by navigating through the log function’s prototype chain.
The correct way to implement console.log is to wrap it in a function created inside the realm. A simplified example is shown here (in practice, it’s also necessary to convert any exceptions thrown between realms).
// Create a factory function in the target realm.
// The factory returns a new function holding a closure.
const safeLogFactory = realm.evaluate(`
  (function safeLogFactory(unsafeLog) {
    return function safeLog(...args) {
      unsafeLog(...args);
    }
  })
`);

// Create a safe function
const safeLog = safeLogFactory(console.log);

// Test it, abort if unsafe
const outerIntrinsics = safeLog instanceof Function;
const innerIntrinsics = realm.evaluate(`log instanceof Function`, { log: safeLog });
if (outerIntrinsics || !innerIntrinsics) throw new TypeError();

// Use it
realm.evaluate(`log("Hello outside world!")`, { log: safeLog });
In general, the sandbox should never have direct access to an object created outside the sandbox as it could get access to global scope. It’s also important that the API is careful about manipulating an object from inside the sandbox, as it risks getting mixed with objects outside the sandbox.
This poses a problem. While it is possible to build a secure API, it is not tenable to have our developers worry about subtle object origin semantics every time they want to add a new function to the API. So how can we fix that?
The problem is that building the Figma API directly on top of Realms makes it so that each API endpoint needs to be audited, including its input and output values. The surface area created is too large.
Despite the fact that code inside the Realms sandbox runs using the same JavaScript engine (and gives us convenient tooling benefits), it still helps to pretend that we live under the restrictions of the WebAssembly approach.
Consider Duktape, the JavaScript interpreter compiled to WebAssembly from Attempt #2. It’s not possible for main-thread JavaScript code to directly hold a reference to an object inside the sandbox. After all, inside the sandbox, WebAssembly manages its own heap and all JavaScript objects are just segments in this heap. In fact, Duktape might not even implement JavaScript objects using the same memory representation as the browser engine!
As a result, implementing an API for Duktape would only be possible via low-level operations such as copying integers and strings in and out of the virtual machine. It would be possible to hold a reference to an object or function inside the interpreter, but only as an opaque handle.
Such a handle-based interface is what the API implementation would use, and it maps more or less 1:1 to Duktape’s interpreter API. After all, Duktape (and similar virtual machines) were built precisely to be embedded and to allow the embedder to communicate with the VM.
Using this interface, even a simple object like {x: 10, y: 10} has to be passed to the sandbox piece by piece, as primitive values.
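To make the shape of such a low-level interface concrete, here is a hedged sketch in plain JavaScript. Every name in it (Vm, newObject, setNumberProperty) is an illustration, not Figma’s or Duktape’s actual API:

```javascript
// Hedged sketch of a handle-based VM interface. Objects inside the
// sandbox are never exposed directly; the embedder only holds opaque
// integer handles and moves primitives in and out one at a time.
class Vm {
  constructor() {
    this.heap = new Map(); // handle -> object living inside the "VM"
    this.nextHandle = 1;
  }
  newObject() {
    const handle = this.nextHandle++;
    this.heap.set(handle, {});
    return handle; // an opaque handle, not the object itself
  }
  setNumberProperty(handle, key, value) {
    this.heap.get(handle)[key] = value;
  }
  getNumberProperty(handle, key) {
    return this.heap.get(handle)[key];
  }
}

// Passing {x: 10, y: 10} into the sandbox, primitive by primitive:
const vm = new Vm();
const point = vm.newObject();
vm.setNumberProperty(point, "x", 10);
vm.setNumberProperty(point, "y", 10);
console.log(vm.getNumberProperty(point, "x")); // 10
```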
This low-level interface can be implemented equally well with the Realms sandbox. Such an implementation is a relatively small amount of code (~500 LOC in our case). This small amount of code then needs to be audited carefully. However, once that is done, future APIs can be built on top of this interface without worrying about sandbox-related security issues. In the literature, this is called the membrane pattern.
Essentially, this treats both the JavaScript interpreter and the Realms sandbox as “some separate environment in which JavaScript runs.”
There’s one more key aspect of creating a low-level abstraction over the sandbox. While we are confident about the security of Realms, it never hurts to be extra careful when it comes to security. We recognize the possibility that Realms could have an undiscovered vulnerability that will one day turn into an issue we need to deal with. But this is why we had a few paragraphs talking about compiling an interpreter that we don’t even use. Because the API is implemented with an interface whose implementation can be interchanged, using an interpreter remains a valid backup plan that we could use without reimplementing any APIs or breaking any existing plugins.
We now have a sandbox that can run arbitrary plugins safely, and an API that allows these plugins to manipulate Figma documents. This already opens up a lot of possibilities.
However, the original problem we were trying to solve was to build a plugin system for a design tool. To be useful, most of these plugins will want the ability to create a user interface, and many will want some form of network access. More generally, we would like plugins to be able to leverage as much of the browser and JavaScript ecosystem as possible.
We could expose safe, restricted versions of browser APIs one at a time, carefully, just like in the example of console.log above. However, browser APIs (and the DOM in particular) are a huge surface area, larger than even JavaScript itself. Such an attempt would likely either be too restrictive to be useful, or it would have a security flaw.
We solve this problem by reintroducing, yet again, the null-origin iframe. Plugins can create an iframe (which we show inside the Figma editor as a modal) and put arbitrary HTML and JavaScript in it.
The difference with our original attempt at using iframes is that now, the plugin is made of two components:
The part that can access the Figma document and runs on the main thread, inside a Realms sandbox.
The part that can access browser APIs and runs inside an iframe.
These two parts can communicate with each other via message passing. This architecture makes using browser APIs a little more tedious than if we could run both parts in the same environment. However, it is the best we are able to do given current browser technologies, and it has not prevented developers from creating amazing plugins in the two months since we opened the beta.
We may have taken a winding journey to get here, but ultimately we were excited to find a viable solution for implementing plugins in Figma. The Realms shim allowed us to isolate third-party code while still letting it run in a familiar browser-like environment.
While this is the best solution for us, it may not be the right approach for every company or platform. If you need to isolate third-party code, it’s worth evaluating if you have the same performance and API ergonomic concerns as we did. If not, isolating code via iframes may be sufficient, and simple is always good. We would have liked to stay simple!
Ultimately we cared deeply about the final user experience — both that users of plugins would find them stable and reliable, and that developers with basic JavaScript knowledge would be able to build them. Fulfilling those accessibility and quality values made all the time we spent trying to get this right worth it.
One of the most exciting things about working on the engineering team for a browser-based design tool is that we encounter a lot of uncharted territory, and we get to create new ways of handling technical challenges like this. If these kinds of engineering adventures sound like fun to you, check out the rest of our blog for more. Or join us, we’re hiring!
Open source is a dream and a gateway to an amazing career — I’m a testament to that. One of the most amazing open source projects to ever exist, Mozilla Firefox, is a project I’m proud to work on as an employee of Mozilla. It’s rewarding, challenging, and a unique experience…and I want you to join me!
Take a few moments with me to learn how to build the amazing Firefox!
Step 1: Clone Mozilla Central
Mozilla Central is the name of the Mercurial repository that contains the source code of Firefox. Start by installing Mercurial and cloning “MC”:
hg clone https://hg.mozilla.org/mozilla-central/
hg clone {repo_url} is the first and most basic mercurial command, but check out my Mercurial Productivity Tips post to learn more hg commands! “MC” will be installed in a mozilla-central directory.
Step 2: Install Dependencies
From C to Rust, Firefox has quite a few dependencies you’ll need to install. From within the mozilla-central directory, run the following:
./mach bootstrap
The bootstrap command will install dependencies as well as configure mercurial extensions as required. Congratulations — you’re now ready to build!
Step 3: Create a mozconfig
You’ll thank me for this one! Create a mozconfig file to use artifact builds, which will save loads of time during the build process by downloading pre-built binaries for Firefox’s internals.
# Automatically download and use compiled C components:
ac_add_options --enable-artifact-builds
# Write build artifacts to:
mk_add_options MOZ_OBJDIR=./objdir-frontend
Place the code above in your mozconfig file and your builds will be super fast!
Step 4: Build!
Once you have the code and the dependencies, it’s time to build the amazing Firefox! You can build Firefox with the following command:
./mach build
Hello Firefox!
Step 5: Run Firefox
Once you’ve built the amazing Firefox, you can run Firefox with the following mach command:
./mach run --jsdebugger
Congratulations! You’ve taken the Firefox source code and turned it into an application that you can run! The --jsdebugger option opens the “browser toolbox” which allows you to debug the Firefox you’ve just built.
Updating Mozilla Central Code
The Mozilla Central repository is updated several times a day as Mozilla employees and contributors like you submit patches. You can update your local checkout with the following:
hg pull && hg update --clean
Your local commits will always be in draft status, while patches that have been merged into MC will be in public status.
Configuring Options
Whenever Firefox developers and the community develop new features, those features are hidden behind a preference until they reach maturity. To toggle feature flags, you can visit about:config in your local Firefox or any Firefox release.
The config page allows you to toggle and set a number of options. Many “edge” features debut behind feature flags in Firefox Nightly.
Congratulations! You’ve turned a massive repository of code into a living, breathing Firefox web browser! In the next post in the Firefox series, we’ll cover contributing and testing!