Contributor and SMX speaker Brooke Osmundson explains how machine learning is changing our account work and why we need to be smarter about layering our campaign assets in 2020.
Below is the video transcript:
Hi, my name is Brooke Osmundson and I am the associate director of research for NordicClick Interactive. And today I want to talk about the top things that marketers should focus on the most for 2020.
The first thing I want to talk about is machine learning. It’s no secret that it is part of our lives right now, and part of your jobs. But what I think is going to happen is that it’s going to shift your day-to-day focus within your accounts. We’re going to see fewer tactical pieces that we have to spend our time on, and that’s really going to help you be more strategic in your accounts. So ask yourself: with machine learning and the data it can give you, what can it do to free up more of your time, so you can start thinking bigger picture and focus on those bigger-picture decisions?
The second piece I want to talk about is layering audiences on top of your campaigns, given the changes to search match types. Match types are kind of not a thing anymore, so we’ve got to be smarter about layering on the assets we have available to us in our campaigns in order to really reach the right customer based on what we know about them.
More predictions for 2020
Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.
About The Author
Brooke Osmundson serves as the associate director of paid search at NordicClick Interactive, with over six years’ experience. She helps her clients grow their digital strategies using tactics from paid search, social media and programmatic marketing. With her experience in and passion for analytics, strategic planning and everything digital, she helps create relevant customer experience strategies at every stage of the user funnel. Brooke has been featured in the Journal of Digital & Social Media Marketing, has spoken at SMX West and PubCon Pro, contributed to NordicClick’s 2018 US Search Award for “Best Use of Video in a Search Campaign” and most recently had her client strategy shortlisted for two 2019 US Search Awards. When not working, you can find her enjoying a round of golf, catching up on the latest episodes of Law & Order: SVU or completing never-ending house renovations.
A few years ago I didn’t know what the term “accessibility” meant. I built Web sites that were partially inaccessible because I didn’t know better. Fast forward to today: I know enough to write, speak and run workshops on accessibility, helping others build a more accessible and inclusive Web. Much like everyone else in our field, I am still learning. But since I started, I have learned a lot of valuable lessons and developed core values that drive my work today. Here are a few of them.
Semantic HTML is the foundation of a truly accessible Web.
Semantic HTML is the universal language that all devices accessing the internet understand. It is the language you use to communicate your content to these various devices, including but not limited to browsers, reading apps, screen readers, smart watches, and more.
HTML is semantic, or in other words, it is descriptive and provides meaning: each HTML element describes the type of content it presents. So if you have a heading, you use a heading element. If you have a paragraph, you use a <p> tag. In other words, semantic HTML means using the correct HTML elements for their correct purpose.
By using correct elements, your document content will have conveyable structure and meaning.
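To make this concrete, here is a minimal sketch of a page marked up with semantic elements (the headings and text are placeholders, not from the original article):

```html
<body>
  <header>
    <nav><!-- site navigation links --></nav>
  </header>
  <main>
    <article>
      <h1>Article title</h1>
      <p>Each element above describes the kind of content it holds,
         so browsers, screen readers and reader modes can all
         interpret and present it appropriately.</p>
    </article>
  </main>
  <footer><!-- site footer --></footer>
</body>
```

Compare this to a page built entirely out of generic <div> elements: the rendered pixels can look identical, but the structure and meaning are gone.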
Structure is important because it helps interoperability. Interoperability is the ability of different systems, devices, applications or products to connect and communicate in a coordinated way, without effort from the end user. In other words, it allows more devices to interpret and access your content, including devices that will show up in the future.
Structure helps applications like reading apps and reader modes (such as Safari’s reader mode) as well as environments like Windows High Contrast Mode understand your content and style it in ways that improve the user experience. This is only possible when the proper HTML semantic elements are used, such as <header>, <nav>, <main> and <article>, among many others available in HTML. These elements describe the type of content they contain. Without them, these applications wouldn’t be able to tell what that content is, and therefore won’t be able to style it properly. This increases the risk of making the content less accessible, if not completely inaccessible.
Structure is also important because it allows your users to navigate your content more efficiently. Screen reader users rely on proper document structure to jump more quickly to the areas of the page they need. They do that using various quick/hot keys (power commands, but for screen reader users). If you don’t use proper landmarks (exposed to screen readers via semantic HTML elements like <main> and <nav>), screen reader users lose much of that ability to navigate your pages efficiently.
This is the Müller-Lyer illusion. You’ve probably seen it before: it consists of two lines, each with forked ends. The middle portion of the top line looks longer than the middle portion of the bottom line. However, when you measure the length of each middle portion, they are the exact same length.
There are many illusions like this.
Are the shapes in this image moving?
Is there a spiral in this image?
Are there gray dots at the intersections of this grid?
While these pictures are fun to look at, after seeing the first few, you probably aren’t fooled. You’ve learned how to see optical illusions: Even though you see moving shapes, different-sized lines, spirals, and gray dots, you know that none of those things are really there.
The moon illusion
The previous examples are all pretty contrived. They were designed to take advantage of how your brain perceives light and shadow. But there are other, more subtle examples that you experience every day.
Next time you’re outside at night, take a look at the moon. How big does it look? If the moon is near the horizon, it will look relatively large and close. When it’s high in the night sky, it will look small and far away.
But no matter where it is in the sky, the moon is always the same apparent size: if you held a ruler at arm’s length, it would measure the same throughout the night! You don’t even need a ruler to check: take a sheet of paper and roll it up into a narrow tube. Point it at the rising moon and adjust the tube’s size until it’s a little larger than the moon’s diameter. Tape the tube so it stays the same size and look at the moon again a few hours later. You’ll see that it fills the same space.
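If you would rather check the geometry than the tube, the moon’s angular size follows from its diameter and its distance alone, and a few lines of code (using round published figures) confirm that it spans roughly half a degree of sky no matter where it sits in its arc:

```python
import math

def angular_size_deg(diameter_km, distance_km):
    """Angular size of a sphere seen from a given distance, in degrees."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Round published figures: lunar diameter ~3,474 km, mean distance ~384,400 km.
moon = angular_size_deg(3474, 384400)
print(round(moon, 2))  # prints 0.52, i.e. about half a degree
```

The distance to the moon changes only slightly over a single night, so the computed angle barely moves; the dramatic size change you perceive at the horizon happens entirely in your head.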
Today, we have satellites, sophisticated ground-based telescopes, and human expeditions to space to measure the moon’s size. But early astronomers like Copernicus and Newton didn’t have that technology. They had to rely on the naked eye to make accurate estimations of the moon’s size. They used these simple observations to predict the moon’s orbit; learning how to see was a major breakthrough.
If Copernicus and Newton believed their intuition — that the moon’s apparent size changes throughout the night — they would conclude that the moon is closer to the earth at the horizon, and farther at its zenith (the highest point of its arc). Its path through space would be an ellipse:
However, the moon’s apparent size doesn’t change. This means that the moon is nearly the same distance from the earth throughout its arc. Its orbit is closer to a circle:
Knowing that the lunar orbit was circular (and not elliptical, as it first appears) was the key to understanding how the moon interacted with the Earth’s oceans to create the tides. After learning how to see the moon’s orbit, scientists could make accurate forecasts of high and low tide (useful for shipping goods) and create accurate calendars (useful for planning military maneuvers).
How designers see
These phenomena show the difference between merely seeing things and knowing how to see things. There are lots of things that you see every day that you merely see; there are some things you’ve learned how to see.
This is how professional creative workers (digital product designers like me, for example) do our jobs. We don’t have a mystical sixth sense, some X-ray vision that allows us to access hidden insights and information. Through thousands of hours of experimentation and study, we’ve learned how to see the kinds of things we are asked to design.
Take websites, for example. The average person sees nearly 100 pages every day. Despite such familiarity, the average person hasn’t learned how to see websites. Their impressions and beliefs about web design are susceptible to biases such as banner blindness. Banner blindness is the phenomenon where website visitors subconsciously ignore content in certain positions (at the very top or in the rightmost column) or sizes (wide rectangles) because it resembles advertisements.
My experience as the design director of wsj.com taught me how to see websites through the lens of banner blindness. Advertisers often wanted their ad to be placed in a prominent position at the very top of the homepage, assuming that would get the most attention. However, I knew that other, smaller placements further down the page performed better. One treatment in particular, called a membership tile, was our best-performing ad day after day; it was placed near other articles to avoid the banner blindness effect.
Despite my knowledge and experience, and even armed with the data to back up my claims, it was still difficult to convince stakeholders of banner blindness. It was like telling someone that the moon is the same apparent size throughout the night.
How to learn how to see
Knowing how to see is a valuable but overlooked skill. No matter what it is you do for a living, learning how to see will improve your ability to make judgements and do impactful work. And it’s really quite simple:
Ask the stupid questions.
What we initially think are stupid questions often turn out to be informative and worthwhile. Asking them gives us the opportunity to explore and learn from conventional wisdom, and helps us take nothing for granted.
For example: why are links on the internet usually blue?
Seek multiple explanations.
It’s tempting to take the first plausible explanation of an observation as the truth. But don’t be content with a single story. By finding other explanations, you’ll either strengthen your initial understanding or discover an even better line of reasoning.
Maybe links are blue because an engineer creating the first web browser thought it should be blue, and that was enough to create a standard. But maybe they’re blue because decades of research has shown that blue is the color that our brains can locate fastest on a colorful page.
Challenge your own assumptions.
The biggest difference between merely seeing and knowing how to see is understanding that our brains can (and often do) deceive us. That means that our assumptions are wrong much of the time. Being willing to be wrong, that is, being humble about our beliefs, makes it easier to replace seeing with knowing.
Originally, links were blue simply because the early developers of browsers thought it looked good. But accessibility research has shown that blue is, in fact, a good color for links. Joe Clark explains in Building Accessible Websites:
“Red and green are the colors most affected by color-vision deficiency. Almost no one has a blue deficiency. Accordingly, nearly everyone can see blue, or, more accurately, almost everyone can distinguish blue as a color different from others.”
Links on the internet have remained blue by default, but most designers and developers aren’t just following tradition. They understand the important accessibility concerns of technology, and constantly revisit defaults to create the best experience for the most users.
We spend most of our day perceiving, taking in far more information than we can reasonably process without some shortcuts. For the majority of what we see, it’s OK to just see, to take things at face value. But take the time to learn how to see some of what you experience. You’ll be able to make better decisions, share deeper insights, and, overall, enrich your understanding of the world around you.
Use competitive usability tests to improve your own product.
You don’t have to copy your competitors, but you don’t have to ignore them either. By using the two user testing methods explained in this article, you can quickly see what your competitors are doing right and wrong, and start learning from them.
When I started my career in UX, people were always telling me how they liked this function at service X that they had used or seen. Then they asked me to copy it. That made me feel bad, because it was basically like stealing work from other people.
Other people were telling me that I should never even consider copying functions from existing services, because the fact that it worked for them did not mean that it would work for me. But that also felt wrong, because if someone else had already spent time inventing the wheel, why would I try to do it all over again?
For a long time I didn’t know what to do with this. I copied some functions other people had already made, trying to see if that would work for me. I also tried to come up with new stuff that no one had ever done before, to see if that would work.
That was until I started to do competitive usability tests. And it worked really well. I had already done my fair share of regular usability tests. So I knew how to come up with a hypothesis, make a user scenario, set up a user test and collect results to analyze and report. Doing this with competitors was easy.
One thing that is really important to know when it comes to testing against your competitors is that you should be collecting comparative data. You’re always collecting data when you are doing a usability test, but this time you need to be able to compare the data from a competitor with the data from your own service.
When I started testing competitors, I had the luck that my colleagues pointed me in the right direction. We had mapped main functions in the website of our client and two important competitors as a benchmark, and we were wondering what these functions were worth. But we also wanted to have some statistics that we could show to our client in order to convince them.
So in the usability test, I asked users if they could try and find these functions in the websites of our client and their competitors. For every user that could find a function on a website I would add a point to that website. After five tests, I could calculate the exact scores for every website, and compare them to each other.
This technique is called a traffic light usability test. There are a number of ways to use this kind of usability test. For example, you can also look at how long it takes a user to perform an action, to measure and compare how hard it is for the user.
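The bookkeeping behind such a test is simple counting. As a sketch (the website names and per-user results below are made up for illustration, not from the actual study), the scoring boils down to tallying successful finds per site:

```python
# Hypothetical results: for each website, whether each of five test
# users found a given function (1 = found, 0 = not found).
results = {
    "our-client.example":   [1, 0, 1, 1, 0],
    "competitor-a.example": [1, 1, 1, 1, 1],
    "competitor-b.example": [1, 1, 0, 1, 0],
}

# One point per successful find; compare totals across websites.
scores = {site: sum(found) for site, found in results.items()}
for site, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{site}: {score}/{len(results[site])} functions found")
```

In practice you would keep one such tally per function per website, so that both the per-site totals and the hardest-to-find individual functions fall out of the same spreadsheet.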
So far I have tried two kinds of competitive usability tests, and they both worked really well for me. I will explain for each of them how they work and when you would use them to do research on your product or service.
The first test is used for when you need to find out how your product is performing compared to its competitors. You want to know how far ahead or behind you are compared to other products, or you want to know where your focus should be. I had the following hypothesis in my case.
The marketing website for my kitchen store has a lot of functionality compared to the websites of my competitors, but a lot of users still choose their services over mine. I think their functions are easier to find than the functions on my website.
To test this hypothesis, we began with a list of all the functionalities on the website of our client and those of two competitors. We didn’t think it would matter if there were functions on our list that were not on the website of our client. If they were on the website of competitors, we still wanted to know if users could find them and why.
Next, we chose two competitors that were most relevant to our client. There were actually a lot more competitors, but we couldn’t make users look at all of them, that would simply have taken too long.
We also set up screen-recording and eye-tracking.
We invited five users for our tests. We explained to them that they were going to get a list with functions that they needed to find on three different websites.
Because we didn’t care about how long it took them to find a function, they did not have to follow any particular order. They could just browse the website like they would normally do, and tick off any functionality on the list they came across. While they were browsing, we asked if they could tell us what they were doing and why.
After looking at each website, we made sure our users checked the list one more time to see if they hadn’t missed any functions. We recorded their answers digitally, so we could easily collect the results in a spreadsheet and calculate the scores.
We looked back at our eye-tracking and recording data to see what users did. Often, users told us that they had found a certain functionality while they were looking for something else. Or they told us that they did not see a function, even though their eyes had actually passed over it; in that case it probably just didn’t seem interesting to them at the time.
Because we had recordings we could rewatch the tests later to see if users checked off any functionalities incorrectly. Knowing about this was vital information because it told us something about why they could not find certain functions.
In the end, we had three scores for every website that we wanted to compare. We knew which website performed best, which functions were easy to find, and which were not. But most importantly, we knew why functions were hard to find because we asked the right questions and analyzed our recordings.
We were able to confirm that despite our client’s website having more functionality, the functionality on the websites of competitors was easier to find. But more importantly, we were able to tell why those functions were hard to find.
We discovered that functionality was scattered across the website of our client, while competitors offered their functionality in organised flows, resembling the way people find inspiration for their new kitchen in real life.
We even discovered that men and women prefer different flows when it comes to buying a new kitchen. While men want to start planning and visualising, women are more likely to look for inspiration and respond to personal style.
The second test is for when your competitors are offering a functionality that your website does not have, and you want to know if your users could benefit from this functionality on your website. In my example, where we wanted to check if product reviews would be relevant to our users, I had the following hypothesis.
My pet food webshop does not enable users to leave reviews and read reviews of other people, but other pet food webshops do offer this functionality to their users. I think my users will also be interested in this functionality.
Normally when you would like to test a new function, you would need to build a design or a prototype that users can see and use before they can tell you anything insightful. But when your competitors already have similar functionality that you can access, you can just use that to show to your users in a usability test.
In my case our client gave us a list with competitors that offered similar webshops. We chose two competitors that had the product reviews we wanted to test, and where those product reviews worked well in our experience.
I also set up screen recording and eye-tracking.
I asked five users to take a look at product pages on each website, and decide whether they would be interested in buying the products offered on those pages. I did not explicitly tell them to look at the reviews, because I wanted to know if they would focus on the reviews by themselves.
For comparison, I also added a product page at the webshop of our client for users to look at, but without the product reviews. That way I could see how users would react if there were no product reviews, in contrast to the product pages with reviews.
During each test, I would ask users what they were looking at, why they were looking at it, and how it influenced their decision about buying a product. After each test, I would ask users to answer only one question: would they be interested in buying the product, and why?
Often, their stated choice didn’t match what their behavior showed according to the scores, eye-tracking and recordings. But when they were asked to choose, they opened up and started telling us more, because they felt they had been given control of the research.
Tricking your users like this can bring you insights you couldn’t have anticipated. This way you can find out what else to look into in future research, or come up with new, unexpected concepts that you can now also test.
When I was done testing, I could see on which pages my users were most likely to want to buy a product. By looking back at my notes on the questions I had asked, the screen recordings and the eye-tracking data, I could see that users almost always looked at the reviews if they were present.
I found out that most of the time, users didn’t primarily base their choice on the product reviews. But a number of users still looked at the reviews. They wanted to see whether their own choice would be confirmed by other users, or whether there were positive reviews from users who had a pet similar to theirs.
But sometimes my users would tell me they didn’t think the reviews were relevant, because their veterinarian had already advised them to buy a specific product for their pet’s condition. In the test, however, those users still looked at the reviews because they found them entertaining to read.
The users also told me what they liked about the presentation of the reviews on the product pages. Some aspects made things clearer to them, while others got in their way.
So after the test, I could advise our client not only on whether product reviews were relevant to their users, but also on how to implement the feature, even though I hadn’t designed or prototyped anything myself.
When I started my career in UX, I didn’t know about competitive usability tests, or even about usability testing in general. Because of that, I felt bad about looking at competitive products and copying their functionality.
But now that I do know about these techniques, I know how to learn from competitive websites and I have found a way to tell clients exactly what they should and should not copy.
I hope that after reading this article, you also know what to do with competitors, and that you can really start learning from them instead of just copying functionality your clients have seen and liked.
If you have any questions about this kind of usability test, feel free to ask me and I’ll try to answer as best I can. If you still think functionality of competitors should never be copied and have good reasons for this, you’re also welcome to let me know in the comments.
As marketers, we’re taught that the more we research and plan, the better our chances of getting our marketing right. But is getting it right always the right approach? By running small, experimental campaigns and learning from them, we can refine our marketing along the way.
Customers know more than marketers do
Doing some preliminary market research is great, but as the old saying goes, “The proof is in the pudding.” Our customers will validate whether our marketing hits the mark or doesn’t.
Nivea, a German skincare company, launched a “White is Purity” campaign that went down in history as one of the biggest marketing flops ever! I don’t know the details of how this campaign came to be, but I would imagine there were a lot of discussions around a boardroom with people whose job titles began with “C” who thought they had a brilliant idea.
The “White is Purity” ad hit social media as part of a larger campaign launch in the Middle East. As you can imagine, there was a huge backlash from customers, and the company quickly retracted the ads.
If Nivea had used an agile marketing approach, they still would have failed miserably with this messaging, but they would have probably failed faster and for a lot less money. With agile marketing, they could have tested the messaging with one channel and gauged customer reactions. They would have quickly learned that they missed the mark and could have switched gears before getting themselves into a public relations nightmare.
Unfortunately for Nivea, they learned the hard way.
It’s riskier to play it safe
In American culture, we’re taught in school at a young age that there is only one right answer and we’re scolded for getting something wrong. This mindset is also commonplace in corporate America where employees are fearful of failure.
As marketers, this play-it-safe way of thinking will not cut it in today’s digital world where consumers see thousands of marketing messages every day. We need to be able to free our teams up to experiment, to work in ways that they never have before and to reward learning as much as we reward getting it right.
With agile marketing, organizations embrace the idea of validated learning through quick marketing experiments. We get away from months and months of upfront planning and plan continuously and iteratively.
Waiting to release our marketing until we think we have it right may feel safer and more comfortable, but it’s actually riskier! When we release our marketing campaigns in a big bang approach there can only be two outcomes: we nailed it or failed it! And when we fail, we fail big!
It’s cheaper to fail fast
I’ve worked with a lot of leaders who don’t want to waste time letting the team experiment with new ideas and test them in the marketplace. They are so caught up in pre-promised deadlines, and have committed their teams to more work than they can handle, that experimentation and innovation can’t happen without other work falling through the cracks.
While it may seem expensive or wasteful to experiment, that couldn’t be further from the truth. It’s a lot less expensive to learn in a week or two that our ideas are bad, and then have the ability to pivot.
I once worked at a large bank that got caught up in the never-ending planning loop, fearful to release anything that wasn’t perfect. The planning went on for years, costing them millions of dollars before ever getting anything into the hands of their customers.
How you can build in experiential learning
So now that you know that the best judge of your marketing is your customers, that it’s riskier to play it safe and wait until your marketing is perfect and that it’s cheaper to fail fast, what are you going to do about it? Here are a few ways that you can start building in experiential learning at your company.
1. Give the team a problem to solve
When you tell the team exactly how to work, it stifles innovation. Instead, give the team a problem and let them figure out how they’re going to solve it. For example, you may say, “We need to build brand awareness among millennials with our new tennis shoes. I’d like you to brainstorm as many ways as you can to do this.”
It doesn’t matter if the ideas come from graphic designers, developers or copywriters. Everyone on the team should work together to help solve the problem.
2. Put ideas to the test
Give the team a set timeframe for working through their ideas, such as a week or two and then test the various ideas in the marketplace. Your customers will tell you which ones are working the best and which ones flop.
3. Keep iterating based on what you learned
Let’s say you ran 10 experiments and three performed really well. Take the top three and build upon them. Now you have validation that your messaging resonated with customers, and you know for a fact, not from a boardroom guess, that you’re on the right path.
Being a modern marketer is all about getting away from big bang campaigns and working in shorter, experiential ways that help bring us closer to what really resonates with our customers.
I want to find out what happens when we bring machine learning to cheap, robust devices that can have all kinds of sensors and work in all kinds of environments. And I want you to help. The kind of AI we can squeeze into a US $30 or $40 system won’t beat anyone at Go, but it opens the door to applications we might never even imagine otherwise.
Specifically, I want to bring machine learning to the Arduino ecosystem. This has become recently possible thanks to improvements in hardware and software.
On the hardware side, Moore’s Law may be running out of steam when it comes to cutting-edge processors, but the party’s not over when it comes to microcontrollers. Microcontrollers based on 8-bit AVR processors dominated the Arduino ecosystem’s early years, for example, but in more recent years, embedded-chip makers have moved toward more powerful ARM-based chips. We can now put enough processing power into these cheap, robust devices to rival desktop PCs of the mid 1990s.
On the software side, a big step has been the release of Google’s TensorFlow Lite, a framework for running pretrained neural networks—also known as models—on so-called edge devices. Last April, IEEE Spectrum’s Hands On column looked at Google’s Coral Dev Board, a single-board computer that’s based on the Raspberry Pi form factor, designed to run TensorFlow Lite models. The Coral incorporates a dedicated tensor processing unit and is powerful enough to process a live video feed and recognize hundreds of objects. Unfortunately for my plan, it costs $150 and requires a hefty power supply, and its bulky heat sink and fan limit how it can be packaged.
But fortunately for my plan, Pete Warden and his team have done amazing work in bringing TensorFlow Lite to chips based on ARM’s Cortex family of processors. This was great to discover, because at my open-source hardware company, Adafruit Industries, our current favorite processor is the 32-bit SAMD51, which incorporates a Cortex-M4 CPU. We’ve used the SAMD51 as the basis for many of our recent and upcoming Arduino-compatible boards, including the PyGamer, a simple battery-powered gaming handheld. What if we could use it to literally put AI into people’s hands?
Warden had created a speech-recognition model that can identify the words “yes” and “no” in an analog audio feed. I set about seeing if I could bring this to the PyGamer, and what I might do with a model that could recognize only two words. I wanted to create a project that would spark the imagination of makers and encourage them to start exploring machine learning on this kind of hardware.
I decided to make it as playful as possible. The more playful something is, the more forgivable its mistakes; there’s a reason Sony’s artificial pet Aibo was given the form of a puppy: real puppies are clumsy, sometimes run into walls and don’t always follow instructions.
I recalled the original Tron movie, where the hero is stuck in cyberspace and picks up a sidekick of sorts, a single bit that can say only “yes” or “no,” with an accompanying change of shape. The PyGamer has a 1.8-inch color display, with 192 kilobytes of RAM and 8 megabytes of flash file storage, enough to display snippets of video from Tron showing the bit’s “yes” and “no” responses. The PyGamer’s SAMD51 processor normally runs at 120 megahertz, which I overclocked to 200 MHz for a performance boost. I connected an electret microphone breakout board to one of the PyGamer’s three JST ports.
Then I turned to the trickiest task: porting the TensorFlow Lite ARM code written by Warden and company into a library that any Arduino programmer can use (although not for every Arduino board! Even as a “lite” framework, the RAM requirements are far beyond the 2 kB of the Arduino Uno, for example).
I found the source code well written, so the biggest challenge became understanding how it handles incoming data. Data is not digested as a simple linear stream but in overlapping chunks. I also wanted to expose the code’s capabilities in a way that would be familiar to Arduino programmers and wouldn’t overwhelm them. I identified the functions most likely to be useful to programmers, so that data could be easily fed into a model from a sensor, such as a microphone, and the results outputted to the rest of the programmer’s code for them to handle as they wish. I then created an Arduino library incorporating these functions, which you can find on Adafruit’s Github repository.
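The overlapping-chunk handling is the main idea to internalize before feeding sensor data to a model. Here is a language-agnostic sketch in Python (the function name and parameters are mine for illustration, not part of the TensorFlow Lite library) of how a stream gets sliced into fixed-size windows that each share samples with the previous one:

```python
def overlapping_windows(samples, window_size, stride):
    """Slice a 1-D sample stream into fixed-size windows.

    When stride < window_size, consecutive windows overlap. This is how
    audio is typically fed to a keyword-spotting model: each inference
    sees part of the previous window, so a word straddling a chunk
    boundary isn't missed.
    """
    windows = []
    for start in range(0, len(samples) - window_size + 1, stride):
        windows.append(samples[start:start + window_size])
    return windows

stream = list(range(10))
print(overlapping_windows(stream, window_size=4, stride=2))
# Each window shares its last two samples with the next one.
```

In the real library the windowing runs incrementally over a live microphone feed rather than over a complete list, but the boundary-sharing behavior is the same.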
Putting it all together, I wrote a short program using the new library. Now when I press a button and speak into the PyGamer’s attached microphone, the appropriate Tron clip is triggered if I say “yes” or “no,” letting me walk around with my own animated sidekick.
Although this project is a limited (but fun) intro to machine learning, I hope it persuades more makers and engineers to combine AI with hardware explorations. Our next steps at Adafruit will be to make it easier to install different models and create new ones. With 200 kB of RAM, you could have a model capable of recognizing 10 to 20 words. But even more exciting than voice recognition is the prospect of using these cheap boards to gather data and run models built around very different kinds of signals. Can we use data from the PyGamer’s onboard accelerometer to learn how to distinguish the user’s movements in doing different tasks? Could we pool data and train a model to, say, recognize the sound of a failing servo or a switching power supply? What surprises could lie in store? The only way to find out is to try.
This article appears in the August 2019 print issue as “Making Machine Learning Arduino Compatible.”
Editor’s note: Limor Fried is a member of IEEE Spectrum’s editorial board.
Whether you already have some coding experience or are starting from scratch, the good news is that you don’t have to be chained to your desk in order to learn how to code.
What’s even better is that you don’t have to spend years mastering programming either. Thanks to numerous coding apps available for both Android and iOS devices, you can easily level up your coding skills even when you’re on the go.
In this post, we’ve rounded up the best coding apps that will help you learn to code like a pro.
Programming Hero is an Android-only app that offers personalized paths towards coding mastery. The emphasis of this app is on making coding fun and it does so through teaching you how to build your own game while learning how to code at the same time. The app also offers forums where you can chat with other app users and exchange knowledge and tips.
Easy Coder is an Android app that focuses on teaching Java programming exclusively. The app provides step-by-step interactive lessons, quizzes, and programming challenges that will teach you the basic programming concepts and prepare you to develop your skills even further. Lessons are short and concise so that you can consume them even on the go, and the app also comes with practical examples to clarify the programming concepts.