BOSTON, MA — At our MarTech Conference this week, Amos Budde, VP of applied data science at Civis Analytics, discussed the importance of data credibility and the ways it powers marketing decision making.

Data is at the foundation of our digital marketing efforts, but it is often faulty – and many marketers don’t think twice about it. Budde offered key points for marketers to consider when applying metrics to a marketing strategy in order to avoid setbacks and erroneous outcomes.

Don’t trust people’s reasoning

Budde addressed the faultiness of human reasoning, stating that most people are apt to respond based on their current circumstances and immediate sentiment.

“If you’re asking someone a question, they’re filtering that question through who they are in that exact moment, using biases to rationalize current opinions based on how they’re feeling, what they recently learned, etc.,” Budde said.

For marketers working with survey data, pinning down a person's actual position on an issue is challenging, because human inclination is a moving target. Opinion research, Budde explained, is better suited to capturing how someone feels today – not how they felt last year. Likewise, it can gauge whether someone likes an ad – not whether the ad changed their mind or will prompt action.

Budde said it’s common for marketers to mistake popularity for persuasion. Marketers should study the nuances of persuasion directly, rather than treating popularity data as evidence that something was persuasive.

“We have found zero correlation between what people said they like, and what actually persuaded them,” Budde said.

Don’t trust your sample

Given the prevalence of opt-in panels and financial incentives, survey respondents are often far from representative of the general public. As an example, Budde described a 2016 survey about Netflix whose findings diverged sharply from the company’s actual numbers.

“We ran a web survey from a typical online panel provider and drew a weighted sample of US consumers 18 and older to evaluate Netflix’s brand,” Budde said. “We found that 70% of the country were Netflix users. Netflix reported only 40 million users at the time, in a population of over 300 million. Even when we controlled and weighted based on age, income, and education, we got outlandish results.”

To glean more reliable results from a sample, Budde recommends that marketers build controls against known benchmarks into their surveys – for example, adding demographic questions, or questions whose answers are already known.
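One way to operationalize such a control is to compare the sample's demographic mix against a trusted benchmark (such as census figures) and flag any group that deviates beyond a tolerance. The sketch below is purely illustrative – the function name, group labels, and all percentages are made up, not from Civis Analytics:

```python
# Hypothetical illustration: compare a survey sample's demographic mix
# against known benchmark shares and flag groups that deviate too far.
# All figures below are invented for demonstration purposes.

def benchmark_check(sample_shares, benchmark_shares, tolerance=0.05):
    """Return the groups whose sample share differs from the benchmark
    by more than `tolerance` (absolute difference), with the gap."""
    flags = {}
    for group, benchmark in benchmark_shares.items():
        observed = sample_shares.get(group, 0.0)
        if abs(observed - benchmark) > tolerance:
            flags[group] = round(observed - benchmark, 3)
    return flags

# Example: an opt-in panel that over-represents 18-34s and
# under-represents respondents 65 and older.
sample = {"18-34": 0.43, "35-64": 0.42, "65+": 0.15}
census = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

flags = benchmark_check(sample, census)
print(flags)  # groups whose share is off by more than 5 points
```

A marketer would then either reweight toward the benchmark or treat results from the flagged groups with extra caution.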

Get more data

Marketers are always seeking deeper analysis than the data allows for, Budde said. But if the data isn’t there, how can marketers dive deeper to extract the insights needed?

The answer? Get more data. Hypothetically, if data comprises about 10% of the costs of a quantitative research project, then doubling that research size will only increase total costs by 10%. Even in cases where data is more expensive, Budde said, there is a major relative benefit to having a larger sample size.
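Budde's arithmetic is easy to verify: if data collection is 10% of the budget, buying twice as much data adds only that same 10% to the total. A minimal sketch with hypothetical figures:

```python
# Hypothetical cost breakdown for a quantitative research project.
# If data collection is 10% of the budget, doubling the sample size
# raises the total cost by only 10%.
total_cost = 100_000        # illustrative project budget
data_share = 0.10           # data collection's portion of the budget
data_cost = total_cost * data_share

doubled_total = total_cost + data_cost   # pay for the data a second time
increase = (doubled_total - total_cost) / total_cost
print(f"{increase:.0%}")  # prints 10%
```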

Be always-on

In a rapidly changing, omnichannel digital landscape, being “always-on” means that data never goes stale, because fresh context is always available. Data automation, in particular, makes this possible.

Budde suggested, “The next time you are working with social science metrics, ask yourself (or your vendor): Can I get this refreshed on an automated basis at low cost?”

Rather than looking at past data to interpret how marketing efforts performed during a specific time period, an “always-on” approach, when enabled by marketing technology, can blend past and present data to measure current impact and external factors.
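In practice, an automated refresh can be as simple as re-pulling a metric on a schedule and timestamping each snapshot so past and present readings can be blended later. The sketch below is a generic illustration, not Civis Analytics' implementation; `fetch_metric()` and its return value are hypothetical stand-ins for a vendor API or warehouse query:

```python
# Minimal sketch of an "always-on" metric refresh loop.
# fetch_metric() is a hypothetical placeholder for a real data pull.
import time
from datetime import datetime, timezone

def fetch_metric():
    # Placeholder for a call to a survey vendor's API or data warehouse.
    return {"brand_favorability": 0.42}  # made-up value

def refresh_loop(interval_seconds=3600, max_runs=None):
    """Re-pull the metric on a fixed schedule, tagging each snapshot
    with a UTC timestamp so history accumulates automatically."""
    snapshots = []
    runs = 0
    while max_runs is None or runs < max_runs:
        snapshot = {"fetched_at": datetime.now(timezone.utc), **fetch_metric()}
        snapshots.append(snapshot)
        runs += 1
        if max_runs is not None and runs >= max_runs:
            break
        time.sleep(interval_seconds)
    return snapshots

# Demo: three immediate refreshes (interval set to 0 for the example).
history = refresh_loop(interval_seconds=0, max_runs=3)
print(len(history))  # 3 timestamped snapshots
```

In production the loop would be replaced by a scheduler (cron, Airflow, or similar) writing to a shared store, but the principle – continuous, timestamped collection – is the same.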
