Google’s systems cannot verify the accuracy of content; instead, they rely on signals that the company thinks align with “relevancy of topic and authority,” according to Danny Sullivan, Google’s search liaison, in a tweet posted from his personal account on September 9. The tweet attracted attention from SEOs and kicked off a conversation.

Here are the tweet and the question that prompted it.

It’s not a popularity contest. When Bill Slawski, director of SEO research for Go Fish Digital, cited Google’s own explanation of how search algorithms work, interpreting it to mean that popularity determines confidence scores for content, Sullivan replied, “No. It is not popularity.” He then elaborated that popularity would be too simple a signal and possibly inapplicable to new queries, which constitute 15% of Google’s daily search volume.

In pursuit of more authoritative search results: Some history. In an attempt to improve the quality of its results, Google announced Project Owl in April 2017; the project placed more emphasis on authoritative content and enabled users to provide feedback for autocomplete search suggestions and featured snippet answers.

In November 2017, Google also teamed up with The Trust Project to bring more transparency to news content and combat the distribution of misinformation. One of its first steps was to enable publishers to add up to eight “trust indicators” to disclose information such as who funds the news outlet, the outlet’s mission, the author’s expertise, the type of writing and so on, via structured data markup.

In September 2019, the company updated its Search Quality Rater Guidelines to emphasize vetting news sources as well as YMYL content and its creators. It also expanded the criteria under which a rater might apply the lowest ratings, including to content that may spread hate.

The reaction. Sparktoro founder Rand Fishkin disagreed with the basis for Sullivan’s explanation, countering that machines can assign levels of accuracy to content, citing Google’s “fact extractions ranging from calculator answers to filmographies to travel info.”

Judith Lewis, founder of DeCabbit Consultancy, highlighted the complexity of the issue, adding that machine learning does “enable a degree of assessment of the accuracy of anything not related to personal experience.” Lewis also suggested that Sullivan’s answer may be meant to give Google a bit of leeway on the matter.

Jenny Halasz, president of JLH Marketing, echoed a sentiment that may be shared by many SEOs when she tweeted, “YES, a thousand times YES! Thank you @dannysullivan. This is a myth that will not die.” Halasz also pointed out the irony that Google itself provides search results with content claiming that accuracy is a ranking factor.

Why we should care. Content accuracy is important for users, but, as Sullivan explained, it’s not a Google ranking factor. Topic relevance and authority — not to be confused with popularity, which may result from the two — are the signals Google’s systems rely on to rank content.

About The Author

George Nguyen is an Associate Editor at Third Door Media. His background is in content marketing, journalism, and storytelling.