According to the Relevance Report 2020 from the USC Annenberg Center for Public Relations, half-truths and lies are more noteworthy (and acceptable) than ever.

Seventeen of the report’s 30 essays deal with or in deceit, aided and abetted by the advance of technology and social media in the post-truth age.

Erasing the line between paid and earned media

The rate at which influencer marketers are ignoring disclosure guidelines set by the FTC is alarming.

The FTC’s “.com Disclosures” guidance [PDF] is designed to help the public understand whether someone endorsing a product online was compensated.

But ignorance does not appear to be the cause of these violations.

“According to a study conducted by the influencer marketing agency Mediakix, only about 7% of endorsements on social media from the top 50 celebrity influencers comply with the FTC’s guidelines on appropriate disclosure verbiage,” writes Cathy Park, a second-year strategic public relations graduate student at USC, in the Relevance Report.

“Furthermore, Harvard Business Review reported that 28% of influencers were requested by the sponsoring brand to not disclose the partnership. It seems like the ability to deceive has somehow become tied to an influencer’s worth,” Park says.

More than one in four online influencers ignore their duty to disclose, in deliberate, profit-motivated acts of defiance.

Artificial intelligence and bias

According to Gartner Research, by 2023 one-third of all brand public relations disasters will result from data ethics failures. And the Relevance Report gives a concrete example.

With interest in artificial intelligence peaking, Burghardt Tenderich, Ph.D., associate director of the USC Annenberg Center for Public Relations, explains the problem of bias in developing human-guided, ethical machine learning.

“…AI algorithms can also lead to false conclusions. In the Facebook example, this is due to the common practice by social media companies to deploy technology that is half-baked, at best. At the core of an ethical examination of AI is the desire to understand how decisions are made and what the consequences are for society at large. For that reason, policy makers are calling for AI to be explainable and transparent so that citizens and businesses alike can develop trust in AI.”

– Burghardt Tenderich, Ph.D.

Speak no evil

The essay by a corporate spokesperson from Google says, “Each of our products is designed with an emphasis on privacy and security, including easy user interfaces and features like privacy Check-Ups, which allow people to control their data and keep their accounts safe and secure.”

But there is one stark omission: Project Dragonfly.

Project Dragonfly was a search engine prototype Google created to comply with China’s state censorship provisions. It also would have given the government a means of retrieving a user’s search history by looking up their phone number, essentially abandoning Google’s “don’t be evil” motto.

The project was leaked to the public by a Google employee and was confirmed to have been terminated in July during a Senate hearing. [Google has denied that it ever planned to launch the project.]

What’s perhaps most disheartening is that even the USC Annenberg School for Communication and Journalism (consistently ranked first in the QS World University Rankings) was unable to ferret out this post-truth omission in its own academic research, and that the search giant saw spinning the truth in academic research as fair game.

But then, if Donald Trump can get away with claiming there was never a drought in California and still get elected president, why shouldn’t corporations be able to ignore their blemishes, particularly when criticizing politicians and brands on social media carries the risk of public interrogation and verbal abuse?

Opinions expressed in this article are those of the guest author and not necessarily those of Marketing Land.



Public Type Works enables the creation of new open source fonts.

See those three fonts above? They are just the latest that talented type designers would like to make and give away. If enough people show interest by chipping in with a small financial contribution (comparable to a cup of coffee/tea/whatever), we can bring these fonts to life.

Public Type Works is a platform for individuals, informal groups, and foundries.

We love type and we love open source projects, so we created a space for designers to release fonts to the world while still putting food on the table.

Feel like you and your work belong here? Cool, let’s go.

The Process


Rather than designing a fully functioning font up front, type designers create mockups of what the finished font will look like. They then post it here, with an overview of what is to come, including language support, proposed styles, and a profile outlining their type design experience.


The designer decides how many people they need support from. If the final deliverable includes just one style, the goal might be a few hundred people. If it’s a variable font, the goal might be 1000 people. The public then can show their support by pledging a few dollars. This gauges interest in turning the draft into a finished typeface.


When the goal is met, the designer completes the font and delivers it here. Then (and only then), pledges are processed and the funds are distributed to the designer. The font is released with an Open Font License, so anyone can get the font for free and use it for commercial purposes. Supporters will have it delivered to their inboxes right away.

That’s the brief overview. Want some more juicy details?