Influence with Integrity: An Answer to Weaponized Information


Part 1

When Edward Bernays rebranded propaganda as public relations, he made systematic influence more palatable, but not more sustainable. The industry Bernays created has been on a path to increasingly marginal relevance ever since systematic influence began losing ground to algorithmic influence, and the businesses formerly known as PR and advertising firms ceded the center of the Influence Matrix to the ominous-sounding FAANGs (Facebook, Apple, Amazon, Netflix, Google).

Despite the deep disruption of the traditional media ecology, the change in the prevailing methods of influence remains largely quantitative. Instead of better aligning the interests of organizations and stakeholders, the ascendancy of the FAANGs and other technologies of mass persuasion has mainly automated and supercharged the subversion of human attention. As a result, the effectiveness of PR-as-conceived-by-Bernays has spiked sharply ever since the FAANGs emerged as the new captains of humanity’s collective consciousness.

In the meantime, the new markets of the Attention Economy remain starved for integrity — not another rebrand of corrupted or weaponized information, but, at the very least, a new consensus about governance and accounting standards for the trade in this new scarce economic resource. Such standards would help reduce the pollution of our media ecology and provide clear guidance to organizational leaders and professional communicators. Without these stabilizing forces, the Attention Economy will remain: 1) a kleptocratic AI for the high-frequency manipulation of human attention, often in the service of unholy goals, and 2) highly vulnerable to the viral impact of deep fakes and other tools of informational warfare.

Since I started writing and speaking about PR’s sustainability roadmap, I’ve heard lots of lip service, but I’ve also seen a growing cadre of organizations take the ethics of influence seriously. Many other industries embraced sustainability sooner and more wholeheartedly. In financial markets, for example, we’ve seen environmental, social and governance (ESG) risk metrics move from the fringes of the investment community to the normative center.

Despite institutional inertia, the shift to sustainability in capital markets happened when the pain of the status quo started to exceed the pain of change. After several watershed moments (particularly circa 2001 and 2008), traditional asset managers seemed far less inclined to view the “value of values” as a hollow slogan; increasingly, they found it to be a way to mitigate the impact of mispriced risk (e.g., accounting fraud).

The influence industry — whether it is dominated by FAANGs, flacks or mad men — will not improve its sustainability either through lip service or through hortatory preaching. But improvements can happen, the way they did in the investment community, through the development and deployment of new taxonomies of “red flags” that reveal misalignments of organizational beliefs, communications and actions.
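To make that concrete, here is a minimal sketch, in Python, of how such a taxonomy of red flags might be represented and scored. The flag names, weights, and the audit structure below are hypothetical illustrations of the idea, not an established standard or anyone’s actual methodology.

```python
from dataclasses import dataclass, field


# Hypothetical red-flag taxonomy: each flag names one kind of misalignment
# between an organization's stated beliefs, its communications, and its actions.
@dataclass
class RedFlag:
    name: str
    description: str
    weight: float  # illustrative severity on a 0-1 scale


@dataclass
class IntegrityAudit:
    flags: list = field(default_factory=list)

    def raised(self, evidence):
        """Return the flags for which evidence of misalignment exists."""
        return [f for f in self.flags if evidence.get(f.name, False)]

    def misalignment_score(self, evidence):
        """Sum the weights of raised flags; higher suggests less sustainable influence."""
        return sum(f.weight for f in self.raised(evidence))


# An entirely illustrative taxonomy; the categories echo tactics named
# elsewhere in this piece (astroturfing, pseudo-events, etc.).
audit = IntegrityAudit(flags=[
    RedFlag("astroturfing", "paid advocacy presented as grassroots support", 0.9),
    RedFlag("pseudo_event", "manufactured occasion staged solely for coverage", 0.5),
    RedFlag("selective_disclosure", "material facts omitted from communications", 0.6),
])

evidence = {"astroturfing": True, "selective_disclosure": True}
print([f.name for f in audit.raised(evidence)])  # ['astroturfing', 'selective_disclosure']
print(audit.misalignment_score(evidence))        # 1.5
```

In the investment analogy, the score plays the role of an ESG-style risk metric: it does not prove wrongdoing, but it flags where beliefs, communications and actions may be drifting apart.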

Of course, commitment to sustainability is no panacea. For example, in my research on deep fakes over the weekend, I was struck by the strong consensus among experts in AI and disinformation that we — as a society, as organizations and as individuals — are essentially defenseless against the rise of deep fakes because technology has made it cheaper than ever to be bad. Lately, much of this polemic has focused on the vulnerabilities of the political system, but all organizations and individuals are vulnerable to deep fake attacks, false reviews, fake blogs, bogus websites, pseudo-events, astroturfing, and Trojan horses.

In a media ecology so hopelessly polluted with corrupted information, it seems naïve to speak of integrity and sustainability; indeed, the Attention Economy may be thought of as a modern-day Sodom and Gomorrah, arguably beyond redemption. However the Attention Economy evolves, it will continue to amaze me that it doesn’t rank higher on lists of trending topics, an oddity eloquently explained in David Foster Wallace’s famous This Is Water speech.


Part 2

Below are highlights from my readings on the subject over the weekend.

Deepfakes are coming. Is Big Tech ready?

“The opportunity for malicious liars is going to grow by leaps and bounds,” said Bobby Chesney, professor and associate dean of the University of Texas School of Law who has been closely researching deepfakes.

Twitter, YouTube, and Reddit also are natural targets for deepfakes, and you can expect to see fringe platforms and porn websites flooded with them. Yet asked by CNNMoney just what they’re doing to prepare for this looming problem, none of the major social media platforms would discuss it in detail.

Deepfakes could pose a greater threat than the fake news and Photoshopped memes that littered the 2016 presidential election because they can be hard to spot and because people are — for now — inclined to believe that video is real.

Aviv Ovadya, chief technologist at the Center for Social Media Responsibility, said social media companies are “still at the early stages of addressing 2016-era misinformation,” and “it’s very likely there won’t be any real infrastructure in place” to combat deepfakes any time soon.

Outnumbered: From Facebook and Google to Fake News and Filter-bubbles by David Sumpter – review

It’s easy, when faced with the numbers at hand, to succumb to a kind of vertigo: Facebook has two billion users, who make tens of millions of posts every hour. From this data, along with millions more photos, likes and relationships, Facebook builds models of all of us that extend in hundreds of dimensions – the puny human mind, at best, is capable of visualising four.

The same no-nonsense approach is deployed to debunk lazy assertions that we are all fooled by fake news stories, or trapped within filter bubbles that mindlessly reassert our prejudices. We are, apparently, both smarter and more aware than that.

Through constant reference to other systems, Sumpter demolishes any idea that our current social media woes are the fault of, or even originate with, Facebook and its ilk.

The algorithms work in one context, think the good folk at Facebook and Google, so they will also work in another – until the platform is seized by demagogues, or used to censor and surveil entire nations. The suspicion arises that perhaps our algorithmic crisis isn’t a maths problem at all.

As one researcher notes, regarding the biases in language translation systems, “there is no real way of fixing the problems caused by unsupervised learning without fixing racism and sexism first”.
