Deepfakes and distortions, scare tactics and suppression: NYU report examines social media and the 2020 US elections

A new report from the NYU Stern Center for Business and Human Rights has explored the various forms of disinformation which could plague the 2020 US presidential election – and how social media companies need to respond.

Allegations around Russian interference in the 2016 elections continue to be prevalent. As the Mueller report found – and as reported by CNN – Russian hackers compromised the local election systems of two Florida counties in 2016. The key takeaway from Mueller’s appearances on Capitol Hill in July, CNN noted, was that the interference is ongoing. “They’re doing it as we sit here,” said Mueller.

For 2020, there are eight key predictions the NYU report makes:

  • Deepfake videos will be deployed to portray candidates saying and doing things they never said or did
  • Instagram, rather than Facebook, will be the vehicle of choice for those who wish to spread meme-based disinformation
  • Digital voter suppression will again be one of the main goals of partisan disinformation
  • For-profit firms based in the US and abroad will be hired to generate disinformation
  • Domestic disinformation will prove more prevalent than false content from foreign sources
  • WhatsApp may be misused to provide a vector for false content
  • Iran and China may join Russia as sources of disinformation
  • Unwitting Americans could be manipulated into participating in real-world rallies and protests

Perhaps the most interesting prediction is that Instagram, rather than Facebook, poses the greater danger when it comes to spreading misinformation. Instagram and WhatsApp – both owned by Facebook – are cited in the report.

The NYU research cites a story from The Verge in March which warned that anti-vaccine posts and general misinformation were ‘rotting’ the photo-sharing platform. What’s more, the Senate Intelligence Committee report found that Instagram generated more than twice as much engagement with Russian disinformation in 2016 as Facebook did.

Instagram has been taking steps to improve its experience. Regular readers of this publication will note the company’s initiative in ‘hiding’ likes, the latest rollout of which arrived at almost the same time as Kahlua launched its ‘zero likes given’ campaign. The reasoning, as a Facebook spokesperson said at the time, was to ‘remove pressure’ and allow users to share ‘authentically and comfortably’ on the site.

At the other end of the scale, likes can also be gamed, with bots employed to inflate the numbers. Yet while the report notes the hidden-likes initiative and other measures around reporting false information, it argues Instagram can still do more.

“The problem isn’t a lack of technology. It appears to be a lack of a clear strategy for addressing the serious problems inherent in Instagram’s operating model,” the report notes. “Instagram, to be sure, has made progress in certain areas. But these steps haven’t cured the platform’s burgeoning reputation as a vehicle for false content.”

Deepfakes look set to be an almost inevitable part of the campaigns. The highest-profile targets of previous attacks, as sister publication AI News has pointed out, include Donald Trump and Mark Zuckerberg. The report notes that legislation has been introduced in the US Congress to criminally punish those who make deepfakes without disclosing the modifications, but advocates an alternative approach.

“A better approach, and one that avoids the danger of overreaching government censorship, would be for the social media platforms to improve their AI-screening technology, enhance human review, and remove deepfakes before they can do much damage,” the report explains.

This is naturally not without its problems: around 500 million tweets are sent each day, for instance, while 300 hours of video are uploaded to YouTube every minute. Yet current initiatives are praised by the report; Google has been assisting outside research groups working on deepfake detectors, while Pinterest has ‘blazed a path’ in terms of removing harmful content.

The last recommendation the report gives to social media companies is arguably the widest-reaching. Social media literacy – covering the platforms’ dangers as well as their benefits – needs to be drummed in as often as possible.

“The platforms ought to make digital literacy lessons a permanent and prominently available feature on each of their sites,” the report explains. “Doubtless the companies would prefer not to remind users every time they log in that disinformation casts a shadow over social media. But that’s the reality.

“The more often users are reminded of this fact – and are taught how to distinguish real from fake – the less influence false content will wield,” the report adds. “Concise, incisive instruction should be just one click away for all users of all of the platforms, all of the time.”

You can read the full report here (pdf, no opt-in).


Author

  • James Bourne

    James has a passion for how technologies influence business and has several Mobile World Congress events under his belt. James has interviewed a variety of leading figures in his career, from former Mafia boss Michael Franzese, to Steve Wozniak, and Jean Michel Jarre. James can be found tweeting at @James_T_Bourne.
