The Memetic approach to managing Fake News

[Diagram: Fake News 2×2]

“Fake News” – that is, the passing off of lies, spurious facts or dodgy narratives as real news – has become an issue du jour since the election of Donald Trump, and there has been a lot of talk about how to reduce it. Opinions vary on how much it actually impacted the US election, but there is a valid worry that it will get worse. At DataSwarm we started to look at what it is, how it works and how we can control it. We have published some of this work over here.

What is Fake News?

When we looked at what was defined as “Fake News”, we quickly realised that “Fake News” is a continuum: there is no step transition from “everything is true” to “totally fake”. It tends to vary on a number of axes (see diagram above) – the facts or falsehoods used, the narrative chosen, selective omission (being “economical with the truth”), even the words chosen, which can have resonances of their own. Also, the further facts move from hard science, the harder they become to pin down. And of course people are quick to see “Fake” needles in largely true news they disagree with, yet will refuse to see the “Fake” logs in news they fundamentally agree with. In other words, what is Fake and what is not is a matter of degree, not tight definition, and so what is “unacceptably fake” will be a matter of judgement. The question will always be “where do we draw the line?”. Thus any filtering system needs adjustable toggles, and tools for examining levels of “Fakeness”.
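To make that concrete, here is a minimal sketch of what adjustable toggles might look like: per-axis weights plus a movable threshold. The axis names, weights and linear scoring are illustrative assumptions, not the DataSwarm implementation.

```python
# A minimal sketch of the "adjustable toggles" idea: fakeness is scored
# along several axes and the cut-off is a tunable judgement call, not a
# fixed rule. All names and numbers here are illustrative.
from dataclasses import dataclass

@dataclass
class FakenessToggles:
    factual_weight: float = 0.4     # weight for false/spurious facts
    narrative_weight: float = 0.35  # weight for a distorted narrative
    wording_weight: float = 0.25    # weight for loaded word choice
    threshold: float = 0.6          # "where we draw the line" -- adjustable

def fakeness_score(facts: float, narrative: float, wording: float,
                   t: FakenessToggles) -> float:
    """Combine per-axis scores (each 0.0-1.0) into one fakeness score."""
    return (t.factual_weight * facts
            + t.narrative_weight * narrative
            + t.wording_weight * wording)

toggles = FakenessToggles(threshold=0.5)   # move the line per use case
score = fakeness_score(facts=0.8, narrative=0.5, wording=0.4, t=toggles)
print(f"score={score:.2f}, flagged={score >= toggles.threshold}")
```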

Identifying Fake News – Current Approaches

The four main approaches being used right now are (i) fact checking, (ii) blacklisting known Fake News sites, (iii) whitelisting approved news sources, and (iv) searching for known fake stories.

The main problem with verifying “facts” is that they are very time-consuming to check. Organisations like Snopes need real people to do this. One can use part-time volunteers or Mechanical Turkers, but this has a cost and speed impact, and the volume of false information is far larger than they can cope with.

The main problem with blacklisting and whitelisting is keeping up with the changing fake-news ecosystem. Botkilling is within the competence of most good tracking systems, but it’s an arms race.

Boolean search is fine if you know what you are looking for, but is slow if you don’t.
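As a toy illustration of approaches (ii)–(iv): list lookups are fast but only as good as the lists behind them, and boolean search only finds what you already know to look for. The domains, patterns and function names below are placeholders, not real data or production code.

```python
# Placeholder lists -- in practice these need constant maintenance,
# which is exactly the scalability problem described above.
from urllib.parse import urlparse

BLACKLIST = {"totally-real-news.example"}    # known fake sources
WHITELIST = {"reuters.com", "apnews.com"}    # approved sources
KNOWN_FAKE_PATTERNS = ["miracle cure", "what they don't want you to know"]

def triage_by_source(url: str) -> str:
    """Approaches (ii) and (iii): fast list lookups on the source domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in BLACKLIST:
        return "fake"
    if domain in WHITELIST:
        return "trusted"
    return "unknown"   # the hard, and most common, case

def matches_known_fakes(text: str) -> bool:
    """Approach (iv): boolean search -- only finds what you already know."""
    text = text.lower()
    return any(p in text for p in KNOWN_FAKE_PATTERNS)

print(triage_by_source("https://www.reuters.com/world/"))    # trusted
print(matches_known_fakes("New miracle cure doctors hate"))  # True
```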

So the current systems are not scalable as they stand, and need automated help. However, Artificial Intelligence is just not there yet, so fact checking is for some time going to be human-centred, with increasing help from algorithms, primarily to:

  • reduce the total workload by finding the obviously fake and obviously genuine stories
  • alert humans to emerging stories that may be fake and need to be checked (see the triage sketch after this list)
  • help design ways to neutralise them
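As a minimal sketch of that division of labour – assuming a single per-story fakeness score, which is a simplification – the machine clears the obvious cases at both ends of the scale and queues the ambiguous middle for human checkers. The thresholds and names are hypothetical.

```python
def triage(fakeness: float, auto_fake: float = 0.9, auto_ok: float = 0.1) -> str:
    """Route a story by its fakeness score (0.0 = clearly genuine,
    1.0 = clearly fake); only the ambiguous middle costs human time."""
    if fakeness >= auto_fake:
        return "auto-flag"            # obviously fake: no human needed
    if fakeness <= auto_ok:
        return "auto-pass"            # obviously genuine: no human needed
    return "human-review-queue"       # emerging/ambiguous: alert a checker

for story_id, score in [("s1", 0.95), ("s2", 0.05), ("s3", 0.55)]:
    print(story_id, triage(score))
```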

Memetic Inoculation

There is some fascinating research emerging on memetic inoculation – how one can inoculate minds against Fake News by seeding media with diluted memes of the Fake News. The key is to be able to act fast, even pre-seeding via predictive analytics, and doing this requires being able to see the memes in formation. As our systems use memetic analysis (see below), they are good at finding and reverse engineering Fake News memes to design antidotes.

Our Approach to Fake News Management – Memetic Analysis

Our system is built around memetic analysis (an example is shown in this video), and this is proving to be quite effective for picking up Fake News, because “Fake News” producers are typically trying to memetically engineer their output to go as viral as possible – and thus Fake News carries a number of memetic markers one can test for. This simplifies and speeds up the process.
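To make the idea of memetic markers concrete, here is a hedged sketch of marker-based scoring using the four axes named below (Source, Transmission, Narrative, Emotion). The weights and example signals are illustrative assumptions; the real work of extracting each signal is only hinted at in the comments.

```python
# Assumed weights for the four memetic marker axes -- illustrative only.
MARKER_WEIGHTS = {"source": 0.3, "transmission": 0.3,
                  "narrative": 0.2, "emotion": 0.2}

def memetic_score(markers: dict) -> float:
    """markers maps each axis name to a 0.0-1.0 signal strength."""
    return sum(MARKER_WEIGHTS[k] * markers.get(k, 0.0) for k in MARKER_WEIGHTS)

example = {"source": 0.9,        # e.g. low-reputation or newly created outlet
           "transmission": 0.8,  # e.g. bursty, bot-like sharing pattern
           "narrative": 0.6,     # e.g. matches a known conspiracy framing
           "emotion": 0.7}       # e.g. outrage-heavy wording
print(f"likelihood of fake: {memetic_score(example):.2f}")  # 0.77
```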

Our DataSwarm Analytic Engine has the following capabilities:

  • Track: finds and watches for the formation of Fake News in specific dataswarms. It scans for the memetic characteristics above – Source, Transmission, Narrative and Emotion – and scores each meme for its likelihood of being “Fake”.
  • Alert: flagged memes can then be prioritised and sent to human operators to examine. The system is configured to raise alerts using a series of adjustable levels (a sketch of such levels follows this list). Ideally one would “train” it first on sample history and then start to tune it. It is usually good to have several levels of alert, as Fake News is not an absolute, and careful tuning prevents false positives, i.e. cutting out news that looks fake but is not.
  • Memetic Inoculation: the system provides the analysis of transmission channels and meme structure needed to design memetic inoculation.
  • Predict: by watching the dataswarms for breakout memes, and using a number of other analytical techniques, it is possible to predict likely Fake News early and gain more time to design inoculation strategies. The predictions won’t all be right, but they do give extra time to react.
  • Dynamic Learning: as the system continues to operate, it can dynamically adjust its scoring to increase accuracy, using the ever larger historic dataset and accuracy feedback – it gets smarter the longer it runs.
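As an illustration of the adjustable alert levels and the dynamic-learning feedback loop, here is a minimal sketch. The band boundaries, labels and retuning rule are assumptions for the sake of example, not the engine’s actual behaviour.

```python
ALERT_LEVELS = [                  # (lower bound, label) -- tune per ecosystem
    (0.85, "red: likely fake, review immediately"),
    (0.60, "amber: suspicious, queue for review"),
    (0.35, "watch: monitor for breakout"),
    (0.00, "green: no action"),
]

def alert_level(score: float) -> str:
    """Map a fakeness score to the highest band it clears."""
    for bound, label in ALERT_LEVELS:
        if score >= bound:
            return label
    return ALERT_LEVELS[-1][1]

def retune(levels, false_positive_rate: float, step: float = 0.02):
    """Crude stand-in for the dynamic-learning loop: nudge all bands up
    if accuracy feedback says too much genuine news is being flagged."""
    if false_positive_rate > 0.10:
        return [(min(bound + step, 1.0), label) for bound, label in levels]
    return levels

print(alert_level(0.7))           # amber: suspicious, queue for review
ALERT_LEVELS = retune(ALERT_LEVELS, false_positive_rate=0.15)
```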

To be clear, the system needs to work as an aid to people – it is best used to track, alert on and predict Fake News, to save effort and give early warning. It also needs to be tuned to the ecosystem being tracked. Over time it will get smarter through dynamic machine learning, but no algorithmic or AI system today (nor for quite some time) will outperform humans in such a fuzzy ecosystem.

Would you like to know more about the DataSwarm approach? If so contact us: contact@dataswarm.tech

To find out more about us click here: DataSwarm