“Fake News” – that is, the passing off of lies, spurious facts or dodgy narratives as real news – has become an issue du jour since the election of Donald Trump (which we called correctly), and there has been a lot of discussion about how to reduce it. Opinions vary on how much it actually impacted the US election, but there are valid worries that it will get worse. At DataSwarm we started to look at what it is, how it works and how it can be managed.
What is Fake News?
When we looked at what was defined as “Fake News”, we quickly realised that:
- “Fake News” is a continuum, i.e. there is no step transition from “Everything is True” to “Totally Fake News”. It tends to vary on a number of axes – the facts or falsehoods used, the narrative chosen, the omission of news (i.e. being “economical with the truth”), and even the words chosen, which can carry resonances of their own.
- While it’s (fairly) easy to tell True from Fake news in the hard sciences and from empirically ascertainable data, news that deals with softer subjects, statistical studies, predictive models etc means dealing with fuzzier facts, so “What is True” becomes harder to be certain of. Lies, damned lies, statistics and “dodgy dossiers” are all hard at work here.
- By the time one moves to beliefs – religion, politics etc – what is true and what is fake is near-impossible to define as there are no agreed reference points and it becomes a matter of view.
- There is actually not a lot of purely “True News” out there. Most media organisations are biased in one way or another. Often this is overt – you know where they are coming from – but often it is not. Within a single media organisation there is also often a variance in true/fake, for example between the News Reporting and the Op/Ed pieces.
- Also worryingly, people seem very quick to call “Fake News” on something they disagree with (even if it is largely true), but are very unwilling to call “Fake News” on news they do like.
In other words, what is Fake and what is not is a matter of degree, not tight definition, and thus what is “unacceptably fake” will be a matter of judgement. The question will always be “where do we draw the line?”. Thus any filtering system needs toggles that can be adjusted.
The Neuroscience of Fake News – Lizard Brains, Confirmation Bias and Bubble Living
(This is a short blog post, the field is complex and still evolving so apologies if this summary feels a bit like pop-psychology. The aim is more to note what is being found to be true so far)
We are all suckers for Fake News, for a number of reasons that are “hard wired” into us. Marketers have known this for some time (arguably, Fake News is just another application of marketing technology) and have developed multiple tricks to worm past our analytical, “Conscious Brain” (the NeoCortex) to get to our more primitive Limbic brain (often colloquially called our “Lizard brain”), which reacts emotionally and intuitively to events and runs our “autopilot” systems (this is the system that can auto-drive you home). In short, the Lizard brain works quite fast, is fairly low energy to operate and is “always on”, but is driven by instinctive responses. The Conscious brain is logical and analytical, but this is a slow process and quite high energy to operate. This makes us reluctant to use it unless necessary, and it’s also easy to tire it out. We thus tend to use it sparingly, divert anything familiar to the Limbic system, and avoid continuing to think about things once they feel familiar. In short, with a bit of work one can design News stories that sideslip the conscious brain and go straight to the Lizard brain, with messages that play to all its fears, emotions and biases.
Marketers have developed a number of tricks to get past the Conscious brain, usually with messages which appeal to the strong survival drivers of the Lizard brain and kick off the strong emotions they produce, i.e.:
- Fear – drives resistance to change, worry about scarcity, urgency
- Pain – “pain points” are everyday parlance for marketers, and this makes us very concerned about loss
- Ego – emotions driven by the ego are very powerful. Also “what’s in it for me” is a big driver
There is also emerging neuroscience research suggesting that our political views are heavily driven by how our brains are wired. Our makeup even drives whether we have Conservative or Liberal tendencies, so our politics are far more hard wired than we think. “Fake News” is designed to play to these very strong instincts, and it also leads to the problem of Confirmation Bias.
Confirmation Bias is the tendency to search for, interpret, favor, and recall information in a way that confirms one’s preexisting beliefs or hypotheses. People tend to display this bias when they gather or remember information selectively, or when they interpret it. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position.
If you study the methods of the Fake News factories they also do testing of messages to see what resonates best in any particular situation, like any good marketer, and then double down on those.
In addition, it’s becoming increasingly clear that social media exacerbates our tendency to orient towards news we like, to live in our own Information Bubble. In traditional media, despite most media organisations being biased, and even though we tended to select the media with a point of view we liked, there was still a range of stories, and codes of practice that drove some level of balanced treatment. This is not true for “informal” media such as blogs, eZines etc. Social media, especially Facebook, further specialises in honing your news feed down to what you engage with most. So your level of agreement and comfort rises as the feed increasingly maps onto your views, it fills with stories that reflect all your biases, and less and less dissent enters your world. Also, the brain is plastic – it adapts itself to its environment – which creates a feedback loop as your brain cycles itself further into its bubble. Our hypothesis is that this dynamic makes it harder and harder for people to tell fact from fiction the longer they exist in these bubbles.
Identifying Fake News – Current Approaches
The four main approaches in use right now are (i) fact checking, (ii) blacklisting known Fake News sites, (iii) whitelisting approved news sources, and (iv) searching social media to spot fake stories.
The main problem with verifying “Facts” is that they are very time consuming to check. Organisations like Snopes need real people to do this. One can use part time volunteers, or Mechanical Turkers, but this has a cost and speed impact, and the volume of False information is far larger than they can cope with.
Most search today is Boolean, so you pretty much have to know what you are searching for; thus this tends to be reactive.
So the current systems are not scalable as-is, and they need automated help. However, Artificial Intelligence is just not there yet, so fact checking is for some time yet going to be human-centred, with increasing help from algorithms, primarily to:
- reduce the total workload by finding the obvious fake and non fake stories
- alert humans to emerging stories that may be fake and need to be checked
The problem with this approach is that it is slow and labour intensive.
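As a concrete illustration of the blacklist/whitelist triage described above, here is a minimal sketch in Python. The domain lists, field names and the `triage` function are illustrative assumptions, not a real dataset or anyone's production system; the point is simply that listed sources can be decided instantly, while everything else still falls through to slow human fact checking.

```python
# Toy triage of stories by source domain. The ".example" domains are
# hypothetical placeholders; the whitelist entries are just well-known
# wire services used for illustration.

BLACKLIST = {"totally-real-news.example", "daily-clickbait.example"}
WHITELIST = {"reuters.com", "apnews.com"}

def triage(story: dict) -> str:
    """Return 'fake', 'ok', or 'needs_human_check' for one story."""
    domain = story["source_domain"].lower()
    if domain in BLACKLIST:
        return "fake"            # known Fake News site
    if domain in WHITELIST:
        return "ok"              # approved source
    return "needs_human_check"   # everything else goes to fact checkers

stories = [
    {"source_domain": "reuters.com"},
    {"source_domain": "totally-real-news.example"},
    {"source_domain": "some-new-blog.example"},
]
print([triage(s) for s in stories])  # → ['ok', 'fake', 'needs_human_check']
```

Note how the unlisted domain ends up in the human queue: this is exactly why the approach does not scale on its own.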
The Memetics of Fake News
“Meme”, or “Mental Gene”, was a term originally coined by Richard Dawkins in his book The Selfish Gene. In essence he argued that humans do not evolve only genetically but also culturally: cultural ideas take root and adapt in people’s minds, influencing their behaviour as much as (if not more than, in today’s societies) physical evolution. In the original conception he postulated that memes evolved via natural selection, continually jostling for “headspace” in a population: the ones less well adapted died out and those best adapted rose.
The Internet Meme, he believes, is a little bit different – they are deliberately constructed (Memetically Engineered) to be as attractive as possible, to “go viral” – like Genetically Modified seeds, except for concepts instead. In our opinion, this sort of “memetic modification” has been with humanity since we invented lies, and every new technology has been used to transmit such Fake News almost as soon as it is invented – Sealing Wax, Broadside ballads, the Dreyfus Telegram, Orson Welles’ “War of the Worlds” to give some past examples – but the always on, everyone connected online world has provided a delivery system far, far more powerful than anything seen before.
Our approach to media analytics uses memetic analysis, and this is proving to be quite effective for picking up Fake News, because “Fake News” producers are typically trying to memetically engineer their output to go as viral as possible – and thus it has a number of markers which a memetic analysis system can test for in trying to gauge whether a piece of news is likely to be Fake or not.
Identifying Fake News – a Memetic approach
Using memetic analysis it is possible to design algorithms to analyse for Fake News; this speeds up its detection and reduces labour. There are four main areas for Fake News detection:
Source:
Fake News is likely to originate from:
- A Fake News site, normally set up to look like a “genuine” news site. These can be Blacklisted, but it is also important to actively and dynamically search for them.
- Bots or “New” News sites that appear out of nowhere, are often used to propagate news from the above sites.
- The Mainstream Media is increasingly part of the problem: it more and more runs news that, by many of the definitions above, would count as “Fake”. Some argue that the current “clickbait” online business model pushes it towards twisting stories to sideslip regulations; others say it’s just competition, and lurid sells. Whatever the reason, memetic analysis makes it possible to look at news story by story relatively fast and analyse it for fake content.
- It’s also important to know there are Fake “Fake News” sites, typically satirical publications like Private Eye or The Onion, that would register as Fake News sites in any algorithmic approach – these are best Whitelisted.
Transmission:
- Fake News is often used in tandem with a Bot network and promulgated by these bots, and bot-driven transmission has an identifiable signature pattern. This can be seen best on Twitter, and it often hits there early, so it is useful as an early warning.
- “Viral” movement – Fake News is designed to be emotive and be rapidly transmitted – to “go viral” – this too has an identifiable pattern
- Speed of information movement – Fake News appears and grows rapidly, and this too is a marker.
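The “speed of movement” marker above can be sketched very simply: a story whose share rate is far above an organic baseline gets flagged for review. The baseline rate, the multiple, and the sample data below are all illustrative assumptions, not measured values.

```python
# Hypothetical viral-speed detector: timestamps are share times in hours
# since the story first appeared. Thresholds are made-up for illustration.

def shares_per_hour(timestamps):
    """Average share rate from a sorted list of share times (in hours)."""
    if len(timestamps) < 2:
        return 0.0
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span if span > 0 else float("inf")

def looks_viral(timestamps, baseline_rate=5.0, multiple=10.0):
    """Flag a story spreading more than 10x faster than the baseline."""
    return shares_per_hour(timestamps) > baseline_rate * multiple

organic = [0.0, 2.0, 5.0, 9.0, 14.0]        # slow, steady sharing
burst   = [0.0, 0.01, 0.02, 0.03, 0.05]     # bot-like burst of shares
print(looks_viral(organic), looks_viral(burst))  # → False True
```

A real system would of course look at richer signatures (account age, retweet graph shape, coordination between accounts), but even this crude rate check separates a bot burst from organic sharing.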
Narrative:
A Fake News meme comprises a number of submemes, deliberately inserted to make it as attractive as possible for reception and transmission. It will differ in a number of ways from “True” News, in that there will be:
- Stand-out Fake submemes that won’t appear in other stories, and/or
- Links between True news memes and other memes that differ in structure from True news patterns
- Frequent use of an image, as this is an effective way to talk straight to our basic instincts – our Lizard Brains
Emotion:
As discussed above, Fake News is designed to dodge the “rational brain” and to get an emotional reaction from the “lizard brain” – but to do this it needs to have certain predictable properties. Some key indicators in the narrative that can suggest that it is Fake news include:
- Content which identifies strongly with the reader and demands emotional involvement, often to extremes.
- Use of charged and emotive words and tropes – head and headline “grabbers” to excite the emotional brain
- Images – often sensational, again a well known way of avoiding the conscious brain
- Repetition – the more we hear and see Fake News the more it starts to sound true, so look for reinforcement
Memetic Inoculation
There is some fascinating research emerging on Memetic Inoculation, i.e. how one can inoculate people against Fake News by seeding media with diluted memes of the Fake News. It is early days, but research is showing some success.
The key is to be able to act fast, even pre-seeding media via predictive analytics. To do this requires being able to see the data early and then reverse engineer it, which the DataSwarm approach does.
Our Approach – using Memetic Analysis to track and neutralise Fake News
Our DataSwarm Analytic Engine was built to do memetic analysis and can be configured for Fake News detection, including the following functions:
- Track: finds and watches for the formation of Fake News in specific dataswarms. Scans for the memetic characteristics above – Source, Transmission, Narrative and Emotion – and scores the meme for its likelihood of being “Fake”.
- Alert: scored memes can then be prioritised and sent to human operators to examine. The system is configured to set alerts using a series of adjustable levels; ideally one would “train” it first on sample history and then start to tune it. It is usually good to have several levels of alert, as Fake News is not an absolute, and it is important to prevent False Positives, i.e. cutting out news that looks fake but is not.
- Memetic Inoculation: the system provides the analysis of transmission channels and meme structure to allow the design of memetic inoculation. As it is based on memetic analysis it is good at deconstructing Fake News memes.
- Predict: by watching the dataswarms for breakout memes and using a number of other analytical techniques, it is possible to predict likely Fake News early and get more time to design inoculation strategies. The predictions won’t all be right, but they do give extra time to react.
- Dynamic Learning: as the system continues to operate it can dynamically adjust its scoring to increase accuracy, using the increasingly large historic dataset and accuracy feedback; thus it gets smarter as it operates.
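The multi-level, adjustable alerting described above can be sketched as a simple score-bucketing step. The level names and cut-offs below are illustrative assumptions, not DataSwarm's actual configuration; the point is that the fake/not-fake line is a tunable parameter, not a fixed rule.

```python
# Map a fake-likelihood score in [0, 1] to one of several alert levels.
# Cut-offs are adjustable "toggles", tuned to trade off false positives.

def alert_level(score: float, cutoffs=(0.3, 0.6, 0.85)):
    """Bucket a score into 'none', 'watch', 'medium' or 'high'."""
    low, medium, high = cutoffs
    if score >= high:
        return "high"     # send straight to human reviewers
    if score >= medium:
        return "medium"   # queue for checking
    if score >= low:
        return "watch"    # keep tracking the meme
    return "none"

print([alert_level(s) for s in (0.1, 0.5, 0.9)])  # → ['none', 'watch', 'high']
```

Loosening or tightening `cutoffs` is how an operator moves the “where do we draw the line” judgement without retraining anything.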
To be clear, the system needs to work as an aid to people – it is best used to track, alert on and predict fake news, to save effort and give early warning. It can get smarter over time with dynamic machine learning, but no algorithmic or AI system today (and for quite some time) will outperform humans in such a fuzzy ecosystem. However, because our system was designed for memetic analysis, it is better at handling fake news memes.
Would you like to know more about the DataSwarm approach to Fake News? If so contact us: contact@dataswarm.tech
If you want to look at the DataSwarm website for more information about us, click here.