How is AxiaOrigin behind fake news?

6th October 2020

The key to a good lie is to convince you that there is no deception. There are forces constantly vying for your attention, in the news you read and the social media you consume. How can we be sure whether these forces have good intentions or sinister ones? And how are we, as creators and consumers of media, adding fuel to this fire?

Less than one month away from a US Presidential Election, the campaign trail is not the only battleground…

To find out more, listen to our latest podcast – part of a series on disinformation, the market failures that have led us here, and what urgently needs to be done to correct them.


A transcript of our podcast is included below:

The key to a good lie is to convince you that there is no deception.

When deception is hidden within a hazy blend of truth and sincere belief, it can become impossible to spot. In fact, the lie can become attractive enough that you willingly embrace it, and share it in proud defence of your worldview. In our age of social media, the deception is repeated and retweeted thousands of times, becoming ever more credible in the minds of the people it touches. All this, without the censorship or verification of a single editorial mind. Within a matter of hours, this act of deception has become Truth; “fake news” has become legitimised in the consciousness of millions of citizens. As we become prey to these sinister campaigns, bad actors continue to succeed in promoting tribalism and sowing discord in our societies.

And this attack on our hearts and minds is happening every single day.

What does “fake news” mean anyway?

We can break down “fake news” in a variety of ways – two of the most prevalent terms used to distinguish between different types of false information are “misinformation” and “disinformation”. The key distinction between them is the degree of intent behind spreading the content. Misinformation is the spread of incorrect or misleading information. Whether or not there was any intent to deceive, misinformation leaves people ill-informed.

Disinformation is somewhat more sinister. It is false information that is deliberately and often covertly spread in order to influence public opinion and obscure the truth. In the latter half of the 20th Century, disinformation techniques were developed and refined by intelligence agencies in their attempts to challenge people’s understandings and interpretations of the world. Disinformation is not a singular piece of information or a single narrative, but a campaign: a set of narratives deliberately conceived and spread to deceive for financial or ideological purposes.

Disinformation is the most pressing threat to our way of life

The most obvious sign of a disinformation campaign is the pressure placed on an audience to feel mistrust for other groups of people. This tension is most often created around the politics of identity. Disinformation campaigns succeed when they convince an audience that its values, race, or beliefs are under threat from people within its midst, from a dangerous and palpable “them”.

In Myanmar, we’ve seen the horrifying effects of such campaigns brought to their most tragic extremes. This is a land where Facebook is so widely used that many of the nation’s citizens confuse the platform with the internet itself. Over the five years leading up to 2018, Myanmar’s military personnel were the prime operatives behind a systematic Facebook campaign targeting the country’s mostly Muslim Rohingya minority. The military masqueraded as fans of pop stars and national heroes as they fomented hatred on Facebook. In doing so, they turned the social network into a tool for ethnic cleansing. Human rights groups blame the anti-Rohingya propaganda for inciting murders, rapes and the largest forced human migration in recent history.

But you don’t have to look as far afield as South East Asia to see the contaminating effects of disinformation on society. We’ve seen evidence of foreign interference in the 2016 Brexit referendum, from fake social media accounts and state-sponsored media outlets, as well as interference in the 2016 US Presidential Election. In these instances, successful attempts were made to attack the democratic process, serving to weaken a nation’s cultural and political institutions. And now, in the midst of a global health emergency which has claimed over one million lives, we see daily instances of hysteria-inducing disinformation. This is proliferated through a concoction of conspiracy theories: anti-vaxxers, anti-5G protestors and Illuminati cultists intertwine as agents of the current infodemic.

One of the most disconcerting peculiarities of disinformation campaigns is that they succeed even when they are uncovered. Whenever bots fail and disinformation is debunked, the social media user is left with a warped lesson: truth is shown to be elusive, and paranoia spreads over who can really be trusted. All of this breeds cynicism and heightens the stress across online fault lines.

The attention ‘marketplace’ – the role of supply and demand in how attention is captured and manipulated

On social media platforms, we have established an information system with almost no barriers to entry, wherein individual citizens, celebrities, corporations, and political and cultural institutions can freely exchange ideas and where, crucially, the most popular and engaging ideas get pushed to the top of your news feed by data-driven algorithms. We have created a fully monetised “attention marketplace” with no independent regulation.
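To make that mechanic concrete, here is a deliberately simplified sketch, in Python, of how engagement-weighted ranking can push the loudest content to the top of a feed. The post fields, weights and example posts are purely illustrative assumptions of ours; no platform publishes its actual feed algorithm, and this does not represent any real one.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Assumed weights: shares and comments count for more than likes
    # because they propagate content further through the network.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most engaging posts rise to the top, regardless of accuracy.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Sober fact-check of a viral claim", likes=40, shares=2, comments=5),
        Post("Outrage-inducing false claim", likes=30, shares=50, comments=80),
    ]
    for post in rank_feed(feed):
        print(f"{engagement_score(post):>5}  {post.text}")

In this toy example the false but outrage-inducing post ranks first simply because it provokes more shares and comments – which is exactly the dynamic the “attention marketplace” rewards.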

So what exactly is AxiaOrigin’s role in all of this? This marketplace is a hyper-competitive and saturated environment where everyone is vying for your attention as a consumer. The reality is that every person and organisation is trying to maximise their potential for capturing your attention, and we at AxiaOrigin are no different. We employ deliberate techniques to try to capture your attention in a saturated market, and the title of this article, though ironic click-bait, is one of them. The ‘vying’ for attention is not the problem in itself; the problem is that the ‘market for attention’ is flawed as a whole.

Here’s the thing. We have always been “under attack” on the internet – and for many years we’ve accepted it as legitimate. It’s nothing new, and until very recently, it hasn’t felt like an attack at all. Entities of all kinds have been developing intelligent campaigns to change the way we think since before the digital age. This is exactly what advertising is – a profit-driven communications and marketing approach so widely accepted in our society that we hardly think twice when we see it.

Even beyond advertising, there is a spectrum of activities that content creators and media groups undertake to tell a compelling story or share an idea. People who are trying to raise awareness and engage successfully with online communities often use exactly the same behaviours as the bad actors who intend to maliciously skew conversations. There is nothing inherently wrong with this approach, and it can be done earnestly. The issues arise from the fact that the market in which we all operate is unregulated.

How can we get ahead of the threat?

Given that we have created this market without the necessary controls and regulations in place, and often without recognising it as a market complete with supply and demand dynamics, we need to start thinking about solutions to this pressing threat to our civilisation. Social media platforms such as Twitter, Facebook and Reddit were made for free and open discussion of different ideas. In their early days, the need to remove malicious automated or fake accounts was low, as disinformation techniques were less mature. Nowadays, Twitter automatically removes around 30 million malicious bot accounts every month.

We face a significant challenge: we have an information ecosystem which is relatively new, without gatekeepers and with complete democratisation of access to data, yet with a very human set of vulnerabilities and biases. We need to exercise caution here and avoid jumping to potentially counterproductive ‘solutions’ such as poorly thought-out regulation. We believe that the digital and tech community needs to cooperate and start a focused conversation. In this way we might co-create an effective and appropriate set of responses to a threat which has emerged and is doing damage to our society and way of life.

AxiaOrigin is actively working in this area, developing its own R&D to better understand and identify toxic and manipulative accounts and approaches across social media. In the coming weeks and months, we will be sharing more content in this area to help inform and educate those around us.

If this is an area in which you require support, or with which you would like to collaborate, please get in touch via our dedicated address, misinformation@axiaorigin.com. Alternatively, you can reach out to me directly at marios@axiaorigin.com. We feel strongly about this area, so if you have any interest in collaborating with us or sharing our thinking to create new ethical responses, do get in touch.
