The Real Problem of Disinformation

According to the UK Government, which has set up a dedicated Counter-Disinformation Unit, there is a ‘danger that hostile actors use online disinformation to undermine our democratic values and principles’. The US Department of Homeland Security has elevated disinformation to the level of a threat to critical infrastructure. Throughout the West, and led by the European Union, new legislation is being ushered in with sweeping provisions against online disinformation. Academic commentators have not been slow to pick up on the message: ‘the threat disinformation poses to healthy democratic practice’, argue Freelon and Wells, has made it ‘the defining political communication topic of our time’.

But there is a basic problem with that official view: the criteria for identifying alleged cases of disinformation are self-contradictory. Those leading the ‘fight’ against it are even aware of this, as seen in email exchanges between U.S. cybersecurity appointees (here at p.15), but this has not affected their determination to press on, even if it means violating citizens’ First Amendment rights.

This, I shall argue, is the real problem of disinformation: not simply that the term is applied in self-contradictory ways but that it is used as a supposed justification for suppressing any information or ideas that governments and the powerful groups that influence them find inconvenient.

To illuminate this problem, there follows an outline of some key points from my academic article ‘The Problem of Disinformation: A Critical Approach’ (published open access in Social Epistemology, 2024).

1

The term ‘disinformation’ is used in a variety of ways, and although it is taken to name a problem, there are contrasting views of the nature of that problem. So a second-order problem is the lack of agreement about what ‘the’ problem of disinformation actually is. Policies with far-reaching implications are being proposed on the basis of academic papers about ‘disinformation’ that incorporate conceptual confusion. This means the advice they offer should not be assumed to be sound.

Especially troubling are those papers that purport to contribute to combatting disinformation. For measures proposed to combat one of the problems flagged as disinformation can give rise to other problems as seen from the perspective of different framings. The effect could be damaging for society. A further consequence is that a sense of arbitrariness then permeates public awareness, encouraging the impression that the term is often used simply to discredit ideas that the user disapproves of. Regrettably, as we shall see, that impression is not mistaken.

The basic trouble is that although ‘experts in disinformation’ quite widely distinguish the category, in principle, from misinformation and malinformation, that tripartite analytical framing is not then consistently applied in practice. In theory, the term disinformation refers to information that is false and harmful, by contrast with misinformation (false but harmless) and malinformation (harmful but true); in practice, however, this distinction is not always observed. We find the term being applied to putative cases of ‘disinformation’ that are neither false nor harmful.
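To fix ideas, the in-principle framing can be rendered as a simple truth table over the two defining criteria. The following sketch is purely illustrative – mine, not drawn from the article or from any counter-disinformation tool – and it assumes, artificially, that judgements of falsity and harm have already been settled:

```python
# Illustrative sketch only: the tripartite framing as a truth table.
# In practice both judgements below are contestable, which is part of
# the problem this article identifies.

def tripartite_label(is_false: bool, is_harmful: bool) -> str:
    """Classify a communication under the in-principle framing."""
    if is_false and is_harmful:
        return "disinformation"   # false and harmful
    if is_false:
        return "misinformation"   # false but harmless
    if is_harmful:
        return "malinformation"   # harmful but true
    return "unproblematic"        # neither false nor harmful

# A putative case that is neither false nor harmful should never
# receive the 'disinformation' label under this framing:
assert tripartite_label(False, False) == "unproblematic"
```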

Studies directed at understanding how best to combat disinformation have come to predominate in the academic literature – a Google Scholar search for “combat disinformation” yields 1,520 entries, whereas “identify disinformation” gets just 388, and “analyse disinformation” a mere 38. This indicates a comparative lack of interest in how to identify particular cases of it reliably. Proponents of the combative approach offer advice on how to fight particular putative cases of ‘disinformation’ without ensuring they actually fit that description.

In contrast to the combative approach, a critical approach, as advocated here, recognizes the need to carry out three distinct investigative tasks to ascertain that there is a case of disinformation before attempting to formulate advice on what to do about it: an epistemic investigation determines whether a proposition communicated is false; a behavioural investigation determines whether the communication is coordinated; and a security investigation determines whether communication of the proposition is harmful.
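As a rough schematic of that procedure – again purely illustrative, with the data structure and function names invented here rather than taken from the article – the three findings can be treated as jointly necessary conditions:

```python
from dataclasses import dataclass

@dataclass
class Findings:
    # Outcomes of the three investigations; in practice each may be
    # slow, contested, or inconclusive, and none can simply be assumed.
    proposition_false: bool   # epistemic investigation
    coordinated: bool         # behavioural investigation
    harmful: bool             # security investigation

def warrants_disinformation_label(f: Findings) -> bool:
    # All three findings are jointly necessary before the label is
    # warranted and before any 'combative' advice could be formulated.
    return f.proposition_false and f.coordinated and f.harmful

# Example: a coordinated campaign judged harmful, but whose claims have
# not been shown false, does not qualify (it may be malinformation).
assert not warrants_disinformation_label(
    Findings(proposition_false=False, coordinated=True, harmful=True))
```

The contrasting framings discussed below each, in effect, privilege just one of these fields, which is why investigations based on them can disagree about whether a case of disinformation has arisen at all.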

Discussions in the academic literature generally assume disinformation to be a problem in just one, or sometimes two, of the three possible ways noted. In one section of the literature, the term is used of any information that is misleading – like mistaken facts – and is thus used interchangeably with the term ‘misinformation’. This framing of the problem allows clear focus on the criterion of propositional falsehood, and so has the benefit of being well-defined; but it misses what, from another perspective, crucially distinguishes dis-information from misinformation, namely, an intention to deceive.

From this second perspective, innocent mistakes would not count as disinformation. The distinct problem of disinformation would, rather, be encountered in instances of deceptive strategic communications such as may be manifest, for example, in ‘coordinated inauthentic behaviour’ in social media. These can operate without necessarily spreading false information but instead by selectively presenting and omitting truths.

A third view of what the problem of disinformation is focuses primarily not on erroneous information nor on coordinated intent to deceive but on how the circulation of certain ideas may have damaging effects on the fabric of society or its institutions. This concern provides much of the impetus for the current burgeoning of interest in – and funding for research into – the topic of disinformation. However, a problem associated with it is that if certain ideas can have harmful effects, this may be for reasons other than their being false. In fact, acknowledgement of this point has led to the coining of the term malinformation to refer to ideas that may be true and yet whose dissemination is said to have harmful effects. Those concerned about harmful ideas, however, in practice do not necessarily distinguish clearly cases of disinformation from malinformation, for issues of truth and falsity are not their focal concern.

Across the academic literature, then, as well as in public debate, the term ‘disinformation’ serves as something of an umbrella term, covering various forms of communication that may be false and/or selectively true and/or harmful, depending on which features are assumed to be defining of it. The nature of the problem it is taken to indicate will accordingly vary: as false information, it is an epistemic problem; as a strategy of deception, it is a problem of the unacceptable activities of coordinated agency; as disruptive ideas, it is a threat to the security, legitimacy or some other public good of a social or political order. Any one of these problems might be encountered in a given situation without either or both of the others necessarily being present. This means that different investigations based on the three distinct conceptions would not always agree with one another on whether, where or when a problem of disinformation had arisen.

2

Despite that array of mutually inconsistent conceptualisations, it is nevertheless in principle possible to offer a cogent characterisation of disinformation. This involves thinking back to the kind of case the term was used of before its recent popularisation. Originating in the world of counter-intelligence activities, a good illustration is provided by the World War 2 deception Operation Mincemeat (which was also the subject of a 2021 film). In 1943, the British fed misleading information to the Germans about Allied invasion plans by planting fake documents in a briefcase chained to a corpse in an officer’s uniform, to be found washed up on the Spanish coastline. The documents, as hoped, were passed to the Germans who, believing them genuine, diverted their defence troops in response, thus saving large numbers of Allied lives during the invasion. This successful deception involved an orchestrated effort to induce in an enemy a belief that was false and harmful to that enemy. It thus fits squarely within all three – epistemic, behavioural and security – framings.

The context of this illustration is, of course, markedly different from that of contemporary concerns – which serves to point up how current usage lacks comparable perspicuity. In the wartime case, the instance of disinformation is clearly identified: there is no doubt who was deceived by whom about what or how. Furthermore, the situation is one of a declared war. Contemporary discussions of disinformation often refer to an ‘information war’, yet this is not a declared war with a declared enemy, and citizens may not know exactly where their allegiances are supposed to lie or, indeed, why matters of truth and falsehood should be considered matters of allegiance rather than of epistemics. Those aiming to ‘combat’ disinformation may claim to be doing so for the benefit of fellow citizens, but if we citizens are not aware of being in a war with a defined enemy, this claim is perplexing, and we have reason to treat it with caution.

In contemporary circumstances, then, the concept of disinformation has undergone significant loosening. It could still be cogently applied in these different circumstances, but only in cases where all three of the conditions necessary for its unequivocal identification are met.

3

Under today’s circumstances, it is hard to find cases where the three conditions are all uncontroversially met. For while it may be relatively easy to identify cases of intentional deception – where not just straight falsehoods but also lies by omission and selective truths are deployed – the assessment of harm is a different kind of question. What counts as harm, and as serious enough to warrant attention, depends on what baselines of well-being are assumed. One would think that in a democracy such questions would be an important topic of public debate. But what is happening, instead, is that security concerns are being prioritised over concerns about truth.

As noted at the start of this article, governments, journalists and even academics are promoting the idea that disinformation is a pre-eminent threat of our time. The fact that people are less influenced by disinformation than is supposed by those sounding an alarm about it is rarely acknowledged; nor is the fact that such alleged threats as ‘Russian disinformation’, which are taken as paradigmatic by a good many authors (a Google Scholar search of academic literature since 2016 for “Russian disinformation” yields 3,460 items), have negligible effects – even when they are not straightforwardly fictitious. The warnings about the dangers of disinformation do, however, have an influence on the public. And this influence is sought as a matter of policy by Western states, including through their security services, which have become centrally involved in framing disinformation as a security threat.

For instance, in Britain, the respective heads of MI5, MI6, GCHQ and the British Army’s 77th Brigade have all explicitly declared themselves participants in an ‘information war’. In the U.S., the Department of Homeland Security – which was originally created to coordinate the War on Terror – now prominently features a Cybersecurity and Infrastructure Security Agency (CISA), whose director, Jen Easterly, maintains that ‘the most critical infrastructure is our cognitive infrastructure’. She set up a dedicated subcommittee to advise on Misinformation, Disinformation and Malinformation (MDM); it was chaired by University of Washington academic Kate Starbird and included Stanford University’s Alex Stamos.

It has since come to light that this ‘advice’ was not only grounded in erroneous analysis but was applied in such a way as to undermine the fundamental constitutional guarantees protecting free speech. One notorious instance of this is the Election Integrity Partnership (EIP), which was set up by Starbird and Stamos ahead of the 2020 presidential election, at the instigation of CISA, supposedly ‘to defend our elections against those who seek to undermine them by exploiting weaknesses in the online information environment’. It focused particularly on unconfirmed information circulating about election irregularities, since these were regarded as a threat to the institutions of democracy. Student volunteers from the University of Washington and Stanford University were tasked with flagging social media posts they deemed disinformation so that the platforms might take action – such as deamplifying or suppressing them. According to EIP director Stamos, the goal was ‘if we’re able to find disinformation, that we’ll be able to report it quickly and then collaborate with them on taking it down’ (Hines v. Stamos, para. 40). The posts in question conspicuously included conversations amongst US citizens. This surveillance and censorship of domestic actors – in breach of US First Amendment protections – marked a shift away from the original understanding of disinformation as a security threat presented by hostile foreign actors.

A further shift was particularly noticeable with EIP’s successor, the Virality Project (VP), whose core activity was to flag for platform action posts that called into question the safety, efficacy or necessity of the COVID-19 vaccines. Like EIP, this monitored domestic communications (and has similarly been impugned in US courts), but whereas EIP still operated with a tacit assumption that a particular hostile agency – albeit not necessarily foreign – could be coordinating at least some of the messaging, since there were identifiable political interests at stake, VP effectively conceptualised disinformation itself as the enemy. The team did not identify, or even imply, any hostile principal directing the putative disinformation operation: they professed concern about a ‘highly-networked and coordinated anti-vaccine community’, but they did not posit any particular principal, domestic or foreign, as orchestrating or directing it. Nor did they claim that the ‘anti-vaccine community’ had hostile intent. So, what they were concerned with might more aptly have been described, by reference to the framework above, as misinformation rather than disinformation.

Except, and this is a decisive third shift from the paradigm case of disinformation, the information being flagged by VP was by no means always false. This epistemic unreliability of the operation was a predictable consequence of the fact that neither the students from Computer Studies and International Relations who were doing the flagging, nor the ‘experts in disinformation’ who had seconded them, could claim any expertise in matters of public health or immunology. In fact, it was frankly admitted by EIP/VP project manager Isabella Garcia-Camargo, a former CISA student intern, that ‘it’s really unclear who we’re supposed to be engaging with to actually deliver the counternarratives’. Yet her team of students were making snap judgements that led to censorship and stigmatisation of globally eminent epidemiologists like Jay Bhattacharya, Sunetra Gupta and Martin Kulldorff – a matter that has been making its way through the US courts and is currently before the US Supreme Court. For there is ample evidence that the Virality Project was labelling as ‘disinformation’ communications from genuinely expert researchers who simply dissented from the orthodox views that state security entities like CISA endorsed as authoritative. Political authority, in other words, has here displaced epistemic authority as the basis for identifying disinformation. This diametrically opposes the situation in the original and paradigm case of disinformation, where the political authority of the state would be exercised in enabling the most exacting epistemic diligence possible.

One gets the impression, from the documentation now available thanks to FOIA requests and subpoenas, that the aim of contemporary counter-disinformation operations is not so much to prevent people from being misled as to prevent them from being influenced by unauthorised narratives. The assumption is that certain unauthorised narratives – such as those of ‘anti-vaxxers’, in the VP case – are harmful, even when they are not untrue.

This is not the place to comment on the substance of such assumptions, but the conceptual point to highlight is that what is at issue in such cases is not a matter of deception but a matter of what is quite distinctly designated as malinformation. This clear distinction was recognized within the US Department of Homeland Security: in private communications within CISA’s MDM subcommittee, the issue was raised ahead of the publication of their report, in which, as a result, all references to ‘MDM’ were amended to read ‘MD’, sans malinformation (see my earlier discussion). But, as Starbird nevertheless noted, this did not affect the fact that malinformation remained very much within the subcommittee’s purview.

We see, then, that some of those whose primary concern is with combatting disinformation, framed as a threat to society, have come to be operating with an interpretation of the concept which departs radically from that assumed in the original context of its use. Where once disinformation was seen as a practice of deception covertly conducted by a hostile foreign agent, it is now depicted as a practice carried out in public by domestic citizens without any intent to deceive, and even without actually deceiving at all. In the wartime context, the practice being criticized in this latter vein – insofar as it involves malinformation – would have been described as propaganda. Specifically, malinformation was used in the propaganda produced by a distinct branch of military operations: thus the US military had a department of Morale Operations, modelled on the British Political Warfare Executive, whose aim was not so much to deceive the enemy command as to demoralise its citizens and foot soldiers.

In any situation of conflict, it is vital to know who is truly on your side. In the current war on disinformation, this is anything but clear. Efforts to ‘combat’ disinformation may present a greater threat to the democratic institutions of society than does disinformation itself. The threat posed by disinformation to the legitimacy or security of democratic institutions is overstated in a literature that prioritises ‘combatting’ communications whose content may not have been proven epistemically unreliable but is simply at odds with ‘mainstream truths’. A well-functioning democratic system should be able not only to accommodate dissent but even to flourish from it.

4

Pervasive suspicions of disinformation, according to the argument offered here, could be symptomatic of a society that does not fit the description of a well-functioning democracy. Moral panic about ‘disinformation’ appears, then, as a response to the symptom that avoids facing up to the real problem. Insofar as complaints focus on the undermining of trust in institutions, they are misdirected if they take the problem to lie with the people whose trust has waned and who have become influenced by dissident perspectives on institutionalised untrustworthiness. Such misdirected complaints amount to a mystification of the problem. When trust in institutions is lacking, the solution is not to engage in strategic communications aimed at making the population more trusting. The solution is to transform the institutions so as to make them more worthy of trust.


4 Responses to The Real Problem of Disinformation

  1. barovsky says:

    Shouldn’t that be alleged online disinformation?

  2. Ronald Watson says:

     In response to Tim Hayward’s essay on “The Real Problem of Disinformation”, in my opinion the Real Problem is the fact that our Politicians (Governments) have always been made up of LIARS. This aptitude for lying is one of the prerequisites for getting elected. For this reason alone we all should now realize that Confrontational Government as practiced under the Party System of Government is not DEMOCRATIC but rather based on who lies best and promises the most to the people who make up the voting constituents in any given riding. A prime example of this is the current war in GAZA between the Israelis and Palestinians.

      It is well known by the students of the USA, UK, Canada, France (by the way, France has now banned pro-Palestinian marches), Germany, Switzerland – the list of countries goes on and on. Some countries such as Germany and Switzerland have tried to BAN these pro-Palestinian protests as being ANTI-SEMITIC. I DO NOT see ANTI-MUSLIM (Palestinian) protestors being arrested in these countries, in particular the USA and Canada. The Media have bought into this disinformation (propaganda) by our so-called Democratic Governments by continually reporting on the number of Pro-Palestinian students arrested on our National News Broadcasts.

      It is well known that Israel is committing GENOCIDE and LAND THEFT against the Palestinian people. I and those Pro-Palestinian students are well aware of this crime, as is the majority of the UNITED NATIONS GENERAL ASSEMBLY, yet the Israelis keep up their relentless WAR CRIMES and all our respective governments do is utter / whisper STOP Israel please, for fear of offending the JEWS and being labeled Anti-Semite. To hell with Israel, call a spade a spade!

     This current situation harkens back to the Student Protests in the USA against the War in Vietnam, when I watched as the US Government (US National Guard, I believe) fired upon the PEACEFUL PROTESTORS, killing many. It was due to the outrage of the American People in general against this atrocity that the US / Vietnam War was finally ended.

      I have used both the current situation in GAZA and, in retrospect, the US / Vietnam War as PRIME EXAMPLES of DISINFORMATION as practiced by Governments and the Media to try to sow disinformation among their citizens. It no longer works so well in the 21st century.

      Not until we do away with Political Parties and Confrontational Governments and move towards CONSENSUS GOVERNMENT, where elected INDEPENDENT individuals who are responsible to their Constituents form Governments, will we have true DEMOCRACY without the propaganda and misinformation that is currently rampant.

    … Ron Watson


  3. So is it reasonable to ask whether malinformation is the only way to create institutions more worthy of trust? #AFAF
