Misinformation is defined by consensus as untruthful or misleading messages. We would rather have information than misinformation.
Meanwhile, defining information itself remains classified as a challenge, or a matter of context. Twenty-one years ago, Wikipedia started. The same year, an outline for the encyclopedic Wikipedia article on ‘Information’ was drafted. Twenty-one years of discussion later, Wikipedians are still trying to agree on what information is.
“Conceptually, information is the message (utterance or expression) being conveyed. Therefore, in a general sense, information is ‘knowledge communicated or received concerning a particular fact or circumstance’.”
“Viewing information as the conveyor of a message contradicts my experience of how this word is used. But your experience may be different from mine.”
“Information, in its most restricted technical sense, is a sequence of symbols that can be interpreted as a message. Information can be recorded as signs, or transmitted as signals.”
The main concern around misinformation seems to be eradicating it, which entails telling misinformation from information: telling misinformation apart from something we are not confident we can define. Meekly, we resort to assuming that someone other than us, “the experts”, has the confidence to define information for us and to weed out the misinformation.
When misinformation is regarded as a problem, it is regarded as a communication problem rather than an information problem. I think it’s worth making the distinction.
A communication problem is one where there is a discrepancy between the message emitted and the message received, a discrepancy between the intended recipient and the actual recipient, or a lack of clarity about where the message comes from. Typically, this kind of problem is solved by adding mechanisms to verify the transaction between emitter and receiver. The type of “mechanism” added depends on the kind of message being communicated. Messages related to financial operations need cybersecurity technology. Marital communication problems need the tricks a couples therapist might suggest. The solution to misinformation, the communication problem, is checking the provenance of messages and the trustworthiness or authority of their sources, possibly deleting the message altogether and replacing it with another.
Framing misinformation as a communication problem entails resorting to the argument from authority for the solution. And asserting the primacy of the argument from authority is the worst thing we can possibly do to promote critical thinking in the general public.
As a scientist, as a theoretical computer scientist, the argument from authority is my worst nightmare. It is the gate at which my intellectual spunk dies from the hypostimulation of another mind I’m trying to interact with. It’s where intellectual activity goes to die, handing over the intellectual life torch to an ethereal “more expert mind”.
I hate the argument from authority. There are plenty of commendable tendencies in common reasoning. But the argument from authority embodies everything that is wrong with common reasoning. It’s okay, sometimes, to defer to reasoning performed at a different time and place, by someone different, under different conditions. It’s not okay to do so without putting all those differences on the table for inspection. By inspection I don’t mean the mindless task of determining what makes the information more trustworthy and what makes it less so. Evaluating information isn’t like sorting dirty laundry, grading how much it smells like feet.
Information is contextual. If you take it out of the context in which it was built to make sense, you have done something to the information. It is no longer the same information. Perhaps it’s close enough. But perhaps it isn’t. You can’t just inspect where the information came from; you must also inspect how it traveled. This is no small task for an active analytical mind. It’s an absolute impossibility for anyone who has given up the intellectual life torch through the argument from authority.
Authority denigration is a tragic symptom of overexposure to the argument from authority. It is fishing for reasons to defend our beliefs and to go on distrusting the things we are prepared and motivated to distrust. Invoking authorities or third parties (experts) as “gods of information” has consequences. It means that if we don’t like the information we are given, or if we simply don’t get it, there is nothing left for us to do but invoke different gods. The only way left for us to exercise free will is in choosing our information gods. Unsurprisingly, that is what is happening now.
Not only does the argument from authority deprive me of active analytical minds to interact with; by making information a matter for experts, it also devastates respect for expertise.
Now what is an information problem?
Precisely: we lack a consensus definition of what information is, so we can’t expect to easily agree on what an information problem is. We’re not there yet.
The absence of consensual, fully formalised analytical definitions shouldn’t stop us from pushing our imaginations and our logic. We might not understand what information is, but we can still raise questions about it, and take the time to explore the notion from different angles in the meantime.
The main raison d’être of this post is for me to ask you to please take some time to consider the following question:
Where does one find information problems tackled as information problems rather than as communication problems?
A second raison d’être of this post is for me to suggest looking in the vicinity of scientists’ whiteboards. It’s a good place to start: it is where the art of detecting the imperfections in imperfect information is practiced daily, and where troubleshooting information is constructive rather than sacramental.
This raises the next question: why is it “constructive”? Why do scientists manage to consistently produce quality information out of bits of outdated and incomplete information? Why does scientific reasoning produce concrete results such as vaccines and automatic translation? Why does science work?
The reasons are about as far from trust as it gets.
Science works because scientists use very simple tools, and they make sure to apply those tools in very simple situations that they master. The world is very complex, but no scientist ever deals with all the complexity in the world. Practising science means resisting all urges to bite off more than one can chew in terms of information. Locally, this seems very unambitious; globally, it makes for a very reliable process that works.
Note that there are many scientists. At first sight it might seem more efficient, in terms of scientific progress, to have them all work independently on different scientific questions. That would make sense if scientists made shoes: we wouldn’t want two workers hammering the same nail into the same sole. But science is different from shoes. Making reliable information has a primary requirement: that trust not be used as an ingredient in producing information. In everyday life, we constantly resort to trust. So the requirement in science to do without trust calls for an exceptional workaround:
In science, we don’t trust. We agree instead.
Science works because humans can agree, and when they do, even only momentarily, they manage to do things together that they otherwise wouldn’t, if only they momentarily stand on the same ground, or look in the same direction and make sense of what they see in relatable ways. The industry that systematically leverages the human ability to agree is called science (mastery of the art of human agreement is called computer science 😉). Circumscribing objects of agreement is the basis of science-making. Usually, agreement requires work.
Humans also have the ability to disagree. Mastering the art of human agreement means excelling at circumscribing objects of agreement, and knowing how to work with bordering disagreement and contradiction. Disagreement is an opportunity to progress together, as long as it isn’t found too far from a well-circumscribed area of agreement. Agreement is the prerequisite: the solid common ground we can build on and tether bordering disagreement to.
Here are my two concluding cents:
Smothering disagreement with trust is not encouraging the trust-free building of agreement.
Fighting against misinformation is not fighting for information.
Tackling misinformation as a communication problem is not tackling it as an information problem.
We are bound to find better solutions if we stop wallowing in clichés.