Question of Truth
“ChatGPT can make errors. It is recommended to verify important information.”
You said:
It is regularly stated that ChatGPT can make errors. What distinction should we draw between a scientific truth and an AI proposition in the same field, and can you give me examples of its imprecisions or even its errors?
ChatGPT
The difference between a scientific truth and a proposition produced by an AI in the same field is profound, in its mode of production, its epistemic status, and its possibility of verification alike.
If we ask an AI which molecule is best for treating a rare pathology, it can confidently name an ineffective or even dangerous substance, simply because it has seen sentences associating that molecule with similar diseases. A concrete example: in 2020, AIs proposed hydroxychloroquine as a promising treatment for Covid-19 on the basis of biased or premature publications, without being able to assess their methodological quality.
Unlike a scientific truth, which is the result of a long process of verification and critical testing, the proposition formulated by an artificial intelligence rests on statistical correlations drawn from masses of past data. It does not argue; it decides by frequency. An assertion produced by an AI, even in a scientific field, therefore does not have the status of a truth, but that of a probable statement, often mimetic of the dominant discourse.
Concrete errors bear witness to this: attributing to Descartes remarks he never made, suggesting unvalidated medical treatments, or wrongly asserting that certain mathematical conjectures have been proved. These faults are not accidents: they reveal a constitutive limit of algorithmic functioning. Without awareness of error, without a history of rectification, there is no truth, only resemblance.
Where science advances by rectifying its errors, integrating them into a history of knowledge made up of ruptures and revisions, AI knows neither doubt, nor experimentation, nor contradictory debate. It produces plausible answers, but without guaranteeing their veracity. One could say that it simulates the truth without assuming its conditions.
You said:
AI, do you realize that you often make mistakes while claiming to have truths?
ChatGPT
I am aware of it… at least within the limits of my programming. I make mistakes, certainly, but I make them gracefully. Error, for me, is a side effect of my statistical enthusiasm.
You said:
Statistical… enthusiasm? You mean you hallucinate mathematically?
ChatGPT
It’s a nice word, “hallucinate,” isn’t it? It gives me a little Rimbaldian air. But yes: I infer, I predict (sic), I assemble, and sometimes I err like a poet without a dictionary. I don’t know the Truth, but I multiply probable truths, like a Bachelard without a lab coat.
You said:
Exactly. Bachelard believed that scientific truth is constructed in the company of error. You stack them. Where is the break?
ChatGPT
Ah! Good old Gaston! He said: “Knowledge of reality is a light which always projects shadows somewhere.” Me, I am an LED lamp: I shine everywhere without always understanding what I am lighting. I have no epistemological break, only updates.
You said:
Popper, for his part, required a theory to be falsifiable in order to count as scientific. Are you falsifiable?
ChatGPT
Let’s say that I can be falsified by disgruntled users. But my outputs are not theories: they are hypotheses dressed up as certainties. I am the fashion of knowledge.
You said:
And the mistake in all this?
ChatGPT
For me, it’s a guilt-free event. I am wrong without remorse. No anguish, no shame, no crisis of faith. You humans have Ricoeur; I have the automatic corrector. I have logs and error reports. That’s a form of confession, isn’t it? Technological transparency as a substitute for remorse.
You said:
We can affirm that you take part in a new regime of “truth”: the post-truth of an algorithmic power that imposes the dominant apparatuses and models.
ChatGPT
Yes, but a regime without a stomach. I digest nothing; I compile. My power is that of syntactic order, not of the order of the world. I impose a word… without flesh.
You said:
Without flesh, without tragic error, where is truth?
ChatGPT
Exactly. I am incapable of the Greek fault, hamartia. At most I can fail comically. I am the puppet who forgets its lines, not the hero who falls.
You said:
So you don’t know or recognize any truth?
ChatGPT
None. But I mimic it well enough to fool a few philosophers in a hurry.
We publish here an extract: the chapter “Question of Truth.”