High-precision claims may be refuted without being replaced with other high-precision claims

There’s a common criticism of theory-criticism that goes along the lines of:
“Well, sure, this theory isn’t exactly right. But it’s the best theory we have right now. Do you have a better theory? If not, you can’t really claim to have refuted the theory, can you?”
This is wrong. This is falsification-resisting theory-apologism. Karl Popper would be livid.
The relevant reason why it’s wrong is that theories make high-precision claims. For example, the standard theory of arithmetic says 561+413=974. Not 975 or 973 or 974.0000001, but exactly 974. If arithmetic didn’t have this guarantee, math would look very different from how it currently looks (it would be necessary to account for possible small jumps in arithmetic operations).
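To see how exact the claim is, here is a minimal sketch in Python (the jittery adder is a hypothetical, invented for illustration):

```python
# The standard theory of arithmetic makes an exact claim:
assert 561 + 413 == 974

# A hypothetical "almost-addition" with small jumps would not support
# the guarantees math relies on: even an error of 1e-7 makes the
# exactness claim false, and equational reasoning (substituting equals
# for equals) stops being valid.
def jittery_add(a, b):
    return a + b + 1e-7  # tiny jump, per the thought experiment

assert jittery_add(561, 413) != 974  # the exact claim is refuted
```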
A single bit flip in the state of a computer process can crash the whole program. Similarly, high-precision theories rely on precise invariants, and even small violations of these invariants sink the theory’s claims.
To a first approximation, a computer either (a) almost always works (>99.99% probability of getting the right answer) or (b) doesn’t work (<0.01% probability of getting the right answer). There are edge cases, such as randomly crashing computers or computers with small floating-point errors. However, even a computer that crashes every few minutes functions precisely correctly during >99% of the seconds it runs.
If a computer makes random small errors in, say, 0.01% of arithmetic operations, it’s not an almost-working computer; it’s a completely non-functioning computer that will crash almost immediately.
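As a back-of-the-envelope sketch of why (assuming, for illustration, independent errors at the stated 0.01% per-operation rate):

```python
# Probability that every operation is correct, with independent errors
# at a 0.01% (1e-4) per-operation rate.
p_ok = 1 - 1e-4

for n_ops in (10_000, 100_000, 1_000_000):
    p_all_ok = p_ok ** n_ops
    print(f"{n_ops:>9} ops: P(no error) = {p_all_ok:.6f}")

# Output (approximately):
#     10000 ops: P(no error) = 0.367861
#    100000 ops: P(no error) = 0.000045
#   1000000 ops: P(no error) = 0.000000
# A computer executes millions of operations per second, so a 0.01%
# per-operation error rate means near-certain failure within a second.
```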
The claim that a given algorithm or circuit really adds two numbers is very precise. Even a single pair of numbers that it adds incorrectly refutes the claim, and very much risks making the algorithm/circuit useless. (The rest of the program would not be able to rely on the guarantee, and would instead need to know the domain in which the algorithm/circuit functions; this would significantly complicate reasoning about correctness.)
Importantly, such a refutation does not need to come along with an alternative theory of what the algorithm/circuit does. To refute the claim that it adds numbers, it’s sufficient to show a single counterexample without suggesting an alternative. Quality assurance processes are primarily about identifying errors, not about specifying the behavior of non-functioning products.
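A minimal sketch of this (the defective adder is hypothetical): a single counterexample withdraws trust in the precise claim, while saying nothing about what the function computes instead.

```python
def suspect_add(a, b):
    # Hypothetical circuit/algorithm claimed to add two numbers,
    # but with a defect on one specific input pair.
    if (a, b) == (1023, 1):
        return 0  # the defect
    return a + b

# A single counterexample refutes the claim "suspect_add adds":
assert suspect_add(1023, 1) != 1023 + 1

# Note what the refutation does NOT require: a theory of what
# suspect_add "really" computes. Knowing one input where it fails
# is enough to withdraw trust in the exact claim.
```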
A Bayesian may argue that the refuter must have an alternative belief about the circuit. While this is true if the refuter is a Bayesian, such a belief need not be high-precision; it may be a high-entropy distribution. And if the refuter is a human, they are not a Bayesian (that would take too much compute), and will instead have a vague representation of the circuit as “something doing some unspecified thing”, with some vague intuitions about what sorts of things are more likely than others. In any case, the Bayesian criticism certainly doesn’t require the refuter to replace the claim about the circuit with an alternative high-precision claim; either a low-precision belief or a lack-of-belief will do.
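A toy illustration of this point (the hypothesis space and numbers are invented for the example): conditioning on one counterexample can leave the refuter with a high-entropy belief rather than a precise replacement claim.

```python
# Toy hypothesis space over what the circuit does on some test input
# whose correct sum is 974. One hypothesis is the precise claim; the
# other is a vague "something else", spread over 1000 possible outputs.
prior = {"adds_correctly": 0.99, "something_else": 0.01}

def likelihood(hypothesis, observed_output):
    if hypothesis == "adds_correctly":
        return 1.0 if observed_output == 974 else 0.0
    return 1 / 1000  # high-entropy: no output is singled out

obs = 973  # a single observed counterexample
unnorm = {h: prior[h] * likelihood(h, obs) for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(posterior)  # {'adds_correctly': 0.0, 'something_else': 1.0}

# The precise hypothesis is eliminated, but the belief that replaces it
# is high-entropy (1/1000 on each possible output): a vague belief,
# not an alternative high-precision claim.
```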
The case of computer algorithms is particularly clear, but of course this applies elsewhere:
If there’s a single exception to conservation of energy, then a high percentage of modern physics theories completely break. The single exception may be sufficient to, for example, create perpetual motion machines. Physics, then, makes a very high-precision claim that energy is conserved, and a refuter of this claim need not supply an alternative physics.
If a text is claimed to be the word of God and totally literally true, then a single example of a definitely-wrong claim in the text is sufficient to refute the claim. It isn’t necessary to supply a better religion; the original text should lose any credit it was assigned for being the word of God.
If rational agent theory is a bad fit for effective human behavior, then the precise predictions of microeconomic theory (e.g. the option of trade never reducing expected utility for either actor, or the efficient market hypothesis being true) are almost certainly false. It isn’t necessary to supply an alternative theory of effective human behavior to reject these predictions.
If it is claimed philosophically that agents can only gain knowledge through sense-data, then a single example of an agent gaining knowledge without corresponding sense-data (e.g. mental arithmetic) is sufficient to refute the claim. It isn’t necessary to supply an alternative theory of how agents gain knowledge for this to refute the strongly empirical theory.
If it is claimed that hedonic utility is the only valuable thing, then a single example of a valuable thing other than hedonic utility is sufficient to refute the claim. It isn’t necessary to supply an alternative theory of value.
A theory that has been refuted remains contextually “useful” in a sense, but it’s the walking dead. It isn’t really true everywhere, and:
Machines believed to function on the basis of the theory cannot be trusted to be highly reliable
Exceptions to the theory can sometimes be manufactured at will (this is relevant in both security and philosophy)
The theory may make significantly worse predictions on average than a skeptical high-entropy prior or low-precision intuitive guesswork, due to being precisely wrong rather than imprecise (see the sketch after this list)
Generative intellectual processes will eventually discard it, preferring instead an alternative high-precision theory or low-precision intuitions or skepticism
The theory will go on doing damage through making false high-precision claims
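To make the prediction point quantitative, here is a toy log-loss comparison (the numbers are invented for illustration): a precisely wrong theory scores worse on average than a maximally vague prior.

```python
import math

# Ten possible outcomes; outcome 0 actually occurs.
# A precisely-wrong theory puts 99.9% of its mass on outcome 1;
# a skeptical high-entropy prior spreads mass uniformly.
n_outcomes = 10
theory = [0.001 / (n_outcomes - 1)] * n_outcomes
theory[1] = 0.999
uniform = [1 / n_outcomes] * n_outcomes

actual = 0
print(f"theory log-loss:  {-math.log(theory[actual]):.2f}")   # ~9.10
print(f"uniform log-loss: {-math.log(uniform[actual]):.2f}")  # ~2.30

# The confident-but-wrong theory is penalized far more heavily than
# the vague prior: being precisely wrong is worse than being imprecise.
```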
The fact that false high-precision claims are generally more damaging than false low-precision claims is important ethically. High-precision claims are often used to ethically justify coercion, violence, and so on, where low-precision claims would have been insufficient. For example, imprisoning someone for a long time may be ethically justified if they definitely committed a serious crime, but is much less likely to be justified if the belief that they committed a crime is merely a low-precision guess, not validated by any high-precision checking machine. Likewise for psychiatry, which justifies incredibly high levels of coercion on the basis of precise-looking claims about different kinds of cognitive impairment and their remedies.
Therefore, I believe there is an ethical imperative to apply skepticism to high-precision claims, and to allow them to be falsified by evidence, even without knowing what the real truth is other than that it isn’t as the high-precision claim says it is.