Perhaps I’m mistaken about this, but isn’t a far stronger argument in favor of a consistent belief system the fact that with inconsistent axioms you can derive any result you want? In an inconsistent belief system you can rationalize away any act you intend to take, and in fact this has often been seen throughout history.
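To spell out the formal version of "derive any result you want": this is the principle of explosion, where a single contradiction is enough to prove every proposition. A minimal sketch in Lean 4 (the theorem name and propositions are just illustrative):

```lean
-- Principle of explosion (ex falso quodlibet): once you have a proof of P
-- and a proof of ¬P, any proposition Q whatsoever follows.
-- `absurd : a → ¬a → b` is part of the Lean 4 core library.
theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```

(Whether this one-line derivation is available at all is itself sensitive to the proof system, as noted below.)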
In theory, yes. In practice, … maybe. It's like saying "a human can implement a bounded Turing machine and can, in principle, with no tools other than paper and pencil, compute a prime number with a million digits".
It depends on how inconsistent the axioms are in practice. If the contradictions are minor, the human may die of old age before managing to leverage them to derive arbitrary results.
Of course, if the belief system in question becomes popular, one of his disciples may wind up doing this.
Depends on your proof system.
Teehehehehe.
Of course, this becomes a thousand times worse with a combination of materialism and the attitude that the world is made not of matter, but of conflicting agendas.
The friend in question wouldn’t buy that argument though, because rather than accepting as a premise that they hold inconsistent axioms, they would assert that they don’t apply things like axioms to their reasoning about social justice.
Plus, it’s not likely to reflect their impression of their own actions. They’re probably not trying to logically derive conclusions from a set of conflicting premises so much as they’re following their native moral instincts, which may be internally inconsistent, but certainly do not have unlimited elasticity of output. You can get an ordinary person to respond to the same moral dilemma in different ways by framing it differently, but there are some conclusions that they cannot be convinced to draw, and others that they will uphold consistently. So if they’re told that their belief system can derive any result, their response is likely to be “What? No it can’t.”
In practice this tends to manifest as being able to rationalize any result.
They’ll tend to rationalize whatever results they output, but that doesn’t mean that they’ll output just any result.
Unfortunately the results they output tend to resemble this.