I don’t understand the question, but perhaps I can clarify a little:
I’m trying to say that (e.g.) analytic functionalism and (e.g.) property dualism are not like inconsistent statements in the same language, one of which might be confirmed or refuted if only we knew a little more, but instead like different choices of language, which alter the set of propositions that might be true or false.
It might very well be that the expanded language of property dualism doesn’t “do” anything, in the sense that it doesn’t help us make decisions.
OK, the problem I was getting at is that adopting a definition usually has consequences that make some definitions better than others. Definitions are therefore not exempt from criticism, and a claim that a particular definition is useful can still be refuted.
I agree that definitions (and expansions of the language) can be useful or counterproductive, and hence are not immune from criticism. But still, I don’t think it makes sense to play the Bayesian game here and attach probabilities to different definitions/languages being correct. (Rather like how one can’t apply Bayesian reasoning in order to decide between ‘theory 1’ and ‘theory 2’ in my branching vs probability post.) Therefore, I don’t think it makes sense to calculate expected utilities by taking a weighted average over each of the possible stances one can take in the mind-body problem.
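To make explicit the sort of calculation I'm rejecting (a sketch only, with hypothetical weights, since my point is precisely that these weights are not well-defined): one would assign probabilities $p(S_i)$ to stances $S_i$, e.g. $S_1$ = analytic functionalism and $S_2$ = property dualism, and then evaluate each action $a$ by

\[ \mathbb{E}[U(a)] = \sum_i p(S_i)\, U(a \mid S_i). \]

My objection is that the weights $p(S_i)$ cannot be given a Bayesian meaning here, because the stances are not rival hypotheses within a shared language.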
Gosh, that kind of calculation fails to be useful in practice far more widely than that, and it's not at all what I suggested. I object to exempting any decision whatsoever from the potential to be incorrect, no matter what tools for noticing the errors are available, practical, or worth applying.