The identity of an object is a choice, a way of looking at it. The “right” way of making this choice is the way that best achieves your values.
I think that’s really the central point. The metaphysical principles which either allow or deny the “intrinsic philosophical risk” mentioned in the OP are not like theorems or natural laws, which we might hope some day to corroborate or refute—they’re more like definitions that a person either adopts or does not.
I don’t see either as irrational
I have to part company here—I think it is irrational to attach ‘terminal value’ to your biological substrate (likewise paperclips), though it’s difficult to explain exactly why. Terminal values are inherently irrational, but valuing the continuance of your thought patterns is likely to be instrumentally rational for almost any set of terminal values, whereas placing extra value on your biological substrate seems like it could only make sense as a terminal value (except in a highly artificial setting, e.g. where Dr Evil has vowed to do something evil unless you preserve your substrate).
Of course this raises the question of why the deferred irrationality of preserving one’s thoughts in order to do X is better than the immediate irrationality of preserving one’s substrate for its own sake. At this point I don’t have an answer.
The metaphysical principles which either allow or deny the “intrinsic philosophical risk” mentioned in the OP are not like theorems or natural laws, which we might hope some day to corroborate or refute—they’re more like definitions that a person either adopts or does not.
What do the definitions do?

I don’t understand the question, but perhaps I can clarify a little:
I’m trying to say that (e.g.) analytic functionalism and (e.g.) property dualism are not like inconsistent statements in the same language, one of which might be confirmed or refuted if only we knew a little more, but instead like different choices of language, which alter the set of propositions that might be true or false.
It might very well be that the expanded language of property dualism doesn’t “do” anything, in the sense that it doesn’t help us make decisions.
OK, the problem I was getting at is that adopting a definition usually has consequences that make some definitions better than others, so definitions are not exempt from criticism: the implicit claim that a given definition is useful can still be refuted.
I agree that definitions (and expansions of the language) can be useful or counterproductive, and hence are not immune from criticism. But still, I don’t think it makes sense to play the Bayesian game here and attach probabilities to different definitions/languages being correct. (Rather like how one can’t apply Bayesian reasoning in order to decide between ‘theory 1’ and ‘theory 2’ in my branching vs probability post.) Therefore, I don’t think it makes sense to calculate expected utilities by taking a weighted average over each of the possible stances one can take in the mind-body problem.
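To spell out the kind of calculation being rejected here (an illustration only, with hypothetical credences $p_i$ and uploading as a stand-in for whatever action is at stake): one would assign probabilities $p_1, \dots, p_n$ to the possible stances $S_1, \dots, S_n$ on the mind-body problem, say analytic functionalism, property dualism, and so on, and then evaluate the action by

$$\mathbb{E}\big[U(\mathrm{upload})\big] \;=\; \sum_{i=1}^{n} p_i \, U(\mathrm{upload} \mid S_i).$$

The objection above is that the $p_i$ are not well-defined, because the stances are rival languages rather than rival hypotheses, so there is nothing for this weighted average to range over.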
Gosh, that kind of explicit Bayesian calculation is impractical far more widely than just in this case, and it’s not at all what I suggested. I object to exempting any decision from the potential to be incorrect, no matter which tools for noticing the errors are available, practical, or worth applying.