It’s enough that I don’t care about living for a very long time.
And if I say that you do, what is the criterion for telling which statement is the correct one? That criterion is what I referred to as the fact of the matter about what you (should) care about. And if there is a fact, there is the possibility of being wrong about it.
Unless by “not caring about X” you mean, by definition, that statements like “I don’t care about X” get pronounced, or that certain chemicals get released in your brain, you’ll have to settle for not having absolutely privileged knowledge of what you actually care about.
As long as we can agree that whether someone cares about X is an empirically discoverable fact, there seem to be two currently available methods of finding it out: introspection and observing their actions (“revealed preference”).
No amount of evidence about facts external to a person could possibly bear on whether they care about X. You might change whether they care about X by presenting external input, but that’s a rather different thing.
what I referred to as the fact of the matter about what you (should) care about
You mentioned the fact of the matter of what I should do. I would hold the fact of the matter of what I should care about in the same contempt. As for the fact of the matter of what I care about, you don’t know what you’re talking about.
The only reason I’m replying at all is that this is a site dedicated to cognitive biases, and maybe you will cite an interesting post here about how I might be horribly wrong about what I care about. Of course I could be horribly wrong about my intermediate values, but the calculation is not coming out that way.
I think an issue here may be that your statement of caring or not caring about something doesn’t carry much weight when not only are you personally unfamiliar with X, but so is everyone else who has ever lived.
You can truthfully state that you aren’t terribly attracted by what you imagine living a second life in Futurama would be like; but your mental picture is likely to bear very, very little resemblance to what a potential second life will actually feel like. Since cryonics offers a small chance of giving you an actual future life, you should evaluate it on that basis, and therefore pay little attention to what your hopelessly flawed imagination suggests.
It’s like stating that you care or don’t care for a particular hallucinogen without ever having tried it or any similar substance, or having read reports by people who actually have. You don’t have a sufficient basis to make your model of it (the thing you claim to care or not care about) at all meaningful.
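(A back-of-the-envelope way to see the force of the “small chance” point above, with purely made-up numbers: if signing up costs C, revival has probability p, and you would value the resulting future life at V, the bet is favorable whenever p × V > C. With, say, p = 0.05 and V = 100 × C, the expected value is 0.05 × 100 × C = 5 × C, five times the cost. These numbers are illustrative assumptions, not estimates from this discussion.)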
Cryonics costs time and money; that is where the burden of proof lies. If I’m completely ignorant of what it would be like, then I will not spend any effort to bring it about. Byrnema said it well: let the resources be expended on a baby rather than on me.
I would kind of like to try LSD, because I know something about it, and what I’ve heard is mostly positive when certain guidelines are followed. A random unknown hallucinogen would pose a danger of long-term health effects. So let hallucinogen X be one whose safety is guaranteed but whose other effects I’m completely ignorant of. Then I don’t care that I’ve never tried hallucinogen X. I am not going to spend time and money seeking it out.
It’s enough that I don’t care about living for a very long time.
And if I say that you do, what is the criterion for telling which statement is the correct one? That criterion is what I referred to as the fact of the matter about what you (should) care about.
This seems wrong to me. That Toby should X does not imply that Toby does X, so determining what Toby should want does not settle whether Toby in fact wants it.
you can’t claim that the thing you should do is to not care.
Toby does not seem to be making that claim, though perhaps implicitly so. (Much like it could be argued that “X” implies “I believe that X”, it could be argued that “I did X” implies “I should have done X”. But that fails on common usage, where “I did X but I should not have done X” is ordinary.)
See, for example, this post (although its connection to our discussion is rather indirect).