A valid complaint. I know the answer must be something like “coherent utility functions can only consist of preferences about reality” because if you are motivated by unreal rewards you’ll only ever get unreal rewards, but that argument needs to be convincing to the ghost too, who’s got more confidence in her own reality. I know that e.g. in Bomb, ghost-theory agents choose the bomb even if they think the predictor will simulate them a painful death, because they consider the small amount of money, at much greater measure for their real selves, to be worth it, but I’m not sure how they get to that position.
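To make that arithmetic concrete, here’s a minimal sketch with assumed numbers (the measures and payoffs below are illustrative, not values from the Bomb write-up); what it doesn’t capture is why the ghost, who puts more credence on her own reality, should accept this weighting:

```python
# Minimal sketch of the expected-utility arithmetic gestured at above.
# All numbers (measures and payoffs) are assumptions chosen for
# illustration, not canonical values from the Bomb thought experiment.

measure_real = 1.0        # weight on being the real decision-maker
measure_ghost = 1e-6      # weight on being the predictor's simulation (the "ghost")

u_money_saved = 0         # real self takes the cheap option and keeps the small sum
u_simulated_death = -1e6  # ghost's simulated painful death
u_pay_for_safety = -100   # cost of the safe option, paid by real self and ghost alike

# "Choose the bomb": the real self (correctly predicted, so not actually bombed)
# keeps the money, while the ghost may be simulated dying painfully.
eu_bomb = measure_real * u_money_saved + measure_ghost * u_simulated_death

# "Pay for safety": everyone, real or ghost, pays the small cost.
eu_safe = (measure_real + measure_ghost) * u_pay_for_safety

print(eu_bomb, eu_safe)  # -1.0 vs. about -100: choosing the bomb wins under these weights
```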
The problem arises because, for some reason, you’ve assumed the ghosts have qualia. Now, that might be a necessary assumption if you require us to be uncertain about our degree of ghostliness. Necessary or not, though, it seems both dubious and potentially fatal to the whole argument.
Actually, I don’t assume that; I’m totally OK with believing ghosts don’t have qualia. All I need is that they first-order believe they have qualia, because then I can’t take my own first-order belief that I have qualia as proof I’m not a ghost. I can still be uncertain about my ghostliness because I’m uncertain about the accuracy of my own belief that I have qualia, in explicit contradiction of ‘cogito ergo sum’. The only reason ghosts possibly having qualia would be a problem is that then maybe I have to care about how they feel.
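As a minimal sketch of why that belief carries no evidential weight, assuming (purely for illustration) that ghosts are exactly as convinced of their qualia as real agents are:

```python
# Toy Bayes calculation for the point above; the prior and likelihoods
# are assumptions for illustration only.

p_real = 0.5                # assumed prior on being the real agent rather than a ghost
p_belief_given_real = 1.0   # real agents believe they have qualia
p_belief_given_ghost = 1.0  # ghosts also first-order believe they have qualia

p_belief = p_real * p_belief_given_real + (1 - p_real) * p_belief_given_ghost
p_real_given_belief = p_real * p_belief_given_real / p_belief

print(p_real_given_belief)  # 0.5 -- the belief leaves my credence in being real unchanged
```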
If you think you might not have qualia, then by definition you don’t have qualia. This just seems like a restatement of the idea that we should act as if we were choosing the output of a computation. On its face, this is at least as likely to be coherent as ‘What if the claim we have the most certainty of were false,’ because the whole point of counterfactuals in general is to screen off potential contradictions.
>If you think you might not have qualia, then by definition you don’t have qualia.
What? Just a tiny bit of doubt and your entire subjective conscious experience evaporates completely? I can’t see any mechanism that would do that; it seems like you can be real and have any set of beliefs, or be fictional and have any set of beliefs. Something something map-territory distinction?
>This just seems like a restatement of the idea that we should act as if we were choosing the output of a computation.
Yes, it is a variant of that idea, with different justifications that I think are more resilient. The ghosts of FDT agents still make the correct choices, they just have incoherent beliefs while they do it.
Again, it isn’t more resilient, and thinking you doubt a concept you call “qualia” doesn’t mean you can doubt your own qualia. Perhaps the more important point here is that you are typically more uncertain of mathematical statements, which is why you haven’t removed and cannot remove the need for logical counterfactuals.
Real humans have some degree of uncertainty about most mathematical theorems. There may be exceptions, like 0+1=1, or the halting problem and its application to God, but typically we have enough uncertainty when it comes to mathematics that we might need to consider counterfactuals. Indeed, this seems to be required by the theorem alluded to at the above link: logical omniscience seems logically impossible.
For a concrete (though unimportant) example of how regular people might use such counterfactuals in everyday life, consider P=NP. That statement is likely false. Yet, we can ask meaningful-sounding questions about what its truth would mean, and even say that the episode of ‘Elementary’ which dealt with that question made unjustified leaps. “Even if someone did prove P=NP,” I find myself reasoning, “that wouldn’t automatically entail what they’re claiming.”
Tell me if I’ve misunderstood, but it sounds like you’re claiming we can’t do something which we plainly do all the time. That is unconvincing. It doesn’t get any more convincing when you add that maybe my experience of doing so isn’t real. I am very confident that you will convince zero average people by telling them that they might not actually be conscious. I’m skeptical that even a philosopher would swallow that.
I totally agree we can be coherently uncertain about logical facts, like whether P=NP. FDT has bigger problems than that.
When writing this I tried actually doing the thing where you predict a distribution of responses, and only 21% of the LessWrong users I imagined were persuaded they might be imaginary and being imagined by me, which is pretty low accuracy considering they were in fact imaginary and being imagined by me. Insisting that the experience of qualia can’t be doubted did come up a few times, but not as aggressively as you’re pushing it here. I tried to cover it in the “highly detailed internal subjective experience” counterargument, and in my introduction, but I could have been stronger on that.
I agree that the same argument on philosophers or average people would be much less successful even than that, but that’s a fact about them, not about the theory.
>FDT has bigger problems than that.
Does it. The post you linked does nothing to support that claim, and I don’t think you’ve presented any actual problem which definitively wouldn’t be solved by logical counterfactuals. (Would this problem also apply to real people killing terrorists, instead of giving in to their demands? Because zero percent of the people obeying FDT in that regard are doing so because they think they might not be real.) This post is actually about TDT, but it’s unclear to me why the ideas couldn’t be transferred.
I also note that 100% of responses in this thread, so far, appear to assume that your ghosts would need to have qualia in order for the argument to make sense. I think your predictions were bad. I think you should stop doing that, and concentrate on the object-level ideas.