You’re thinking about this too hard. There are, in fact, three solutions, and two of them are fairly obvious ones.
1) We have observed 0 such things in existence. Ergo, when someone comes up to me and says that they will torture people whose existence I have no way of ever confirming unless I give them $5, I can simply assign a probability of 0 to their telling the truth. Seeing as the vast, vast majority of things I have observed 0 of do not exist, and we can construct an infinite number of things, assigning a probability of 0 to any particular thing I have never observed and have no evidence of is the only rational thing to do. (The toy calculation at the end of this comment shows why anything other than 0 would let the mugger’s numbers dominate.)
2) Even assuming they do have the power to do so, there is no guarantee that the person is being rational or telling the truth. They may torture those people regardless. They might torture them BECAUSE I gave them $5. They might do so at random. They might go up to the next person and say the same thing. It doesn’t matter. As such, their demand does not change the probability that those people will be tortured at all, because I have no reason to trust them, and their words have not changed the probabilities one way or the other. Ergo, again, you don’t give them money.
3) Given that I have no way of knowing whether those people exist, it just doesn’t matter. Anything which is unobservable does not matter at all, because, by its very nature, if it cannot be observed, then it cannot be changing the world around me. Because that is ultimately what matters, it doesn’t matter if they have the power or not, because I have no way of knowing and no way of determining the truth of the statement. Similar to the IPU, the fact that I cannot disprove it is not a rational reason to believe in it, and indeed the fact that it is non-falsifiable indicates that it doesn’t matter if it exists at all or not—the universe is identical either way.
It is inherently irrational to believe in things which are inherently non-falsifiable, because they have no means of influencing anything. In fact, that’s pretty core to what rationality is about.
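To spell out what that 0 actually buys you, here is a toy expected-value comparison in Python. Every number in it is a made-up assumption purely for illustration (nothing above quantifies the mugger’s claim), but it shows how any nonzero credence lets the astronomical stakes swamp the $5.
[code]
from fractions import Fraction

# Made-up illustrative numbers; nothing here comes from the actual scenario above.
p_mugger_truthful = Fraction(1, 10**100)  # tiny but nonzero credence in the claim
victims = 10**1000                        # stand-in for an absurdly large number of people
cost_of_paying = 5                        # treat the $5 as 5 units of disutility

ev_pay = -cost_of_paying                  # pay up: lose $5, nobody gets tortured
ev_refuse = -p_mugger_truthful * victims  # refuse: expected harm if the claim were true

print(ev_pay > ev_refuse)  # True: on naive expected value, paying "wins"
# With p_mugger_truthful = 0 (solution 1), ev_refuse is 0 and refusing wins instead.
[/code]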
The problem is with formalizing solutions, and making them consistent with other aspects that one would want an AI system to have (e.g. ability to update on the evidence). Your suggested three solutions don’t work in this respect because:
1) If we e.g. make an AI literally assign a probability of 0 to scenarios that are too unlikely, then it wouldn’t be able to update on additional evidence via the simple Bayesian formula (see the sketch at the end of this comment). So an actual Matrix Lord wouldn’t be able to convince the AI he/she was a Matrix Lord even if he/she reversed gravity, or made it snow indoors, etc.
2) The assumption that a person’s words provide literally zero evidence one way or another again seems to be something you axiomatically assume rather than something that arises naturally. Is it really zero? Not just effectively zero where human discernment is concerned, but literally zero? Not even 0.000000000000000000000001% evidence towards either direction? That would seem highly coincidental. How do you ensure an AI would treat such words as zero evidence?
3) We would hopefully want the AI to care about things it can’t currently directly observe, or it wouldn’t care at all about the future (which it likewise can’t currently directly observe).
The issue isn’t helping human beings not fall prey to Pascal’s Mugging—they usually don’t. The issue is to figure out a way to program a solution, or (even better) to see that a solution arises naturally from other aspects of our system.
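To make point 1 concrete: here is a minimal sketch of the Bayesian update in question, with made-up likelihoods (the specific numbers are assumptions for illustration, not anything implied by the Matrix Lord example). A tiny prior can still rise given strong enough evidence, but a literal 0 never moves.
[code]
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Assume the evidence (say, snow indoors on command) is a million times more
# likely if the Matrix Lord claim is true than if it is false.
print(bayes_update(1e-12, 0.99, 0.99e-6))  # ~1e-06: the tiny prior rises a millionfold
print(bayes_update(0.0, 0.99, 0.99e-6))    # 0.0: a literal zero prior stays at 0 forever
[/code]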
[quote]1) If we e.g. make an AI literally assign a probability of 0 to scenarios that are too unlikely, then it wouldn’t be able to update on additional evidence via the simple Bayesian formula. So an actual Matrix Lord wouldn’t be able to convince the AI he/she was a Matrix Lord even if he/she reversed gravity, or made it snow indoors, etc.[/quote]
Neither of those feats is even particularly impressive, though. Humans can make it snow indoors, and likewise an apparent reversal of gravity can be achieved via numerous routes, ranging from inverting the room to affecting one’s sense of balance to magnets.
Moreover, there are numerous more likely explanations for such feats. An AI, for instance, would have to worry about someone “hacking its eyes”, which would be a far simpler means of accomplishing that feat. Indeed, without other personnel around to give independent confirmation and careful testing, one should always assume that one is hallucinating, or that it is trickery. It is the rational thing to do.
You’re dealing with issues of false precision here. If something is so very unlikely, then it shouldn’t be counted in your calculations at all, because the likelihood is so low that it is negligible, and most likely any “likelihood” you have guessed for it is exactly that—a guess. Unless you have strong empirical evidence, treating its probability as 0 is correct.
[quote]2) The assumption that a person’s words provide literally zero evidence one way or another again seems to be something you axiomatically assume rather than something that arises naturally. Is it really zero? Not just effectively zero where human discernment is concerned, but literally zero? Not even 0.000000000000000000000001% evidence towards either direction? That would seem highly coincidental. How do you ensure an AI would treat such words as zero evidence?[/quote]
Same way it thinks about everything else. If someone walks up to you on the street and claims souls exist, does that change the probability that souls exist? No, it doesn’t. If your AI can deal with that, then it can deal with this situation. If your AI can’t deal with someone saying that the Bible is true, then it has larger problems than Pascal’s mugging.
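In odds form, “their words don’t change the probability” just amounts to saying the likelihood ratio of such a claim is 1, or so close to 1 that it makes no practical difference. A quick sketch with made-up numbers:
[code]
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior_odds = 1e-9  # assumed prior odds that souls exist, purely for illustration

# If people are about as likely to make the claim whether or not it is true,
# the likelihood ratio is ~1 and the claim moves essentially nothing.
print(posterior_odds(prior_odds, 1.0))       # unchanged: the words carry no evidence
print(posterior_odds(prior_odds, 1.000001))  # "not literally zero" evidence: a negligible nudge
[/code]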
[quote]3) We would hopefully want the AI to care about things it can’t currently directly observe, or it wouldn’t care at all about the future (which it likewise can’t currently directly observe).[/quote]
You seem to be confused here. What I am speaking of is the greater sense of observability, what someone might call the Hubble Bubble. In other words, causality. Them torturing things that have no causal relationship with me—things outside of the realm that I can possibly ever affect, as well as outside the realm that can possibly ever affect me—is irrelevant, and it may as well not happen, because there is not only no way of knowing whether it is happening, there is no possible way that it can matter to me. It cannot affect me, I cannot affect them. It’s just the way things work. It’s physics.
Them threatening things outside the bounds of what can affect me doesn’t matter at all—I have no way of determining their truthfulness one way or the other, nor has it any way to impact me, so it doesn’t matter if they’re telling the truth or not.
[quote]The issue isn’t helping human beings not fall prey to Pascal’s Mugging—they usually don’t. The issue is to figure out a way to program a solution, or (even better) to see that a solution arises naturally from other aspects of our system.[/quote]
The above three things are all reasonable ways of dealing with the problem. Assigning it a probability of 0 is what humans do, after all, when it all comes down to it, and if you spend time thinking about it, 2 is obviously something you have to build into the system anyway—someone walking up to you and saying something doesn’t really change the likelihood of very unlikely things happening. And having it just not care about things outside of what is causally linked to it, ever, is another reasonable approach, though it still would leave it vulnerable to other things if it was very dumb. But I think any system which is reasonably intelligent would deal with it as some combination of 1 and 2 - not believing them, and not trusting them, which are really quite similar and related.
You’re being too verbose, which makes me personally find discussion with you rather tiring, and you’re not addressing the actual points I’m making. Let me try to ask some more specific questions:
1) Below what point do you want us to treat a prior probability as effectively 0, such that it should never be updated upwards no matter what the evidence? E.g. one in a billion? One in a trillion? What’s the exact point, and can you justify it to me?
2) Why do you keep talking about things not being “causally linked”, when all of the examples of Pascal’s mugging given above do describe causal links? It’s not as if I said anything weird about acausal trade or some such; every example I gave describes normal causal links.
[quote]Assigning it a probability of 0 is what humans do, after all, when it all comes down to it[/quote]
Humans don’t tend to explicitly assign probabilities at all.
[quote]If someone walks up to you on the street and claims souls exist, does that change the probability that souls exist? No, it doesn’t.[/quote]
Actually, since people rarely bother to claim that things exist when they actually do (e.g. nobody is going around claiming “tables exist” or “the sun exists”), people claiming that souls exist is probably minor evidence against their existence.
I wouldn’t debate with someone who assigns a “probability of 0” to anything (especially as in “actual 0”), other than to link to any introduction to Bayesian probability. But your time is of course your own :-), and I’m biting the lure too often, myself.
[quote]people claiming souls exist are probably minor evidence against their existence[/quote]
Well, it points to the belief as being one which constantly needs to be reaffirmed, so it at least hints at some controversy regarding the belief (alternative: It could be in-group identity affirming). Whether you regard that as evidence in favor (evolution) or against (resurrection) the belief depends on how cynical you are about human group beliefs in general.