To make sure I understand this correctly, is one example of this essentially a Pascal’s Mugging collection box? The box has a note on it presenting a Pascal’s Mugging to anyone who reads it, but since it’s an inanimate box, you can’t actually interact with it or ask any clarifying questions.
I interpreted this to mean a situation where the mugging isn’t being offered by an agent, but instead is simply a fact about the universe. For example, if you’re the first person to think of cryonics*, then precommitments don’t matter. Either the universe is such that cryonics will work, or it is not, and game theory doesn’t enter into it.
*Assume for the sake of the example that cryonics has an infinitesimal chance of working and that the payoff of revival is nigh-uncountably huge. (I believe neither of these things.)
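(To make the footnote’s arithmetic concrete, here is a minimal sketch in Python with made-up numbers; the probability, payoff, and cost below are placeholders of mine, not claims about actual cryonics.)

```python
# Minimal sketch of the footnote's arithmetic, with made-up numbers
# (not claims about actual cryonics odds or payoffs).
from fractions import Fraction

p_works = Fraction(1, 10**30)   # "infinitesimal" chance it works
payoff = Fraction(10**40)       # "nigh-uncountably huge" utility if it does
cost = Fraction(10**3)          # utility spent on signing up

expected_value = p_works * payoff - cost   # 10**10 - 10**3 > 0
print(expected_value > 0)  # True: the huge payoff swamps the tiny probability,
                           # which is what makes the structure troubling
```

(Exact fractions are used only so the tiny probability doesn’t underflow to zero in floating point.)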
Thank you for the clarification. I’m still confused about something, and to explain where I was getting stuck: I think it was the decider’s prediction of how the Mugger (or the comparable fact of the universe) would respond to the question “Can you show me more evidence that you are a Matrix Lord (or a fact of the universe set up with comparable probability and utility)?”
For instance, the Mugger might say:
A: “Sure, let me open a fiery portal in the sky.”
B: “Let me call the next several coinflips you toss.”
C: “No, you’ll just have to judge based on the current evidence.”
D: “You were correct to question me, this was actually a scam.”
E: “You question me? The offer is now invalidated and/or I have killed those people.”
F: “You can’t investigate this right now because your evidence gathering abilities are too low, but you could use these techniques to increase your maximum evidence gathering abilities. With sufficient repeated application, you would be able to investigate the original problem.”
On the other hand, a fact of the universe may be such that:
A: Further investigation leads to sudden dramatic shifts such as fiery portals.
B: Further investigation leads to more evidence that it’s right, but nothing dramatic.
C: Further investigation leads nowhere new. You’ll have to decide on current evidence.
D: Further investigation shows worrying about this was a waste of time.
E: Further investigation causes you to lose the opportunity: it was time-sensitive.
F: Further investigation leads you to better investigative techniques, but you still can’t actually investigate the original problem. Perhaps you should try again?
And I was thinking, “If it’s impersonal and simple, such as the box, you may be stuck with C. But Foolish, Sadistic, or Testing Lords may give you anything from A to F.” (A Testing Lord in particular seems likely to give you scenario F.)
However, from your reply and those of Stuart_Armstrong and ArisKatsaris, this is not actually the area that is currently of concern. I’m still somewhat confused, though, about which of positions A through F I should be taking: whether the distinction is simply irrelevant to the problem and all of them would be handled the same way, or whether some or each of them represents an entirely separate scenario that should be handled on its own.
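(Here is a minimal sketch, in Python with invented numbers, of one way the responses A through F could all be folded into a single “should I ask for more evidence?” calculation; the probabilities and utility shifts are placeholders of mine, not a decision procedure anyone in the thread has proposed.)

```python
# Minimal sketch (my framing, not anything proposed in the thread) of treating
# the responses A-F uniformly: "ask for more evidence" is an action whose
# outcome is one of the scenarios, each with an invented probability and an
# invented shift in expected utility relative to deciding right now.

scenarios = {
    "A: dramatic confirmation (fiery portal)": (0.001, +100.0),
    "B: modest confirmation (coin flips)":     (0.010,  +10.0),
    "C: no new evidence":                      (0.700,    0.0),
    "D: revealed as a scam":                   (0.200,   +5.0),
    "E: offer withdrawn / people killed":      (0.080,  -20.0),
    "F: only better tools, problem untouched": (0.009,   +1.0),
}

assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

value_of_asking = sum(p * shift for p, shift in scenarios.values())
print(f"Expected value of asking: {value_of_asking:+.3f}")
print("ask first" if value_of_asking > 0 else "decide on current evidence")
```

(On this toy framing the cases wouldn’t need separate handling, since they only change the numbers fed into the same calculation; whether that framing is adequate here is a separate question.)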
Maybe material for a further post...
No, it really just doesn’t have to be a statement that someone else provides at all. From the perspective of a pure Bayesian agent, Bob telling you “I’m a Matrix Lord” is merely evidence that updates (not necessarily in a positive direction) the probability of the pre-existing hypothesis “Bob is a Matrix Lord”.
And Bob telling you “If you build a temple to worship this rock, 3^^^3 lives will find happiness” is merely Bayesian evidence to update the probability of the pre-existing hypothesis “If I build a temple to worship this rock, 3^^^3 lives will find happiness”: a hypothesis that a mind can construct all by itself; it doesn’t need another mind to construct it for it.
The problem is the probability you assign to the hypothesis, not that someone else provided you with the hypothesis. Such explicit statements made by others are barely significant at all; as evidence they’re probably near worthless. If I wanted to find potential Matrix Lords, I’d probably have better luck focusing on the people who fart the least or have had the fewest cases of diarrhea, rather than the people who say “I’m a Matrix Lord.” :-)
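(A minimal sketch of the kind of update being described, with invented numbers; the prior and likelihoods are placeholders. The point is that the statement barely moves a hypothesis whose prior is already astronomically small.)

```python
# Minimal sketch of the Bayesian update described above, with invented numbers.
# H = "Bob is a Matrix Lord"; E = "Bob says 'I'm a Matrix Lord'".

prior = 1e-20            # hypothetical, astronomically small prior P(H)
p_e_given_h = 2e-6       # assumed: Matrix Lords only rarely announce it
p_e_given_not_h = 1e-6   # assumed: ordinary people occasionally say it too

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"prior     = {prior:.1e}")      # 1.0e-20
print(f"posterior = {posterior:.1e}")  # ~2.0e-20
# With these numbers the statement roughly doubles the probability, but the
# posterior stays astronomically small; the hypothesis and its prior were
# already there before Bob said anything.
```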
Another example would be a new theory of physics, perhaps one that would allow the creation of, or access to, parallel worlds, where you had the opportunity to contribute to the development of that theory.