I get the feeling that I missed a lot of prediscussion to this topic. I am new here and new to these types of discussions, so if I am way off target please nudge me in the right direction. :)
If the odds of winning a lottery are almost nil, they are not nil. As such, the chance that a lottery winner exists increases over time with each ticket purchased. (The assumption here is that “winner” simply means “holding the right ticket”.)
Furthermore, it seems like the concept of the QTI is only useful if you already know the probability of it being true /and/ find it helpful to consider yourself in the other variations as an extension of your personal identity. Otherwise, you are just killing yourself to prove a point to someone else.
But I really do not understand this:
“If the hypothesis ‘this world is a holodeck’ is normatively assigned a calibrated confidence well above 10^(-8), the lottery winner now has incommunicable good reason to believe they are in a holodeck.”
Why are the probabilities of the world being a holodeck tied to the probability of guessing a number correctly? It seems like this is the same reasoning that leads people to believe in Jesus just because his face showed up on their potato chip. It just sounds like a teleological argument with a different target. Or was that the point and I missed it?
PS) Is it better to post once with three topics, or three times with one topic each?
I interpreted the last statement as follows:
IF you assign a probability higher than 10^(-8) to the hypothesis that you are in a holodeck
AND you win the lottery (which had a probability of 10^(-8) or thereabouts)
THEN you have good reason to believe you’re in a holodeck, because you’ve had such improbable good fortune.
Correct me if I’m wrong on this.
Strictly speaking you need to know the probability that you’ll win the lottery given that you’re on the holodeck to complete the calculation.
The person controlling the holodeck (who presumably designed the simulation) needs to know the probability. But the person being simulated, who experiences winning the lottery, does not need to know anything about the inner workings of his (simulated) world. For the experience to seem real enough, it’d even be best not to know every detail of what’s going on.
I mean that if we’re to know the evidential weight of winning the lottery to the theory that we’re on the holodeck, we need to know P(L|H), so that we can calculate P(H|L) = P(L|H)P(H)/(P(L|H)P(H) + P(L|¬H)P(¬H)).
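To make that concrete, here’s a quick sketch of the calculation in Python. The 10^(-4) prior and the assumption that the holodeck scripts a win for its occupant with probability 0.1 are made-up numbers, purely for illustration:

```python
# Illustrative Bayes update: how much does winning the lottery (L)
# shift belief in the holodeck hypothesis (H)?
# All numbers below are made up for the sake of the example.

p_H = 1e-4              # prior for "this world is a holodeck" (well above 10^-8)
p_L_given_H = 0.1       # assumed chance the holodeck hands its occupant a lottery win
p_L_given_not_H = 1e-8  # ordinary odds of winning the lottery

# P(H|L) = P(L|H)P(H) / (P(L|H)P(H) + P(L|~H)P(~H))
numerator = p_L_given_H * p_H
p_H_given_L = numerator / (numerator + p_L_given_not_H * (1 - p_H))

print(p_H_given_L)  # ~0.999: under these numbers, winning makes H nearly certain
```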
I get your point now. But all we need to know is whether P(L|H) > P(L|~H)*.
If this is the case, then an extremely unlikely event L (P(L|~H) → 0) happening to you evidently increases the chance that you’re in a holodeck simulation. In the formula, P(H|L) goes to (almost) 1 as P(L|~H) approaches zero. The unlikelier the event (amazons on unicorns descending from the heavens to take you to the land of bread and honey), i.e. the larger the ratio of P(L|H) to P(L|~H), the larger the probability that you’re experiencing a simulation.
This is true as long as P(L|H) > P(L|~H). If L is a mundane event, P(L|H) = P(L|~H) and the formula reduces to P(H|L) = P(H). If L is so supremely banal that P(L|~H) > P(L|H), the occurrence of L actually decreases the chance that you’re experiencing a holodeck simulation.
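A quick numerical sketch of those three cases, again with made-up likelihoods, just to show which way the posterior moves relative to the prior:

```python
def posterior(p_H, p_L_given_H, p_L_given_not_H):
    """P(H|L) computed with the formula quoted above."""
    num = p_L_given_H * p_H
    return num / (num + p_L_given_not_H * (1 - p_H))

p_H = 1e-4  # same illustrative prior as before

# P(L|H) > P(L|~H): wildly unlikely event -> posterior rises far above the prior
print(posterior(p_H, 0.1, 1e-8))  # ~0.999

# P(L|H) = P(L|~H): mundane event -> posterior equals the prior
print(posterior(p_H, 0.5, 0.5))   # 1e-4

# P(L|H) < P(L|~H): supremely banal event -> posterior drops below the prior
print(posterior(p_H, 0.1, 0.5))   # ~2e-5
```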
Indeed, I believe that was the point of the original post.
The core assumption remains, of course, that you’re more likely to win the lottery when you’re experiencing a holodeck simulation than in the real world (P(L|H) > P(L|~H)).
I’m not well-versed in Bayesian reasoning, so correct me if I’m wrong. Your posts have definitely helped to clarify my thoughts.
*I don’t know how to type the “not”-sign, so I’ll use a tilde.