(This comment is written in the ChatGPT style because I’ve spent so much time talking to language models.)
Calculating the probabilities
The calculation of the probabilities consists of the following steps:
The epistemic split
Either we guessed the digit of π correctly (probability 10%, branch 1), or we didn't (probability 90%, branch 2).
The computational split
On branch 1, all of our measure survives (branch 1-1) and none dies (branch 1-2); on branch 2, 1/128 of our measure survives (branch 2-1) and 127/128 dies (branch 2-2).
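Spelled out, the unconditional measures are: 10% × 1 = 10% surviving on branch 1-1, 90% × 1/128 ≈ 0.7% surviving on branch 2-1, and 90% × 127/128 ≈ 89.3% dead on branch 2-2.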
Putting it all together
Conditional on us subjectively surviving (which QI guarantees), the probability we guessed the digit of π correctly is
P = (10% × 100%) / (10% × 100% + 90% × (1/128) × 100%) ≈ 93.4%
Prior to observing our survival, the probability of us having guessed the digit of π correctly is, of course, just 10%.
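A quick numeric check of that formula (a minimal Python sketch; the variable names are mine, the numbers come from the setup above):

```python
# Posterior probability of a correct guess, conditional on survival.
p_correct = 0.10              # prior: 10% chance of guessing the digit
survive_if_correct = 1.0      # branch 1: all measure survives
survive_if_wrong = 1 / 128    # branch 2: 1/128 of the measure survives

p_posterior = (p_correct * survive_if_correct) / (
    p_correct * survive_if_correct + (1 - p_correct) * survive_if_wrong
)
print(p_posterior)  # ≈ 0.9343
```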
Verifying them empirically
For the probabilities to be meaningful, they need to be verifiable empirically in some way.
Let's first verify that prior to us surviving, the probability of guessing the digit correctly is 10%. We'll run n experiments, guessing a digit each time and instantly verifying it. We'll find that we're successful, indeed, just 10% of the time.
Now let's verify that conditional on us surviving, the probability that we guessed correctly is ≈93.4%. We perform the experiment n times again, and this time, whenever we survive, other people check whether the guess was correct. They will observe that, indeed, we guessed correctly ≈93.4% of the time.
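For concreteness, here is a minimal Monte Carlo sketch of both verifications (assuming the setup above: a correct guess always survives, an incorrect one survives with probability 1/128):

```python
import random

N = 1_000_000
correct_total = 0          # trials where the guess was correct
survived = 0               # trials where we survived
correct_and_survived = 0   # surviving trials where the guess was correct

for _ in range(N):
    correct = random.random() < 0.10          # epistemic split: 10% correct
    if correct:
        alive = True                          # branch 1: all measure survives
    else:
        alive = random.random() < 1 / 128     # branch 2: 1/128 survives
    correct_total += correct
    if alive:
        survived += 1
        correct_and_survived += correct

print(f"P(correct)            ≈ {correct_total / N:.3f}")               # ≈ 0.100
print(f"P(correct | survived) ≈ {correct_and_survived / survived:.3f}") # ≈ 0.934
```

The first number reproduces the prior, the second the posterior; the "jump" is just conditioning on survival.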
Conclusion
We arrived at the conclusion that the probability jumps at the moment of our awakening. That might sound incredibly counterintuitive, but since it’s verifiable empirically, we have no choice but to accept it.
Thanks. By the way, the “chatification” of the mind is a real problem. It’s an example of reverse alignment: humans are more alignable than AI (we are gullible), so during interactions with AI, human goals will drift more quickly than AI goals. In the end, we get perfect alignment: humans will want paperclips.