You won. Aren’t rationalists supposed to be doing that?
As far as you can tell, your probability estimate for “you will win the lottery” was wrong. How that should update the probability of “you would win the lottery if you played next week” is a separate question, but whatever made you buy that ticket in the first place (even though the “rational” estimates voted against it — “trying random things”, or whatever it was) should be applied more often in the future.
Of course, the result is quite likely to be “learning lots of nonsense from a measurement error”, but you should definitely update after seeing it, and a decision procedure that, upon updating, causes that same decision to be made more often in the future is surely a right one.
If I won the lottery, I would definitely spend $5 on another ticket. And eventually you might realize it’s just Omega having fun. (Actually, isn’t one-boxing the same question?)