Looks like strategic thinking to me. If you organize yourself to be prone to Pascal's mugging, you will get Pascal-mugged; it is therefore irrational to organize yourself to be Pascal-muggable.
edit:
It is as rational to introduce certain bounds on the application of one's own reasoning as it is to build reliable, non-crashing software, or to impose simple rule-of-thumb limits on the output of the software that positions the control rods in a nuclear reactor.
If you properly account for a tiny probability of a mistake in your reasoning, a mistake that may introduce a number generated by a random string (most such numbers are extremely huge), and apply some meta-cognition to the appearance of such numbers, you will find that extremely huge numbers are disproportionately represented among the products of reasoning errors.
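A toy Monte Carlo sketch of that claim (the per-step error rate, the chain length, the digit ranges, and the "huge" threshold are all numbers I assumed for illustration):

```python
import random

STEPS = 20        # assumed length of the reasoning chain
P_ERR = 0.01      # assumed probability of a slip per inference step
HUGE = 10 ** 9    # assumed threshold for "extremely huge"

def run_chain():
    """Simulate one chain of reasoning; return (number produced, error occurred)."""
    for _ in range(STEPS):
        if random.random() < P_ERR:
            # A slip substitutes a number read off a random digit string;
            # most random digit strings encode very large numbers.
            digits = random.randint(1, 30)
            return random.randint(10 ** (digits - 1), 10 ** digits - 1), True
    # No slip: a sound conclusion, modeled as log-uniform over mundane scales.
    return int(10 ** random.uniform(0, 12)), False

runs = [run_chain() for _ in range(100_000)]
p_err = sum(err for _, err in runs) / len(runs)
huge_flags = [err for value, err in runs if value > HUGE]
print(f"P(error) = {p_err:.2f}, "
      f"P(error | huge number) = {sum(huge_flags) / len(huge_flags):.2f}")
# Under these assumptions an error is roughly twice as likely among the
# conclusions that feature huge numbers as among conclusions in general.
```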
As regards the wager, here is my answer:
If you see someone bend over backwards to make a nickel, it is probably not Warren Buffett you are seeing. Indeed, the probability that a person bending over backwards for a nickel has $N to their name falls off sharply as N increases. Here you see a being that is mugging you, and it allegedly has the power to simulate 3^^^^3 beings that it can mug, have sexual relations with, torture, whatever. The larger the claim, the less probable it is that the situation is honest.
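One rough way to quantify that fall-off (the power-law prior below is purely my assumption, chosen to make the point concrete): if the probability that a claimant can actually deliver a payoff of size N is bounded by c / N^alpha with alpha > 1, then taking the claim at face value has an expected payoff that shrinks as the claim inflates:

```python
def expected_payoff(claimed_n: float, c: float = 1.0, alpha: float = 1.5) -> float:
    """Expected payoff of taking a claim of size N at face value, assuming
    P(claimant can deliver N) <= c / N**alpha with alpha > 1 (an assumption)."""
    p_honest = min(1.0, c / claimed_n ** alpha)
    return claimed_n * p_honest

for n in (10.0, 1e6, 1e12, 1e100):
    print(f"N = {n:.0e}: expected payoff <= {expected_payoff(n):.3e}")
# The bound *decreases* as N grows: the bigger the mugger's number, the
# weaker the case for paying. With alpha <= 1 this would reverse, so the
# whole argument turns on how fast credence falls with claim size.
```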
It is, however, exceedingly difficult to formalize such an answer or to arrive at it in a formal fashion. And there could exist other wagers that are beyond my capability to reason correctly about.
For this reason, as a matter of policy, I assume a nonzero error probability at each inference step (the kind of error that can introduce an extremely huge number) and impose an upper cutoff on the numbers I will use in my considerations, as an optimization strategy; if a huge number of this sort appears, more verification steps are needed. This has a particularly strong impact on my morality. Situations where you kill fewer people to save more people are extremely uncommon and difficult to contrive in reality, yet the appearance of such a situation can easily result from faulty reasoning.
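A minimal sketch of what such a policy could look like in code; the cutoff value, the per-step error rate, and the function name are hypothetical choices of mine, not anything canonical:

```python
PER_STEP_ERROR = 0.01   # assumed probability of a reasoning slip per inference step
CUTOFF = 10 ** 9        # assumed upper cutoff for numbers admitted into a decision

def usable_in_decision(value: float, inference_steps: int) -> bool:
    """Policy sketch: act on a number only if it sits under the cutoff;
    past the cutoff, flag it for more verification instead of acting."""
    p_sound = (1 - PER_STEP_ERROR) ** inference_steps
    if value <= CUTOFF:
        return True
    print(f"value {value:.3e} exceeds cutoff; P(chain sound) = {p_sound:.2f}; "
          "run more verification steps before using this number")
    return False

print(usable_in_decision(4e4, inference_steps=5))    # True: a mundane number
print(usable_in_decision(3e18, inference_steps=40))  # False: huge, needs checking
```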
This.
The further a chain of reasoning reaches, the more likely it is to be wrong.
Any step could be insufficiently accurate, fail to account for unknown effects in unusual situations, or rely on things we have no means of knowing.
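A quick arithmetic illustration of that decay (the 1% per-step error rate is an assumed figure):

```python
# If each inference step is sound with probability 0.99, a chain of n steps
# is sound throughout with probability 0.99**n, which falls off fast.
for n in (5, 20, 50, 200):
    print(f"{n:>3} steps: whole chain sound with probability {0.99 ** n:.2f}")
# 5 -> 0.95, 20 -> 0.82, 50 -> 0.61, 200 -> 0.13
```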
Typical signs that it is drifting too far from reality:
- Numbers way outside the usual ranges. Errors and imagination produce these easily; reality does not.
- Making oneself pivotal to the known world. One is central to one's own map, not to reality.
- An extremely small cause having a catastrophic effect. If so, why has it not already happened? Also: it panders to our taste for stories.
- Vastly changing some portions of the reasoning seems just as valid. The reasoning is rooted in itself, not in reality.
Pascal’s mugging lights up every one of these, and it certainly reaches far.