There are a number of ways to deal with this, but the succinct realization is that Pascal’s mugging isn’t something you do to yourself. Another player is telling you the expected reward, and that player has made it arbitrarily large.
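To spell the manipulation out (the notation here is mine, added for illustration): if you assign credence p(R) to a claimed payoff of size R, the expected value is p(R) · R, and the mugger wins whenever your credence shrinks more slowly than the claim grows. If p(R) falls off slower than 1/R, they can push p(R) · R as high as they like just by inflating R.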
Therefore this assessment of future reward is potentially hostile misinformation: it has been manipulated. For better or for worse, the way contemporary institutions typically deal with this problem is to assign any untrustworthy information a probability of exactly zero. None at all. The trouble is that those same institutions use “has a degree in the field / peer acclaim” as the test for who might have something trustworthy to say, so “has analyzed the raw data with correct math but is some random joe” gets weighted into that zero case.
This is where we end up with all kinds of failures, and it’s one of the many problems we need a form of AI to solve.
But yes, you have hit on a way to filter potentially untrustworthy information without just throwing it out. In essence: you currently hold a belief with some confidence. Someone presents potentially untrustworthy information that differs from that belief. Your confidence in the new information should decrease faster than the gap between it and your present belief grows.
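Here’s a minimal sketch of that rule in Python. The exponential trust function and the names (discounted_update, k) are my own illustration, not a standard algorithm; the only property the rule above demands is that trust fall off faster than the divergence grows, which the exponential satisfies.

```python
import math

def discounted_update(belief: float, claim: float, k: float = 1.0) -> float:
    """Merge a potentially untrustworthy claim into a current belief,
    trusting it less the further it sits from what we already believe."""
    divergence = abs(claim - belief)
    # exp(-k*d) shrinks faster than d grows, which is the property the
    # rule asks for; any trust function with that property would do.
    trust = math.exp(-k * divergence)
    return belief + trust * (claim - belief)

# A mugger inflating the claim gains almost nothing: the shift
# trust * divergence = d * exp(-k*d) peaks at d = 1/k and then falls
# back toward zero, so the update stays bounded no matter how large
# the claim is made.
for claim in (1.0, 10.0, 100.0, 1e6):
    print(claim, discounted_update(belief=0.0, claim=claim))
```

The design point is in that last comment: because d · e^(-kd) is bounded, no choice of claimed reward, however inflated, can drag your belief more than a fixed distance, which is exactly the immunity to the mugging you want.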