The information security term is “limiting your attack surface”. In circumstances where you expect other bots to be friendly, you might be more open to unusual or strange inputs and compacts that are harder to check for exploits but seem net positive on the surface. In circumstances where you expect bots to be less friendly, you might limit your dealings to very simple, popular, and transparently safe interactions, and reject some potential deals that appear net-positive but are harder to verify. In picking a stance you have to trade off capitalizing on genuinely good but novel/hard-to-model/dangerous trades and interactions against being open to exploits, and the human brain has a simple (though obviously not perfect) model for assessing the circumstances to see which stance is appropriate.
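To make the tradeoff concrete, here is a toy sketch (my own illustration, not from the comment, with all names and numbers hypothetical): an agent facing a hard-to-verify deal weighs its apparent value against the chance the counterparty is adversarial, and the same deal flips from accept to reject as that prior rises.

```python
def accept_deal(surface_value, verification_cost, p_adversarial):
    """Toy decision rule (hypothetical): accept a hard-to-verify deal whose
    apparent value is `surface_value`, where checking it costs
    `verification_cost` and `p_adversarial` is the prior that the
    counterparty is exploiting us."""
    exploit_loss = 10.0  # assumed downside if the deal turns out to be an exploit
    # Expected value: upside if the counterparty is honest, downside if not.
    expected_value = (1 - p_adversarial) * surface_value - p_adversarial * exploit_loss
    # Only worth taking when the expected value clears the cost of checking it.
    return expected_value > verification_cost

# In a friendly environment the deal is worth taking...
print(accept_deal(surface_value=3.0, verification_cost=1.0, p_adversarial=0.05))
# ...but with the same surface value, a hostile environment flips the stance.
print(accept_deal(surface_value=3.0, verification_cost=1.0, p_adversarial=0.5))
```

The point the toy model makes is that nothing about the deal itself changes between the two calls; only the assessed circumstances do, which is why picking the right stance matters more than evaluating each offer in isolation.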
I think part of why we are so resistant to accepting the validity of Pascal’s muggings is that people see it as inappropriate to be so open to such a novel trade with complete strangers, cultists, or ideologues (labeled the ‘mugger’) who might not have our best interests in mind. But this doesn’t have anything to do with low-probability, extremely negative events being “ignorable”. If you change the scenario so that the ‘mugger’ is instead just a force of nature, unlikely to have landed on a glitch in your risk-assessment cognition by chance, then it becomes a lot more ambiguous what you should actually do. Other people here seem to take the lesson of Pascal’s mugging as a reason against hedging against large negatives in general, to their own peril, which doesn’t seem correct to me.