As others have pointed out, malevolent agents can be deterred by signals such as revenge.
Malevolent agents have a preference for harming you. Malevolent agents probably have some form of intelligence, so that they can get better at harming you.
If you’re doing a real calculation, it’s marginal future harm reduction minus response cost with some time discount function. Obviously, there’s no guarantee that you should choose to respond to the malevolent agent threat over the uncaring universe threat. The factors indicated are all of the “all other things being equal” sort.
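As a rough illustration of what that calculation looks like, here is a minimal sketch under exponential time discounting; every name and number in it is an illustrative assumption, not anything from the thread:

```python
# Sketch: discounted marginal future harm reduction minus response cost.
# All names and numbers are illustrative assumptions.

def net_value_of_response(harm_reduction_per_period, response_cost, discount=0.95):
    """Discounted marginal future harm reduction minus the cost of responding."""
    discounted_benefit = sum(
        reduction * discount ** t
        for t, reduction in enumerate(harm_reduction_per_period)
    )
    return discounted_benefit - response_cost

# Responding to the malevolent agent: a larger but short-lived benefit
# (the agent is dealt with today, and the effect fades).
agent_option = net_value_of_response([5] * 10, response_cost=20)

# Fighting the uncaring universe: a smaller benefit that keeps paying out
# (here, over 50 periods).
universe_option = net_value_of_response([2] * 50, response_cost=20)

print(agent_option, universe_option)  # respond to whichever is larger, if either is positive
```

With these made-up numbers either option can come out ahead, which is the point: the comparison turns entirely on the magnitudes and the discount function, not on which threat feels more urgent.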
I’ll give you factors in favor of fighting the uncaring universe—those threats won’t be signaled away, and likely have more universal application in time and space. Fighting malevolent agents takes care of this agent today. There will be more tomorrow. Overcoming the inconveniences of gravity pays dividends forever. Hail Science!
The thought occurred to me while watching Sherlock (as kindly recommended by others here). If Sherlock and Moriarty are so “bored” with the challenges presented by their simian neighbors, why don’t they fight Death or engage in some other science project to make themselves useful? If they’re such smarty boys, why don’t they take on the Universe instead of slightly evolved primates?
“Malevolent agents have a preference for harming you. Malevolent agents probably have some form of intelligence, so that they can get better at harming you.”
In practice, though, outside of an actual war, they usually don’t. Even when they aren’t met with a swift response, gangs and murderers and so on generally do not evolve into supergangs and mass murderers.
The fact that malevolent entities can take countermeasures against being thwarted, though, tends to decrease the marginal utility of investing in stopping them. Say you try to keep weapons out of the hands of criminals, but they switch to other means of obtaining them and end up only slightly less well armed on average. If you faced another threat, a nonsentient one that caused as much harm on average but took no countermeasures against your attempts to resist it, you would likely get much better results by addressing that problem instead.
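To make the marginal-utility point concrete, here is a toy comparison with invented numbers (none of them measured or from the thread):

```python
# Toy comparison: the same investment against an adapting threat vs. a non-adapting one.
# Every number here is an illustrative assumption.

baseline_harm = 100            # expected harm per year from either threat, before intervening
nominal_effect = 0.40          # fraction of harm the intervention removes if nothing adapts
countermeasure_fraction = 0.8  # share of that effect an adapting adversary claws back

adaptive_reduction = baseline_harm * nominal_effect * (1 - countermeasure_fraction)  # 8 units
passive_reduction = baseline_harm * nominal_effect                                   # 40 units

print(adaptive_reduction, passive_reduction)  # the same effort buys 5x more harm reduction
```

The point is not the particular numbers but that an adversary's countermeasures act as a multiplier well below one on whatever you spend, while the nonsentient threat leaves your investment's full effect intact.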
Of course, sometimes other thinking agents do pose a higher-priority threat, and the fact that they respond to signaling and game-theoretic incentives can tip the scales in favor of addressing them over other threats, but that doesn’t mean we evaluate those factors in anything close to a rational manner.