Malevolent agents have a preference for harming you, and they probably have some form of intelligence, which means they can get better at harming you.
In practice, though, outside of an actual war, they usually don’t. Even when they’re not met with swift action, gangs and murderers and so on generally will not evolve into supergangs and mass murderers.
The fact that malevolent entities can take countermeasures against being thwarted, though, tends to decrease the marginal utility of an investment in trying to stop them. Say that you try to keep weapons out of the hands of criminals, but they switch to other means of acquiring them and end up only slightly less well armed on average. If you were instead faced with a nonsentient threat, one which caused as much harm on average but wouldn’t take countermeasures against your attempts to resist it, you’d likely get much better results by addressing that problem.
Of course, sometimes other thinking agents do pose a higher-priority threat, and the fact that they respond to signalling and game-theoretic incentives can tip the scales in favor of addressing them over other threats. But that doesn’t mean we evaluate those factors in anything close to a rational manner.