Quite apart from how this argument applies to AI, the example of a gun shop/manufacturer is a poor one. One reason is that internalizing the negative externalities of selling a gun without internalizing the positive externalities* (which is never done in practice and would be very difficult to do) creates an asymmetry that biases the price of firearms above what it would be under a rational accounting.
(*) Positive externalities of manufacturing and selling a gun include a deterrent effect on crime (“I would rather not try to rob that store, the clerk might be armed”), direct prevention of crime by armed citizens (https://en.wikipedia.org/wiki/Defensive_gun_use), and the very strong positive effects of an armed population in extreme (rare) scenarios such as foreign invasion or the government turning tyrannical. I suspect you wouldn’t want to reward firearms manufacturers for all these positive outcomes (or at least it would be difficult, since these effects are very hard to quantify).
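To make the asymmetry concrete, here is a minimal sketch (the symbols $c$, $E^-$, $E^+$ are mine for illustration, not drawn from any particular model): let $c$ be the marginal cost of producing and selling a gun, $E^-$ the expected negative externality per gun sold, and $E^+$ the expected positive externality. A policy that internalizes only the harms pushes the price toward

$$p_{\text{harms-only}} = c + E^- \;>\; c + E^- - E^+ = p_{\text{optimal}},$$

so whenever $E^+ > 0$, the good is priced above its socially optimal level and sold in a quantity below it, and the bias grows with exactly the benefits that are hardest to quantify.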
There are also a posteriori reasons: numerous studies reject a causal link between the number of firearms and homicides, for example. If those studies are right, firearm manufacturers do not cause additional deaths, and it would be wrong to internalize only the negative costs.
That’s not true. It is not better, because providing appropriate incentives is very likely impossible in this case, e.g.:
- due to political reasons (many people have an irrational fear of guns and will oppose any effort to incentivize gun purchases while supporting efforts to disincentivize them);
- due to the ease of gaming a reward system for preventing crime (the cobra effect), not to mention the likely high cost of verifying every case in which a crime was prevented;
- due to the positive outcomes of gun ownership being inherently hard to quantify, so in practice they will not be quantified and will not be taken into account (the McNamara fallacy).