I’d say AI ruin relies only on consequentialism, not utilitarianism. What consequentialism means is that you have a utility function, and you’re trying to maximize the expected value of your utility function. There are coherence theorems (the von Neumann–Morgenstern theorem, Dutch book arguments) to the effect that if you don’t behave as though you are maximizing the expected value of some particular utility function, then you are being stupid in some way: for instance, accepting a series of bets that is guaranteed to lose you money. Utilitarianism is a particular case of consequentialism where your utility function is equal to the average happiness of everyone in the world. “The greatest good for the greatest number.” Utilitarianism is not relevant to AI ruin, because without solving alignment first, the AI is not going to care about “goodness”.
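To make the distinction concrete, here is a minimal formal sketch in my own notation (not anything from the original argument): a consequentialist agent with some utility function U over outcomes picks the action that maximizes expected utility,

\[
EU(a) = \sum_{o} P(o \mid a)\, U(o), \qquad a^{*} = \arg\max_{a} EU(a).
\]

Utilitarianism just pins down one particular choice of U, something like

\[
U_{\text{utilitarian}}(o) = \frac{1}{N} \sum_{i=1}^{N} \text{happiness}_i(o),
\]

averaging over the N people in the world. The coherence theorems only push an agent toward acting as if it has *some* U; they say nothing about it being this one.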
The von Neumann probes aren’t important to the AI ruin picture either: humanity would be doomed, probes or no probes. The probes are just a grim reminder that screwing up AI won’t only kill all humans; it will also kill all the aliens unlucky enough to be living too close to us.