Yes. The error is that humans aren’t good at utilitarianism.
Why would that be an error? It’s not a requirement for an ethical theory that Homo sapiens must be good at it. If we notice that humans are bad at it, maybe we should make AI or posthumans that are better at it, if we truly view this as the best ethical theory. Besides, if the outcome of people following utilitarianism is really that bad, then utilitarianism would demand (it gets meta now) that people should follow some other theory that overall has better outcomes (see also Parfit’s Reasons and Persons). Another solution is Hare’s proposed “Two-Level Utilitarianism”.
From Wikipedia:
Hare proposed that on a day to day basis, one should think and act like a rule utilitarian and follow a set of intuitive prima facie rules, in order to avoid human error and bias influencing one’s decision-making, and thus avoiding the problems that affected act utilitarianism.
The error is that it’s humans who are attempting to implement utilitarianism. I’m not talking about hypothetical non-human intelligences, and I don’t think they were implied in the context.
See also Ends Don’t Justify Means (Among Humans): having non-consequentialist rules (e.g. “Thou shalt not murder, even if it seems like a good idea”) can be consequentially desirable since we’re not capable of being ideal consequentialists.
Oh, indeed. But when you’ve repeatedly emphasised “shut up and multiply”, tacking “btw don’t do anything weird” on the end strikes me as likely to go unheeded by your readers, particularly when they most need to heed it.
I don’t think hypothetical superhumans would be dramatically different in their ability to employ predictive models under uncertainty. If you increase power so that it stands to mankind as mankind stands to an amoeba, you only double anything that is fundamentally logarithmic. While in many important cases there are faster approximations, it’s magical thinking to expect them everywhere; and there are problems where the errors inherently grow exponentially with time even if the model is magically perfect (butterfly effect). Plus, of course, models of other intelligences rapidly get unethical as you try to improve their fidelity (if a model is emulating people and putting them through torture and dust-speck experiences to compare values).
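(To illustrate the butterfly-effect point, here is a minimal sketch, not from the original comment: the logistic map with r = 4 is chaotic, and two trajectories starting 1e-12 apart diverge to order-1 differences within a few dozen steps, no matter how exact the model itself is.)

```python
# Illustrative sketch: exponential error growth in a chaotic system.
# Logistic map x -> r*x*(1-x) with r = 4 (chaotic regime).

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a, x_b = 0.3, 0.3 + 1e-12  # tiny difference in initial conditions
for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.3e}")
```

Even with a perfect model of the dynamics, the initial measurement error compounds roughly exponentially, so extra raw computing power buys surprisingly little extra forecasting horizon.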