> You seem to hold the position that:
>
> Scientists and not philosophers should do meta-ethics and normative ethics, until AGIs can do it better, at which point we should leave it to them.
>
> I don’t believe that scientists have either the inclination or the competence to do what you ask of them, and, secondly, letting AGIs decide right and wrong would be a nightmare scenario for the human race.
> Scientists and not philosophers should do meta-ethics and normative ethics, until
Normative ethics—yes, because I gravitate towards ethical naturalism myself (see my discussion of scale-free ethics here), which is a part of the “package” of scientism.
A scientist (as a role, not a person!) “shouldn’t” do meta-ethics (that is, decide that ethical naturalism is the way to go), because the question of meta-ethics, i.e., the acceptance or rejection of fundamental philosophical stances such as scientism, idealism, or postmodernism, is outside the scope of science: it cannot be settled with the methods of science. Ultimately, every scientist must do at least a little bit of philosophy (of science), at which moment they assume the role of a philosopher. Scientism is a philosophy that maximises the scope of science and minimises the scope of philosophy as far as possible, but not to zero.
But regardless of who should or shouldn’t do meta-ethics, I claim that technical alignment is impossible with anything except naturalistic ethics. That is, to successfully align AI to anyone or anything at the technical level, one must adopt a naturalistic theory of ethics. This is because engineering success is defined in scientific terms: if you don’t treat ethics as a science (which is just what ethical naturalism means), you can’t say that you have technically succeeded at alignment.
From the practical point of view, attempting to align AI to haphazard “values” or an arbitrary “philosophical” theory of ethics, rather than to a coherent scientific theory of ethics, seems bonkers, too.
> AGIs can do it better, at which point we should leave it to them.
AGI can definitely do it much faster. And it seems to be the strategy of both OpenAI and Conjecture, and quite possibly of other AGI labs too, to first build AGIs and then task them with “solving alignment” rather than with recursive self-improvement. I don’t try to estimate whether this strategy is better or worse than others (at least in this post); I just take it as a premise, because it seems very unlikely to me at this point that the aforementioned AGI labs will change their strategies, or that human AI alignment researchers will “solve alignment” before AGI is built and begins trying to solve it itself.
So it’s not a question of whether we “should leave it to them” (at least I don’t raise this question here); it’s a belief that AGI labs will leave it to them.
> letting AGIs decide right and wrong would be a nightmare scenario for the human race.
In non-naturalistic meta-ethics—quite possibly. In naturalistic ethics, it’s no more “nightmarish” than letting AGI do science. Since it’s science, it’s assumed to be objective and checkable, including by humans. Even if human scientists cannot derive a naturalistic theory of ethics and converge on it in a short enough time themselves, this absolutely doesn’t mean that the hypothetical naturalistic theory of ethics that AGI derives will be impenetrably complex for humans. It may well be reasonably accessible.