If, to give it its own motivation, an ASI is built from the start as a human hybrid, we better all hope they pick the right human for the job!
Right.
Basically, however one slices it, I think that the idea that superintelligent entities will subordinate their interests, values, and goals to those of unmodified humans is completely unrealistic (and trying to force it is probably quite unethical, in addition to being unrealistic).
So what we need is for superintelligent entities to adequately take the interests of “lesser beings” into account.
So we actually need them to have much stronger ethics than typical human ethics (our track record of taking the interests of “lesser beings” into account is really bad; if superintelligent entities end up with ethics as defective as typical human ethics, things will not go well for us).
Yes, I sure hope ASI ends up with stronger ethics than humans have! In the meantime, it’d be nice if we could figure out how to raise human ethics as well.