OK, what about the case where there’s a CEV theory which can extrapolate the volition of all humans, or a subset of them? It’s not suicide for you to tell the AI “coherently extrapolate my volition/the shareholders’ volition”. But it might be hell for the people whose interests aren’t taken into account.
At that point, that particular company wouldn’t be able to build the AI any faster than other companies, so it’s just a matter of getting an FAI out there first and having it optimize rapidly enough that it could destroy any UFAI that comes along after.