While having lower intelligence, humans may have greater authority. And AIs' terminal goals should be about assisting specifically humans, too.
Ideally, sure, except that I don’t know of a way to make “assist humans” a safe goal. So I’m advocating for a variant of “treat humans as you would want to be treated”, which I think can be trained.