I suspect that in the quoted passage above, EY is saying that even a moral realist can still believe in the possibility of a highly intelligent species, AI, or other agent that does not share or respect their moral values. Nothing in the quoted passage suggests (to me) that EY is arguing for, or believes in, strong moral realism.
He may be trying to broaden the reach of his arguments about the danger of unfriendly AGI by showing that the concern stands even if moral realism were true.