Even if its abstractions are different from ours, predicting us would be valuable to it, and predicting us means modeling our abstractions. So even if it started out with alien abstractions, wouldn't it then also end up with something closer to ours?
I am closer to John than Eliezer on this one, so it is not a crux for me, but speaking just to this reasoning: it seems to me that by the time the AI is deciding, via its alien concepts, whether or not to simulate us, the matter is already completely out of our hands, and it no longer matters whether we can interpret its simulations. And why wouldn't the concepts by which it judges that simulation also be the concepts we wanted to govern it by in the first place?