Lera Boroditsky is one of the premier researchers on this topic. She has also done some excellent work comparing spatial/time metaphors in English and Mandarin, showing that the dominant idioms in each language affect how speakers cognitively represent time.
But the question is broader: is some form of natural language required? ("Natural" is key here; roughly, it means a language used by a group in day-to-day life.) Differences between major natural languages are for the most part relatively superficial and translatable, because their speakers are generally dealing with a similar reality.
Exploration is a very human activity; it's in our DNA, you might say. I don't think we should take for granted that an AI would be as obsessed as we are with expanding into space for exploration's sake.
Nor is it obvious that an AI would want to continuously maximize its resources, at least on a galactic scale. This, too, is a very biological impulse: why should an AI have it built in?
When we talk about AI this way, I think we commit something like Descartes' Error (see Damasio's book of that name): we assume the rational mind can function on its own. But our higher cognitive abilities are primed and driven by emotions and impulses, and when these are absent, a person is unable to make even simple, instrumental decisions. In other words, before we assume anything about an AI's behavior, we should consider its built-in motivational structure.
I haven't read Bostrom's book, so perhaps he makes a strong argument for these assumptions that I'm not aware of; if so, could someone summarize it?