Exploration is a very human activity; it's in our DNA, you might say. I don't think we should take for granted that an AI would be as obsessed with expanding into space for that purpose.
Nor is it obvious that it will want to continuously maximize its resources, at least on a galactic scale. This, too, is a very biological impulse: why should an AI have it built in?
When we talk about AI this way, I think we commit something like Descartes's Error (see Damasio's book of that name): assuming that the rational mind can function on its own. But our higher cognitive abilities are primed and driven by emotions and impulses, and when these are absent, a person is unable to make even simple, instrumental decisions. In other words, before we assume anything about an AI's behavior, we should consider its built-in motivational structure.
I haven't read Bostrom's book, so perhaps he makes a strong argument for these assumptions that I am not aware of; in which case, could someone summarize them?
Good question. The basic argument is that whatever an AI (or any agent) values, more resources are very likely to help it achieve that goal. For instance, if it just wants to calculate whether large numbers are prime, it will do this much better with more resources devoted to the calculation. This is elaborated in papers by Omohundro (on basic AI drives) and Bostrom (on instrumental convergence).
That is, while exploration and resource acquisition are in our DNA, there is a very strong reason for them to be there, so they are likely to be in the DNA-analog of any successful general goal-seeking creature.