Some things seem to be useful for achieving a large, diverse set of goals. For example, among humans, one person may like music but hate traveling, while another loves traveling but hates music. Both of them would benefit from having more money; the former could spend it on CDs and concerts, the latter on travel. We say that money is instrumentally useful: even if you do not care about money per se (most people probably don't), you care about other things money can buy, and therefore you would prefer to have more of it rather than less.
So it makes sense to assume that an artificial intelligence, no matter how alien its goals might be (whether something simple such as "make more paperclips" or something too complex to be understood by a human mind), would care about:
acquiring more resources (matter, energy, space)
being more efficient (at acquiring and spending these resources)
protecting itself from all kinds of dangers
having better information and more intelligence
...because all of these indirectly translate into having more of (or more reliably getting) whatever it is that the AI truly cares about.
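One way to see why these subgoals converge regardless of the terminal goal: for any utility function, a bigger stock of a generic resource can only enlarge the set of affordable options, so the best achievable outcome never gets worse. Here is a minimal Python sketch of that monotonicity argument; the goods, prices, and utility numbers are invented for illustration and are not from the original text.

```python
from itertools import combinations

# Made-up prices for a toy "market" (illustration only).
PRICES = {"concert": 50, "cd": 15, "flight": 120, "hotel": 80}

def music_lover(items):
    # Terminal goal: music. Concerts are worth more than CDs.
    return 3 * items.count("concert") + items.count("cd")

def traveler(items):
    # Terminal goal: travel. Flights are worth more than hotels.
    return 4 * items.count("flight") + 2 * items.count("hotel")

def best_achievable(utility, budget):
    """Best utility over every affordable bundle (brute force,
    each good bought at most once)."""
    goods = list(PRICES)
    best = 0
    for r in range(len(goods) + 1):
        for bundle in combinations(goods, r):
            if sum(PRICES[g] for g in bundle) <= budget:
                best = max(best, utility(list(bundle)))
    return best

for budget in (0, 50, 100, 200):
    print(budget,
          best_achievable(music_lover, budget),
          best_achievable(traveler, budget))
```

Both printed columns are non-decreasing in the budget, even though the two agents value completely disjoint goods; that is the sense in which the generic resource is "instrumentally" valuable to both, and the same structural fact applies to an AI's resources, efficiency, security, and information.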
On the other hand, there is no reason to assume that the AI would enjoy music, or the beauty of flowers. Those seem to be human things, and not even all humans enjoy them.
Therefore:
Whatever entity comes after, will it start to explore the galaxy?
Probably yes, because expansion means more resources. Note that "exploring" doesn't necessarily mean doing what a human space tourist would do. It could simply mean disassembling everything out there and harvesting the energy of the stars.
Will it pursue new or old fundamental questions?
Yes, for the ones that seem relevant to its goals, its efficiency, or its survival.
Probably no, for the ones that are interesting to us for specifically human reasons.
Keywords: convergent instrumental values