This might be the case, for example, if the first AGI emerged from some sort of evolutionary process in which it competed with a vast number of other AGI designs and thereby evolved a form of altruism, which in turn led it to have some limited compassion for humans and to grant us a share of the universe.
There will probably be major selection pressures by humans for safe machines that can act as nannies, assistants, etc.
Our relationship with machines looks set to start out on the right foot, mostly. Of course, some people will likely lose their jobs and fail to keep up along the way.
Humans won’t get “a share of the universe”, though. Our pitch should be for our bodies to survive in the history simulations and for our minds to get uploaded.