An AI should already have a supergoal, so it does not need “motivation”.
We know relatively little about what it takes to create an AGI. Saying that an AGI must have feature X or feature Y to be a functioning AGI is drawing too many conclusions from the data we have.
On the other hand, we know that the architecture on which humans run produces “intelligence”, so that is at least one possible architecture that could be implemented in a computer.
Bootstrapping AGI from Whole Brain Emulations is one of the ideas under discussion even on LessWrong.