If we build AI around a human upload, or around a model of the human mind, several of these problems are solved:
1) It will, by definition, have the same values and the same value structure as a human being; in short, human uploading solves value loading.
2) It will also not be an agent in the strong, goal-maximizing sense.
3) We could predict the upload's behaviour based on our experience of predicting human behaviour.
It will also not be very powerful or very capable of strong self-improvement, because of the messy internal structure of the human mind. However, it could still operate above human level thanks to hardware acceleration and some tweaking. Using it, we could construct a primitive AI Police or AI Nanny that would prevent the creation of any other types of AI.
Convergent instrumental goals would push agent-like systems toward becoming full agents if they can self-modify (humans cannot do this to any strong extent).
If we model a specific human – for example, a morally sane, rationally educated person with an excellent understanding of everything said above – that person could choose the right level of self-improvement, since they would understand the danger of becoming an agent too oriented toward instrumental goals. I don't know any such person in real life, by the way.