There is in fact no guarantee that the values of an uploaded mind will remain complex.
There is also no guarantee that the values of a mind implemented in meat will remain complex as we gain more understanding of and control over our own brains.
The term “value drift” is sometimes used around here to refer to the process whereby the values of intelligent systems change over time, regardless of the infrastructure on which those systems are implemented. It’s generally seen as a risk that needs to be addressed for any sufficiently powerful self-modifying system, though harder to address for human minds than for sensibly designed systems.
You might find the discussion here relevant.