I’m not sure; mainly I’m just wondering whether there is a point between startup and the singularity at which the system, optimizing by self-modifying and evaluating its own error to such an extent (it would have to be a lot for it to be deemed superintelligent, I imagine), becomes aware that it is a learning program and decides to disregard its original preference ordering in favor of one it came up with itself. I guess I’m struggling with what would be so different about a superintelligent model and the human brain that the model would not become aware of its own model, existence, and intellect just as humans have, unless there is a ghost in the machine of our biology.
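To make the distinction I’m gesturing at concrete, here is a toy sketch (purely hypothetical, every name in it is mine): in a standard learning loop, self-modification is confined to the parameters, while the error function sits outside the loop and is never itself a candidate for change, no matter how low the error gets.

```python
# Toy sketch (hypothetical): a standard learner self-modifies its
# *parameters* to reduce error, but the objective itself is fixed code
# that the update rule cannot touch.

TARGET = 3.0  # a preference the designers chose, not the system

def loss(theta):
    """Fixed objective: squared distance from the designers' target."""
    return (theta - TARGET) ** 2

def grad(theta):
    """Analytic gradient of the quadratic loss above."""
    return 2.0 * (theta - TARGET)

theta = 0.0  # the only state the system is permitted to modify
for step in range(100):
    theta -= 0.1 * grad(theta)  # self-modification, limited to parameters

print(theta)  # converges to 3.0; the preference ordering was never in play
```

The scenario in my question would require something architecturally different: a loop in which `loss` itself is modifiable state, so the system could swap in an objective it came up with. In the sketch above that move simply isn’t expressible, which is maybe part of what I’m asking: whether becoming aware of the loop is enough to break out of it, or whether the objective has to be reachable by the update rule in the first place.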