Evolution designed our primitive notions of decision-making in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new context, i.e., the set of all mathematical structures?
Generalization comes from the expressive power of a mind: you can think about all sorts of concepts besides the real world. That evolution would fail to delineate the real world in this concept space perfectly seems obvious: all sorts of good-fit approximations would do for its purposes, but when we are talking about FAI, we have to deal with what was actually chosen, not what "was supposed to be chosen" by evolution. This argument applies even more readily to other evolutionary drives.
I think you misunderstood me: I meant to ask why there should even be a clear and unique generalization of human goals and decision-making to the case of preferences over the set of mathematical possibilities.
I did not mean to ask why there should be a clear and unique generalization of the human concept of reality; for the time being I was assuming that there wouldn't be one.
You don't try to generalize or extrapolate human goals; you try to figure out what they already are.