I suspect the problem is that the distinction between the real world and the hypothetical world might not be logically defensible, in which case we have an ontology problem of awesome proportions on our hands.
I believe as much: for the foundational study of decision-making, the notion of a “real world” is useless, which is why we have to deal with “all mathematical structures”, somehow accessed through more manageable concepts (for which the best fit is logic, though that’s uncomfortable for many reasons).
(I’d still expect that it’s possible to extract some fuzzy outline of the concept of the “real world”, like it’s possible to vaguely define “chairs” or “anger”.)
Maybe. Though my intuition seems to point to a more fundamental role for “reality” in decision-making.
Evolution designed our primitive notions of decision-making in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new context, i.e. the set of all mathematical structures?
I predict that we’ll end up with a plethora of different decision theories, each leading to its own assortment of practical recommendations, so that the finest of framing differences could push a person to act in completely different ways. The one exception would be a decision theory that cashes out the notion of reality, which would be relatively unique because of its similarity to our pretheoretic notions. But I am willing to be proven wrong.
Generalization comes from the expressive power of a mind: you can think about all sorts of concepts besides the real world. That evolution would fail to delineate the real world perfectly in this concept space seems obvious: all sorts of good-fit approximations would do for its purposes, but when we are talking about FAI, we have to deal with what was actually chosen, not what “was supposed to be chosen” by evolution. This argument applies even more readily to other evolutionary drives.
I think you misunderstood me: I meant, why should there even be a clear and unique generalization of human goals and decision-making to the case of preferences over the set of mathematical possibilities?
I did not mean, why should there even be a clear and unique generalization of the human concept of reality; for the time being I was assuming that there wouldn’t be one.
You don’t try to generalize, or extrapolate human goals. You try to figure out what they already are.