The role of the decision-theoretic notion of control is to present the consequences of your possible decisions for evaluation by preference. Whatever fills that role, if one can value mathematical abstractions, then the notion of control has to describe how to control abstractions. Conveniently, the real world can be seen as just another mathematical structure (or class of structures).
I would say that the conventional usage of the word “control” requires the thing-under-control to be real, but sure, one can use words however one pleases.
It worries me somewhat that we seem so concerned with which word-set we use here; this indicates that the degree to which we value performing certain actions depends on whether we frame it as
“controlling something that’s no more-or-less real than the laptop in front of you”
versus
“this nonexistent abstraction happens to be a function of you; so what? There are infinitely many abstract functions of you”
Is there some actual substance here?
This complication is created by the same old ontology problem: if preference talks about the real world, power to you (though that would make physics relevant, which is no good either), but if it doesn’t, we have to deal with that. And we can’t assume a priori what preference talks about.
My previous position (and, it seems, Wei Dai’s long-held position) was to assume that preference can be expressed as talking about the behavior of programs (as in UDT), since it ultimately has to determine the behavior of the agent’s program; seeing the environment as programs fits that pattern and makes it possible to express preferences that single out arbitrary strategies of the agent as the best option.
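To make the “environment as programs” picture concrete, here is a minimal toy sketch in Python (the names and the two-observation setup are purely illustrative assumptions on my part, not anything taken from UDT itself): the world is a program that invokes the agent’s strategy, preference is a function of the world program’s behavior, and the agent picks the strategy whose induced behavior it prefers most.

```python
# Toy sketch of "preference as talking about the behavior of programs".
# All names here are hypothetical; this is not an implementation of UDT proper.
from itertools import product

OBSERVATIONS = ["heads", "tails"]   # inputs the agent's program may receive
ACTIONS = ["take", "pass"]          # outputs it may produce

def world(agent_strategy):
    """A world program that calls the agent's strategy on each observation
    and returns the resulting behavior (a tuple of actions)."""
    return tuple(agent_strategy[obs] for obs in OBSERVATIONS)

def utility(world_behavior):
    """Preference is a function of the world program's behavior only:
    here it rewards taking on heads and passing on tails."""
    heads_action, tails_action = world_behavior
    return (heads_action == "take") + (tails_action == "pass")

# The agent selects, once and for all, the mapping from observations to
# actions whose induced world-behavior scores highest under the preference.
strategies = [dict(zip(OBSERVATIONS, acts))
              for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]
best = max(strategies, key=lambda s: utility(world(s)))
print(best)  # {'heads': 'take', 'tails': 'pass'}
```

The only point of the sketch is that nothing in the evaluation refers to a “real” state of affairs; preference sees only the behavior of programs.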
Now, since ambient decision theory (ADT) suggests treating the consequences of the agent’s decision as logical theories, it became more natural to see environments as models of those theories, and so as structures more general than programs. More importantly, if, as logical theories, the preferred concepts do not refer to programs (even though they can directly influence only the behavior of the agent’s program), there is no easy way of converting them into preference-about-programs equivalents. Getting the information out of those theories may well be undecidable, something to work on during decision-making rather than at the preliminary stage of preference definition.
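As a rough sketch of that framing (the notation is my own shorthand, not a canonical formulation): given a base theory $T$ describing the agent program $\mathtt{A}$ and a utility-valued world program $\mathtt{U}$, the consequences of the decision $\mathtt{A}() = a$ are whatever $T$ proves under that assumption, and the environments compatible with that decision are the models of $T + (\mathtt{A}() = a)$. The agent then looks for statements of the form

\[
T \vdash \big(\mathtt{A}() = a\big) \rightarrow \big(\mathtt{U}() = u_a\big), \qquad a^{*} = \arg\max_{a} u_a ,
\]

with no guarantee that such statements can be extracted by any preliminary, once-and-for-all procedure, which is the sense in which getting the information out may be undecidable.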
Also, trying to have preferences about abstractions, especially infinite ones, seems bound to end in tears, i.e. a complete mess of an ontology problem. You’d import all the problems of the philosophy of mathematics and heap them on top of the problems of ethics, not to mention Gödelian problems, large-cardinal-axiom problems, etc. Just the thought of trying to sort all that out fills me with dread.
Scary, and I haven’t even finished converting myself into a pure mathematician yet. :-) I was hoping to avoid these issues by somehow limiting preference to programs, but investigation led me back to the harder problem statement. Ultimately, a simpler understanding has to be found, one that sidesteps the monstrosity of set-theoretic infrastructure and the diversity of logics. At this point, though, I expect to benefit from the conceptual clarity brought by standard mathematical tools.
I think the problem might be that the distinction between the real world and the hypothetical world might not be logically defensible, in which case we have an ontology problem of awesome proportions on our hands.
I believe as much: for a foundational study of decision-making, the notion of a “real world” is useless, which is why we have to deal with “all mathematical structures”, somehow accessed through more manageable concepts (for which the best fit is logic, though that’s uncomfortable for many reasons).
(I’d still expect that it’s possible to extract some fuzzy outline of the concept of the “real world”, like it’s possible to vaguely define “chairs” or “anger”.)
Maybe. Though my intuition seems to point to a more fundamental role for “reality” in decision-making.
Evolution designed our primitive notions of decision-making in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new context, i.e. the set of all mathematical structures?
I predict that we’ll end up with a plethora of different kinds of decision theory, leading to a whole random assortment of different practical recommendations, where the very finest of framing differences could push a person to act in completely different ways. The one exception would be a decision theory that cashes out the notion of reality, which will be relatively unique because of its relative similarity to our pretheoretic notions.
But I am willing to be proven wrong.
Generalization comes from the expressive power of a mind: you can think about all sorts of concepts besides the real world. That evolution would fail to delineate the real world perfectly in this concept space seems obvious: all sorts of good-fit approximations would do for its purposes. But when we are talking about FAI, we have to deal with what was actually chosen, not with what “was supposed to be chosen” by evolution. This argument applies even more easily to other evolutionary drives.
I think you misunderstood me: I meant to ask why there should even be a clear and unique generalization of human goals and decision-making to the case of preferences over the set of mathematical possibilities.
I did not mean to ask why there should even be a clear and unique generalization of the human concept of reality; for the time being, I was assuming that there wouldn’t be one.
You don’t try to generalize or extrapolate human goals; you try to figure out what they already are.