In the case of MWI quantum mechanics, the measure is Integral[|ψ|²], and if Robin Hanson’s Mangled Worlds is true, there’s no doubt that this measure is not “preference”.
What is the difference between the MWI/Mangled Big World and other Big Worlds, such that measure is a matter of preference in the others but not in MWI/Mangled?
Any given concept is what it is. Truth about any given concept is not a matter of preference.
But in cases where there is no “canonical choice of a concept”, it is a matter of choice which concept to consider. If you want a concept with certain properties, these properties already define a concept of their own, and might determine the mathematical structure that satisfies them, or might leave some freedom in choosing one you prefer for the task.
In the case of quantum mechanical measure, you want your concept of measure to produce “probabilities” that conform with the concept of subjective anticipation, which is fairly regular and thus creates an illusion of “universality”, because the preferences of most minds like ours (evolved like ours, in our physics) have subjective anticipation as a natural category, a pattern that has significant explanatory (and hence optimization) power. But subjective anticipation is still not a universally interesting concept: one can consider a mind that looks at your theories about it, says “so what?”, and goes on optimizing something else.
The reason I spoke about Mangled Worlds MWI is that the Integral[|ψ|²] measure is not dependent upon subjective anticipation.
This is because in Mangled Worlds QM there is a physically meaningful sense in which some things cease to exist, namely that things (people, computers, any complex or macroscopic phenomenon) get “mangled” if their Integral[|ψ|²] measure gets too low.
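A rough sketch of the condition being described, with the cutoff written schematically (the inequality is shorthand, not Hanson’s precise criterion):

    \mu(b) = \int_b |\psi|^2, \qquad b \text{ is mangled, roughly, when } \mu(b) \ll \mu_{\text{typical}}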
That preference is a cause of a given choice doesn’t prevent physics from also being a cause. There is rarely an ultimate source (a unique dependence). You value thinking about what is real (what accords with physical laws) because you evolved to value real things. There are also concepts you value that are not about our physical laws, because evolution isn’t a perfect designer.
This is also a free will argument. I say that there is a decision to be made about which concepts to consider, and you say that the decision is already made by the laws of physics. It’s easier to see how you do have free will for more trivial choices; it’s more difficult to consider acting and thinking as if you live in different physics. In both cases the counterfactual is physically impossible: you couldn’t have made a different choice. Your thoughts accord with the laws of physics, are caused by physics, are embedded within physics. And in both cases, what is actually true (what action you’ll perform, and what theories you’ll think about) is determined by your decision.
As an agent, you shouldn’t (terminally) care about what the laws of physics say, only about what your preference says, so this cause is always more relevant, although currently less accessible to reflection.
Yes, I get that free will is compatible with deterministic physics. That is not the issue. I don’t quite see what about my reply made you think that this was relevant?
The point is that in Mangled Worlds QM there is such a thing as objective probability, even though the world is (relatively) big, and it basically turns out to be defined just by counting the number of instances of one thing rather than another.
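A toy sketch of this counting picture (all specifics here, the 0.8 Born weight, the twelve repetitions, the 10⁻⁶ cutoff, are invented for illustration, and recovering the exact Born frequencies from counting needs Hanson’s actual cutoff argument, which this cartoon does not attempt):

    # Toy model: repeat a two-outcome quantum experiment n times, give each
    # branch the product of its outcomes' Born weights, and "mangle" (delete)
    # any branch whose weight falls below a cutoff.  What remains is an
    # objective, preference-independent set of branches that can be counted.
    from itertools import product

    p1 = 0.8        # Born weight of outcome "1" in a single experiment (made up)
    n = 12          # number of repetitions (made up)
    cutoff = 1e-6   # mangling threshold (schematic stand-in)

    def born_weight(branch):
        w = 1.0
        for outcome in branch:
            w *= p1 if outcome == 1 else 1.0 - p1
        return w

    branches = list(product((0, 1), repeat=n))
    surviving = [b for b in branches if born_weight(b) >= cutoff]

    counts = [sum(b) for b in surviving]
    print(f"{len(surviving)} of {len(branches)} branches survive;")
    print(f"every surviving branch saw outcome 1 at least {min(counts)} times out of {n}")

The only point the toy makes is that, once mangling is in the picture, “which branches exist” is fixed by the dynamics rather than by anyone’s choice of measure, and probability talk can then be cashed out by counting what survives.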
I think Vladimir is essentially saying that caring about that objective property of that particular mathematical structure is still your “arbitrary”, subjectively objective preference. I don’t think I understand where the free will argument comes in either.
Sure, it is arbitrary to care about what actually exists and what will actually happen, as opposed to (for example) running your life around trying to optimize the state of Tolkien’s Middle Earth.
But I think that what Big Worlds calls into question is whether there is such a thing as “what actually exists” and “what will actually happen”. That’s the problem. I agree that evolution could (like it did in the case of subjective anticipation and MWI QM) have played a really cruel trick on us.
But I brought up Mangled Worlds because it seems to be a case where there is such a thing as “what will actually happen” and “what actually exists”, even though the world is relatively big (though Mangled Worlds is importantly different from MWI without a mangler or world-eater).
The important difference between MWI and Mangled-MWI is that if you say “ah, measure over a big world is part of preference, and my preference is for a |ψ|^10 measure”, then you will very quickly end up mangled, i.e. there will be no branches of the wavefunction where your decision algorithm interacts with reality in the intended way for an extended period of time.
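For concreteness, on a single branching whose two outcomes have squared amplitudes 0.9 and 0.1, the two measures (renormalized) assign

    \text{Born } |\psi|^2:\ 0.9 \text{ and } 0.1, \qquad |\psi|^{10}:\ \tfrac{0.9^5}{0.9^5+0.1^5} \approx 0.99998 \text{ and } \tfrac{0.1^5}{0.9^5+0.1^5} \approx 0.00002

so the hypothetical |ψ|^10 measure treats the smaller branch as almost weightless, while under mangling it is the Born-type Integral[|ψ|²] weight that decides which branches actually persist.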
So what? Not everyone cares about what happens in this world. Plus, you don’t have to exist in this world to optimize it (though it helps).
If we take as an assumption that Mangled-worlds MWI is the only kind of “Bigness” that the world has, then there is nothing else to care about apart from what happens in one of the branches, and since nothing exists apart from those branches, you have to exist in at least one of them to actually do anything.
Though, of course, acausally speaking, a slim probability that some other world exists is enough for people to (perhaps?) take notice of it.
EDIT: One way to try to salvage objective reality from Big Worlds would be to drive a wedge between “other worlds that we have actual evidence for” (such as MWI) and “other worlds that are in-principle incapable of providing positive evidence of their existence” (such as Tegmark’s MUH), and then to show that all of the evidentially implied big worlds are unproblematic for objectivity, as seems to be the case for Mangled-MWI. However, this would only work if one were willing to part with Kolmogorov/Bayesian reasoning and say that certain perfectly low-complexity hypotheses are thrown out for being “too big” and “too hypothetical”.
I’m fairly sure at this point it’s conceptual confusion to say that. You can care about mathematical structures, and control mathematical structures, that have nothing to do with the real world. These mathematical structures don’t have to be “worlds” in any usual sense: for example, they don’t have to be processes (have time), and they don’t have to contain you in them in any form.
One of the next iterations of ambient decision theory should make it clearer, though the current version should give a hint (but probably isn’t worth the bother in the current form, considering it has known philosophical/mathematical bugs—but I’m studying, improving my mathematical sanity).
Perhaps the distinction I’m interested in is the difference between control and function-ness.
There is an abstract mathematical function, say, the parity function of the number of open eyes I have. It is a function of me, but I wouldn’t say that I am controlling it in the conventional sense, because it is abstract.
More abstract than whether your eyes are open? They’re about the same distance from the underlying physics.
I guess if there were an actual light that lit up as a function of the parity, then I would feel comfortable with “control”, and I would say that I am controlling the light.
… Whether the light is on is also pretty abstract, no?
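Spelled out as code, the distinction being drawn in this exchange looks something like the following (the names are just the toy objects from the example above):

    # An abstract function of my state: defined whether or not anything in the
    # world ever computes it.
    def parity_of_open_eyes(eyes_open: int) -> int:
        return eyes_open % 2

    # A physical system wired to track that same function: this is the case
    # where "control" starts to feel like the right word.
    class Light:
        def __init__(self) -> None:
            self.on = False

        def update(self, eyes_open: int) -> None:
            self.on = bool(parity_of_open_eyes(eyes_open))

    light = Light()
    light.update(eyes_open=1)   # open one eye; the light switches on
    print(light.on)             # True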
The role of the decision-theoretic notion of control is to present the consequences of your possible decisions for evaluation by preference. Whatever fills that role is control; and if one can value mathematical abstractions, then the notion of control has to describe how to control abstractions. Conveniently, the real world can be seen as just another mathematical structure (or class of structures).
I would say that the conventional usage of the word “control” requires the thing-under-control to be real, but sure, one can use the words how one pleases.
It worries me somewhat that we seem so concerned with what word-set we use here; this indicates that the degree to which we value performing certain actions depends on whether we frame it as
“controlling something that’s no more-or-less real than the laptop in front of you”
versus
“this nonexistent abstraction happens to be a function of you; so what? There are infinitely many abstract functions of you”
Is there some actual substance here?
This complication is created by the same old ontology problem: if preference talks about the real world, power to you (though that would make physics relevant, which is no good either), but if it doesn’t, we have to deal with that. And we can’t assume a priori what preference talks about.
My previous position (and, it seems, a long-held position of Wei Dai’s) was to assume that preference can be expressed as talking about the behavior of programs (as in UDT), since ultimately it has to determine the behavior of the agent’s program, and seeing the environment as programs fits the pattern and allows expressing preferences that hold arbitrary strategies of the agent as the best option.
Now, since ambient decision theory (ADT) suggests treating the consequences of the agent’s decision as logical theories, it became more natural to see the environment as models of those theories, and so as structures more general than programs. But more importantly, if, as logical theories, the preferred concepts do not refer to programs (even though they can directly influence only the behavior of the agent’s program), there is no easy way of converting them into preference-about-programs equivalents. Getting the info out of those theories may well be undecidable: something to work on during decision-making, not at the preliminary stage of preference-definition.
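A minimal cartoon of the “preference over behavior of programs” framing (the world program, the two options, and the payoffs are invented for the example; actual UDT quantifies over the agent’s whole input/output strategy and over all the ways the environment can contain the agent, which this sketch omits):

    # Preference is expressed over the behaviour of programs: the agent picks
    # the output that makes the world program behave best by its utility.
    def world(agent_output: str) -> dict:
        # Toy environment program whose behaviour depends on the agent's output.
        return {"payout": 10 if agent_output == "cooperate" else 1}

    def utility(world_behaviour: dict) -> float:
        return world_behaviour["payout"]

    def choose(options):
        return max(options, key=lambda a: utility(world(a)))

    print(choose(["cooperate", "defect"]))   # -> "cooperate"

    # The ADT framing described above would, roughly, replace "run world(a)"
    # with "ask what the theory T + 'Agent() = a' entails about the structures
    # preference cares about", so environments show up as models of a theory
    # rather than as programs that can simply be executed.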
Also, trying to have preferences about abstractions, especially infinite ones, seems bound to end in tears, i.e. a complete mess of an ontology problem. You’d import all the problems of the philosophy of mathematics and heap them on top of the problems of ethics. Not to mention Gödelian problems, large-cardinal-axiom problems, etc. Just the thought of trying to sort all that out fills me with dread.
Scary, and I haven’t even finished converting myself into a pure mathematician yet. :-) I was hoping to avoid these issues by somehow limiting preference to programs, but investigation led me back to the harder problem statement. Ultimately, a simpler understanding has to be found, one that sidesteps the monstrous set-theoretic infrastructure and the diversity of logics. At this point, though, I expect to benefit from the conceptual clarity brought by standard mathematical tools.
I think the problem might be that the distinction between the real world and the hypothetical world might not be logically defensible, in which case we have an ontology problem of awesome proportions on our hands.
I believe as much: for the foundational study of decision-making, the notion of the “real world” is useless, which is why we have to deal with “all mathematical structures”, somehow accessed through more manageable concepts (for which the best fit is logic, though that’s uncomfortable for many reasons).
(I’d still expect that it’s possible to extract some fuzzy outline of the concept of the “real world”, like it’s possible to vaguely define “chairs” or “anger”.)
Maybe. Though my intuition seems to point to a more fundamental role for “reality” in decisionmaking.
Evolution designed our primitive notions of decisionmaking in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new context, i.e. the set of all mathematical structures?
I predict that we’ll end up with a plethora of different kinds of decision theory, which lead to a whole assortment of different practical recommendations, where the very finest of framing differences could push a person to act in completely different ways. The one exception would be a decision theory that cashes out the notion of reality, which will be relatively unique because of its similarity to our pretheoretic notions.
But I am willing to be proven wrong.
Generalization comes from the expressive power of a mind: you can think about all sorts of concepts besides the real world. That evolution would fail to delineate the real world perfectly in this concept space seems obvious: all sorts of good-fit approximations would do for its purposes, but when we are talking about FAI, we have to deal with what was actually chosen, not what “was supposed to be chosen” by evolution. This argument applies more easily to other evolutionary drives.
I think you misunderstood me: I meant, why should there even be a clear and unique generalization of human goals and decisionmaking to the case of preferences over the set of mathematical possibilities?

I did not mean, why should there even be a clear and unique generalization of the human concept of reality—for the time being I was assuming that there wouldn’t be one.
You don’t try to generalize or extrapolate human goals. You try to figure out what they already are.
I think that this is a different sense of the word “control” than controlling physical things.
UDT is about control in the same sense. See this comment for a point in that direction (and my last comment on the “Ambient decision theory go-through” thread on the SIAI DT list). I believe this to be a conceptual clarification of the usual notion of control, having the usual notion (“explicit control”) as a special case (almost, modulo explicit-dependence bias—it allows getting better results than if you only consider the explicit dependence as stated).

they don’t have to contain you in them in any form.

Can you elaborate on this?
See “ambient dependence” on the DT list, but the current notion (involving mathematical structures more general than programs) is not written up. I believe “logical control”, as used by Wei/Eliezer, refers to basically the same idea. In a two-player game, you can control the other player’s decisions despite not literally sitting inside their head.
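A minimal illustration of controlling a decision you are not “inside” of (the symmetric payoffs and the shared procedure are invented for the example):

    # Both players in a symmetric game run the very same decision procedure.
    # Neither sits inside the other's head, yet whatever this shared procedure
    # outputs is thereby what *both* players do.
    def shared_decision_procedure() -> str:
        return "cooperate"

    payoff = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
              ("defect", "cooperate"): 5, ("defect", "defect"): 1}

    my_move = shared_decision_procedure()
    their_move = shared_decision_procedure()   # same procedure, so the same move
    print(my_move, their_move, payoff[(my_move, their_move)])   # cooperate cooperate 3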
I just accidentally found this other decision theory Google group and thought LWers might find it of interest.
I’m not on that list. Do you know who the list owner is?
Just as a note, my current gut feeling is that it is perfectly plausible that the right way to go is to do something like UDT but with a notion of what worlds are real (as in Mangled worlds QM).
However, I shall read your theory of controlling that which is unreal and see what I make of it!
Yes you are (via r****c at googlemail.com). IIRC, you got there after I sent you an invitation. Try logging in on the list page.
Oh, thanks. Obviously I accepted and forgot about it.
Sure, it is arbitrary to care about what actually exists and what will actually happen, as opposed to (for example) running your life around trying to optimize the state of Tolkien’s Middle Earth.

But you do care about optimizing Middle Earth (let it be Middle Earth with Halting Oracles, to be sure), to some tiny extent, even though it doesn’t exist at all.
Free will is about dependencies: one has to say that the outcome depends on your decision. At the same time, the outcome depends on other things. Here, which quantum mechanical measure to consider depends on what’s true about the world, but at the same time it depends on what you prefer to consider. Thus, saying that there are objective facts dictated by the laws of physics is analogous to saying that all your decisions are already determined by the physical laws.
My argument was that, as in the case of the naive free will argument, here too we can (indeed, should, once we get to the point of being able to tell the difference) see physical laws as (subjectively) chosen. Of course, since you can’t change your own preference, you can’t change the implied physical laws, seen as an aspect of that preference (to make them nicer for some purpose, say).
Yes, I get that free will is compatible with deterministic physics. That is not the issue. I don’t quite see what about my reply made you think that this was relevant?

It is relevant, but I’ve given up on communicating this quickly, so let’s all hope I figure out and write up my philosophical framework for decision theory in detail sometime soon.