Well, of course there are both superintelligences and magical gods out there in the math, including those that watch over you in particular, with conceptual existence that I agree is not fundamentally different from our own, but they are presently irrelevant to us, just as the world where I win the lottery is irrelevant to me, even though a possibility.
It currently seems to me that many such scenarios are irrelevant not because of “low probability” (as in the lottery case; different abstract facts coexist, so don’t vie for probability mass) or moral irrelevance of any kind (the worlds with nothing possibly of value), but because of other reasons that prevent us from exerting significant consequentialist control over them. The ability to see the possible consequences (and respond to this dependence) is the missing step, even though your actions do control those scenarios, just in a non-consequentialist manner.
(It does add up to atheism, as a modest claim about our own world, the “real world”, that it’s intended to be. In pursuit of “steelmanning” theism you seem to have come up with a strawman atheism...)
I don’t know if this is what Will has in mind, but it seems plausible that the superintelligences and gods that would be watching out for us might attempt to maximize the instantiations of our algorithms that are under their domain, so that as great a proportion of our future selves as possible will be saved (this story is vaguely Leibnizian). But I don’t know that such superbeings would be capable of overcoming their own sheer unlikelihood (though perhaps some subset of such superbeings have infinite capacity to create copies of us?). You can derive a self-interested ethics from this too: if you think you’ll be rewarded or punished by the simulator. The choices of the simulators could be further constrained by simulators above them—we would need an additional step to show that the equilibrium is benevolent (especially given the existence of evil in our universe).
But I’m not at all convinced Tegmark Level 4 isn’t utter nonsense. There is a big step from accepting that abstract objects exist to accepting that all possible abstract objects are instantiated. And can we calculate anthropic probabilities from infinities of different magnitudes?
I’d rather say that the so-called “instantiated” objects are no different from the abstract ones: in reality, there is no fundamental property of being real, only a natural category humans use to designate the stuff of normal physics, a definition that can be useful in some cases but not always.
So there are easy ways to explain this idea at least, right? Humans’ decisions are affected by “counterfactual” futures all the time when planning, and so the counterfactuals have influence, and it’s hard for us to get a notion of existence outside of such influence besides a general naive physicalist one. I guess the not-easy-to-explain parts are about decision theoretic zombies where things seem like they ‘physically exist’ as much as anything else despite exerting less influence, because that clashes more with our naive physicalist intuitions? Not to say that these bizarre philosophical ideas aren’t confused (e.g. maybe because influence is spread around in a more egalitarian way than it naively feels like), but they don’t seem to be confusing as such.
Human decisions are affected by thoughts about counterfactuals. So the question is, what is the nature of the influence that the “content” or “object” of a thought has on the thought?
I do not believe that, when human beings try to think about possible worlds, these possible worlds have any causal effect in any way on the course of the thinking. The thinking and the causes of the thinking are strictly internal to the “world” in which the thinking occurs. The thinking mind instead engages in an entirely speculative and inferential attempt to guess or feel out the structure of possibility—but this feeling out does not in any way involve causal contact with other worlds or divergent futures. It is all about an interplay between internally generated partial representations and a sense of what is possible, impossible, logically necessary, etc. in an imagined scenario; but the “sensory input” to these judgments consists of the imagining of possibilities, not the possibilities themselves.
Sure, that’s a fine way to put it. But how do you even begin estimating how likely that is?
How likely what is? There doesn’t appear to be a factual distinction, just what I find to be a more natural way of looking at things, for multiple purposes.
You don’t think whether or not the Tegmark Level 4 multiverse exists could ever have any decision theoretic import?
I believe that “exists” doesn’t mean anything fundamentally significant (in senses other than referring to the presence of a property of some fact; or referring to the physical world; or its technical meanings in logic), so I don’t understand what it would mean for various (abstract) things to exist to a greater or lesser extent.
Okay. What is your probability for that belief? (Not that I expect a number, but surely you can’t be certain.)
That would require understanding alternatives, which I currently don’t. The belief in question is mostly asserting confusion, and as such it isn’t much use, other than as a starting point that doesn’t purport to explain what I don’t understand.
Fine. So you agree that we should be wary of any hypotheses of which the reality of abstract objects is a part?
No, I don’t see that in itself as a reason to be wary, since, as I said repeatedly, I don’t know how to parse the property of something being real in this sense.
Personally, I am always wary of hypotheses I don’t know how to parse.
Anyone who has positive accounts of existentness to put forth, I’d like to hear them. (E.g., Eliezer has talked about this related existentness-like-thing that has to do with being in a causal graph (being computed), but I’m not sure if that’s just physicalist intuition admitting much confusion or if it’s supposed to be serious theoretical speculation caused by interesting underlying motivations that weren’t made explicit.)
Different abstract facts aren’t mutually exclusive, so one can’t compare them by “probability”, just as you won’t compare probability of Moscow with probability of New York. It seems to make sense to ask about the probability of various facts being a certain way (in certain mutually exclusive possible states), or about the probability of joint facts (that is, dependencies between facts) being a certain way, but it doesn’t seem to me that asking about the probabilities of different facts in themselves is a sensible idea.
(The universal prior, for example, can be applied to talk about the joint probability distribution over the possible states of a particular sequence of past and future observations, which describes a single fact: the history of observations by one agent.)
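As a cartoon of the parenthetical above: a mixture over “programs” generating observation histories, weighted by 2^(-description length), yields an (unnormalized) prior over possible observation sequences rather than over worlds. The three models and their description lengths below are invented purely for illustration and are nothing like a real universal prior:

```python
from fractions import Fraction

# Toy stand-ins for "programs" generating observation sequences, with
# made-up description lengths; each model is weighted 2^-length, in the
# spirit of a Solomonoff-style mixture (hugely simplified, unnormalized).
models = {
    "all zeros":   (lambda n: [0] * n, 1),
    "all ones":    (lambda n: [1] * n, 1),
    "alternating": (lambda n: [i % 2 for i in range(n)], 2),
}

def prior_of_history(obs):
    """Total weight of models whose output matches the observed history."""
    total = Fraction(0)
    for generate, length in models.values():
        if generate(len(obs)) == list(obs):
            total += Fraction(1, 2 ** length)
    return total

# Observing 0, 1, 0 is compatible only with the "alternating" model.
print(prior_of_history([0, 1, 0]))  # 1/4
# A single 0 is compatible with both "all zeros" and "alternating".
print(prior_of_history([0]))        # 3/4
```

The point of the sketch is only that the prior attaches to possible states of one observation history, not to standalone worlds.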
(I’m not sure ‘compare’ is the right word here.)
You just prompted me to make that comparison. I’ve been to New York. I haven’t been to Moscow. I’ve also met more people who have talked about what they do in New York than I have people who talk about Moscow. I assign at least ten times as much confidence to New York as I do Moscow. Both those probabilities happen to be well above 99%. I don’t see any problem with comparing them just so long as I don’t conclude anything stupid based on that comparison.
There’s a point behind what you are saying here—and an important point at that—just one that perhaps needs a different description.
What does this mean, could you unpack? What’s “probability of New York”? It’s always something like “the probability that I’m now in New York, given that I’m sitting in this featureless room”, which discusses possible states of a single world, comparing the possibility that your body is present in New York to the same for Moscow. These are not probabilities of the cities themselves. I expect you’d agree and say that of course that doesn’t make sense, but that’s just my point.
It wasn’t my choice of phrase:

“just as you won’t compare probability of Moscow with probability of New York”
When reading statements like that, which are not expressed with mathematical formality, the appropriate response seems to be resolving them to the meaning that fits best, or asking for more specificity. Saying you just can’t do the comparison seems to be the wrong answer when you can but there is difficulty resolving ambiguity. For example, you say “the answer to A is Y, but you technically could have meant B instead of A, in which case the answer is Z”.
I actually originally included the ‘what does probability of Moscow mean?’ tangent in the reply but cut it out because it was spammy and actually fit better as a response to the nearby context.
Based on the link from the decision theory thread I actually thought you were making a deeper point than that and I was trying to clear a distraction-in-the-details out of the way.
The point I was making is that people do discuss probabilities of different worlds that are not seen as possibilities for some single world. And comparing probabilities of different worlds in themselves seems to be an error for basically the same reason as comparing probabilities of two cities in themselves is an error. I think this is an important error, and realizing it makes a lot of ideas about reasoning in the context of multiple worlds clearly wrong.
log-odds
Oh, yes, that. Thank you.
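For readers who missed the reference: log-odds make the earlier “ten times as much confidence, both above 99%” comparison easy to state. A minimal sketch in Python, with the two probabilities invented purely for illustration:

```python
import math

def log_odds(p):
    """Log-odds (logit) of a probability, in natural-log units (nats)."""
    return math.log(p / (1 - p))

# Hypothetical confidence levels: both "well above 99%", yet one carries
# roughly ten times the odds of the other.
p_new_york = 0.9999  # odds 9999:1
p_moscow = 0.999     # odds 999:1

gap = log_odds(p_new_york) - log_odds(p_moscow)
print(round(gap, 2))  # 2.3, i.e. about ln(10): a tenfold odds ratio
```

On the raw probability scale the two numbers look nearly identical; on the log-odds scale the tenfold difference in confidence is explicit.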
Really? God isn’t less probable than New York?
God is an exceedingly unlikely property of our branch of the physical world at the present time. Implementations of various ideas of God can be found in other worlds that I don’t know how to compare to our own in a way that’s analogous to “probability”. The Moscow vs. New York example illustrates the difficulty with comparing worlds that are not different hypotheses about how the same world could be, but two distinct objects.
(I don’t privilege the God worlds in particular, the thought experiment where the Moon is actually made out of Gouda is an equivalent example for this purpose.)
There doesn’t seem to be a problem here. The comparison resolves to something along the lines of:
1. Consider all hypotheses about the physical world of the present time which include the object “Moscow”.
2. Based on all the information you have, calculate the probability that any one of those is the correct hypothesis.
3. Do the same with “New York”.
4. Compare those two numbers.
5. ???
6. Profit.
Instantiate “???” with absurdly contrived bets with Omega as necessary. Rely on the same instantiation into a specific contrived decision to resolve any philosophical issues along the lines of “What does probability mean anyway?” and “What is ‘exist’?”.
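The listed procedure can be sketched as a toy computation; the hypothesis space, its objects, and the weights below are all invented for illustration:

```python
from fractions import Fraction

# Invented hypothesis space: each hypothesis is a set of objects the world
# is posited to contain, with a made-up prior weight.
hypotheses = [
    ({"Moscow", "New York"}, Fraction(90, 100)),  # mainstream picture
    ({"New York"}, Fraction(5, 100)),             # Moscow is a fiction
    ({"Moscow"}, Fraction(3, 100)),               # New York is a fiction
    ({"Mordor"}, Fraction(2, 100)),               # something stranger
]

def prob_world_contains(obj):
    """Steps 1-2: total probability mass on hypotheses including the object."""
    return sum((w for objects, w in hypotheses if obj in objects), Fraction(0))

# Steps 3-4: both numbers are probabilities of ways the single world could
# be, not probabilities "of the cities themselves".
print(prob_world_contains("Moscow"))    # 93/100
print(prob_world_contains("New York"))  # 19/20, i.e. 95/100
```

Note that the comparison only goes through because every hypothesis is a candidate description of the same single world, so the weights share one probability mass.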
What you describe is the interpretation that does make sense. You are looking at properties of possible ways that the single “real world” could be. But if you don’t look at this question specifically in the context of the real world (the single fact whose possible properties you are considering), then Moscow as an abstract idea would have as much strength as Mordor, and the “probability of Moscow” in Middle-earth would be comparatively pretty low.
(Probability then characterizes how properties fit into worlds, not how properties in themselves compare to each other, or how worlds compare to each other.)
Our disagreement here somewhat baffles me, as I think we’ve both updated in good faith and I suspect I only have moderately more/different evidence than you do. If you’d said “somewhat unlikely” rather than “exceedingly unlikely” then I could understand, but as is it seems like something must have gone wrong.
Specifically, unfortunately, there are two things called God: one is the optimal decision theory; the other is a god that talks to people and tells them that it’s the optimal decision theory. I can understand why you’d be skeptical of the former even if I don’t share the intuition, but the latter god, the demon who claims to be God, seems to me likely to exist, and if you think that god is exceedingly unlikely then I’m confused why. Like, is that just your naive impression, or is it a belief you’re confident in even after reflecting on possible sources of overconfidence, et cetera?
I agree that there are many reasons that prevent us from explicitly exerting significant control, but I’m at least interested in theurgy. Turning yourself into a better institution, contributing only to the support of not-needlessly-suboptimal institutions, etc. In the absence of knowing what “utility function” is going to ultimately decide what justification is for those who care about what the future thinks, I think building better institutions might be a way to improve the probabilities of statistical-computational miracles. I think this with really low probability but it’s not an insane hypothesis even if it is literally magical thinking. (The decision theory and physics backing the intuitions are probably sound, it’s just that it doesn’t have the feel of well-motivatedness yet. It’s more one of those “If I have to choose to spend a few hours either reading about dark matter or reading about where decision theory meets human decision policies, I think it’s a potentially more fruitful idea to think about the latter” things.)
I really appreciate that you responded at roughly the right level of abstraction. It seems clear that the debate should be over the extent to which thaumaturgy is possible (including thaumaturgy that helps you build FAIs faster) because that’s the only way “theism” or “atheism” should affect our decision policy. (Outside of deciding which object level moral principles to pursue. I like traditional Anglican Christianity when it comes to object level morality even if I mostly ignore it.)
“The decision theory and physics backing the intuitions are probably sound”

Not by a long shot. Physics is probably mostly irrelevant here; it focuses only on our world. And decision theory is so flimsy and poorly understood that any related effort should be spent on improving it, for it’s not even clear what it suggests to be the case, much less how to make use of its suggestions.
I’ve seen QM become important because of decision problems where agents have to coordinate between quantum branches in order to reverse time. I can’t go into that here, but I’d at least like to flag that there are decision theory problems where things like quantum information theory show up.
That actually sounds like it has a possibility of being interesting.
Physics focuses on worlds across the entire quantum superposition. That’s a pretty big neighborhood, no? Agreed about decision theory. When I said “choose to spend” I meant “I have a few hours to kill but I’m too lazy to do problem sets at the moment”, not “I choose thaumaturgy as the optimal thing to study”.
Okay, that makes sense as a rich playground for acausal interaction. I don’t know what pieces of intuition about physics you refer to as useful for reasoning about acausal effects of human decisions though.
“It does add up to atheism, as a modest claim about our own world, the ‘real world’”

Not if there is evidence of angels and demons in our world, and you can interact with them in at least semi-predictably consequential ways. Which basically everyone believes except the goats, because everyone gets evidence except the goats. Doesn’t it suck to have a mind-universe that actively encourages you to fall into self-sustaining delusions? Yes, yes it does.
ETA: Apparently it’s 2012 now! My resolution: not to fall into self-sustaining delusion! Happy new year LW!