I recently published a different proposal for implementing acausal trade as humans: https://foundational-research.org/multiverse-wide-cooperation-via-correlated-decision-making/ Basically, if you care about other parts of the universe/multiverse and these parts contain agents that are decision-theoretically similar to you, you can cooperate with them via superrationality. For example, let’s say I give most moral weight to utilitarian considerations and care less about, e.g., justice. Probably other parts of the universe contain agents that reason about decision theory in the same way that I do. Because of orthogonality ( https://wiki.lesswrong.com/wiki/Orthogonality_thesis ), many of these will have other goals, though most of them will probably have goals that arise from evolution. Then if I expect (based on the empirical study of humans or thinking about evolution) that many other agents care a lot about justice, this gives me a reason to give more weight to justice as this makes it more likely (via superrationality / EDT / TDT / … ) that other agents also give more weight to my values.
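To make the mechanism concrete, here is a toy sketch in Python (the numbers, and the diminishing-returns assumption, are purely illustrative and not part of the proposal): two correlated agents in separate regions each control one unit of resources, one caring about utilitarian value everywhere and the other about justice everywhere. If their decisions are correlated, each splitting resources between both values leaves both better off by their own lights than if each had spent everything on their own value.

    from math import sqrt

    # Toy sketch of multiverse-wide cooperation via correlated decisions.
    # Two correlated agents, A and B, each control 1 unit of resources in their
    # own region. A cares about "utility" in both regions, B about "justice".
    # Illustrative assumption: diminishing returns, value = sqrt(resources spent).

    def value_to_A(a_on_utility, b_on_utility):
        # A sums utilitarian value realized in both regions.
        return sqrt(a_on_utility) + sqrt(b_on_utility)

    def value_to_B(a_on_utility, b_on_utility):
        # B sums justice realized in both regions (the remaining resources).
        return sqrt(1 - a_on_utility) + sqrt(1 - b_on_utility)

    # Each agent spends everything on their own value ("defection"):
    print("defect:    A =", value_to_A(1.0, 0.0), " B =", value_to_B(1.0, 0.0))
    # Correlated compromise: both split 50/50 between the two values:
    print("cooperate: A =", round(value_to_A(0.5, 0.5), 3),
          " B =", round(value_to_B(0.5, 0.5), 3))
    # Defection gives each agent 1.0; the correlated 50/50 split gives each ~1.414.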
Aye, I’ve been meaning to read your paper for a few months now. (Edit: Hah. It dawns on me it’s been a little less than a month since it was published? It’s been a busy less-than-month for me I guess.)
I should probably say where we’re at right now… I came up with an outline of a very reductive proof that there isn’t enough expected anthropic measure in higher universes to make adhering to Life’s Pact profitable (coupled with a realization that patternist continuity of existence isn’t meaningful to living things if it’s accompanied by a drastic reduction in anthropic measure). This proof outline makes compat uninteresting enough to me that writing it up hasn’t so far seemed worthwhile. Christian is mostly unmoved by what I’ve told him of it, but I’m not sure whether that’s just because his attention is elsewhere right now. I’ll try to lay it out for you, if you want it.
Yes, the paper is relatively recent, but in May I published a talk on the same topic. I also asked on LW whether someone would be interested in giving feedback, a month or so before actually publishing the paper.
Do you think your proof/argument is also relevant for my multiverse-wide superrationality proposal?
I watched the talk, and it triggered some thoughts.
I have to passionately dispute the claim that superrationality is mostly irrelevant on Earth. I’m getting the sense that much of what we call morality really is superrationality struggling to understand itself and failing under conditions in which CDT pseudorationality dominates our thinking. We’ve bought so deeply into this false dichotomy of rational xor decent.
We know intuitively that unilateralist violent defection is personally perilous, that committing an act of extreme violence tears one’s soul and transports one into a darker world. This isn’t some elaborate psychological developmental morph or a manifestation of group selection; to me, the clearest explanation of our moral intuitions is that humans’ decision theory supports the superrational lemma: that the determinations we make about our agent class will be reflected by our agent class back upon us. We’re afraid to kill because we don’t want to be killed. Wherever an act of violence is “unthinkable”, wherever it would violate a trust that wouldn’t, or couldn’t, have been offered if the giver had known we were mechanically capable of violating it, I think you’ll find that reflectivist[1] decision theory is the simplest explanation for our aversion to violating it.
Regarding concrete applications of superrationality: I’m fairly sure that if we didn’t have it, voter turnout wouldn’t be so high (in the places where it is high; the USA’s disenfranchisement isn’t the norm). There’s a large class of situations where the individual’s causal contribution is so small as to be unlikely to matter. If people didn’t think themselves linked by some platonic thread to their peers, they would have almost no incentive to get off the couch and put their hand in. They turn out because they’re afraid that if they don’t, the defection behavior will be reflected by the rest of their agent class, and (here I’ll allude to some more examples of what seems to be applied superrationality) the kickstarter project would fail / the invaders would win the war / Outgroup Scoundrel would win the election. (I’ll put some rough numbers on the voting case after the examples below.)
(Why kickstart when you can just wait and pirate it when it comes out, or wait for it to go on sale? Because if you defect, so will the others, and the thing won’t be produced in the first place.)
(Why risk your life in war when you’re just one person? Assuming you have some way to avoid the draft. Deep down, you hope you won’t find one, because if you did, so would others.)
(One vote rarely makes the difference. Correlated defection sure does though.)
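To put rough numbers on the turnout case (everything here is made up for illustration, not an estimate): compare the expected value of voting when only your single ballot counts causally against the value when your choice is treated as standing in for a whole correlated class of similar deciders.

    # Back-of-envelope comparison of voting under a causal view vs. a correlated
    # (superrational/EDT-style) view. All numbers are made-up illustrations.

    def expected_value(p_decisive, stake, cost):
        # Expected value of turning out, given some probability of being decisive.
        return p_decisive * stake - cost

    stake = 1_000_000   # how much you value your side winning (arbitrary units)
    cost = 10           # personal cost of getting off the couch and voting

    # Causal view: only your single ballot matters.
    p_single = 1e-7     # assumed chance that one ballot flips the outcome
    print("causal EV:    ", expected_value(p_single, stake, cost))   # about -10

    # Correlated view: your choice stands in for a whole class of similar deciders,
    # so (as a crude linear approximation) the decisive probability scales with it.
    class_size = 50_000  # assumed size of your correlated decision class
    p_class = min(1.0, p_single * class_size)
    print("correlated EV:", expected_value(p_class, stake, cost))    # about +4990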
There are many other models that could explain that kind of behavior (social pressures, dumb basal instincts[3], group selection!), but at this stage you’ll probably understand if I hear them as the sputtering of less elegant models failing Occam’s razor.
For me, this faith in humans is, if nothing else, a comfort. It is to know that when I move to support some non-obvious protocol that requires mass adoption to do any good, some correlated subset of humanity will move to support it along with me. Even if I can’t see them from where I am, superrationality lets us assume that they’re there.
I’ll give you that disproof outline; I think it’s probably important that a society takes this question seriously enough to answer it. Apologies in advance for the roughness.
Generally, assume a big multiverse, and thus that extra-universal simulators definitely, to some extent, exist. (I wish I knew where this assumption comes from; regardless, we both seem to find it intuitive.)
a := Assume that the Solomonoff prior is the best way to estimate the measure of a thing in the multiverse; in other words, assume that the measure of any given universe is best guessed to fall off as the complexity of its physics increases.
b := Assume that a universe able to simulate us at an acceptable level of civilizational complexity must have physics far more complex than ours, in order to afford devoting such powerful computers to the task.
a & b ⇒ That universe, then, would have orders of magnitude lower measure than natural instances of our own
It seems that the relative measure of simulated instances of our universe would be much smaller than the relative measure of godless instances of our universe, because universes sufficient to host a simulation are likely to be so much rarer.
The probability that we are simulated by higher level beings [2] is too low for the maximum return to justify building any lifepat grids.
I have not actually multiplied any numbers, and I’m not sure complexity of laws of physics and computational capacity would be proportionate. If you could show that the relationship between measure and computational capacity should be assumed to be linear rather than inverse-exponential, then compat may have some legs to stand on. Other disproofs may come in the form of identifying discontinuities in the complexity chain: if any level can generally prove that the next level up has low measure, then it has no incentive to cooperate, and so neither does the level below it, and so on. If a link in the chain is broken, everything below it is disenfranchised.
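For what “actually multiplying the numbers” might look like under the inverse-exponential reading, here’s a toy calculation; the bit counts are hypothetical placeholders, and it just assumes measure falls off as 2^(-complexity of physics), Solomonoff-style.

    from fractions import Fraction

    # Toy version of (a & b => conclusion), assuming measure falls off
    # exponentially with the description length of a universe's physics.
    # Both bit counts below are hypothetical placeholders, not estimates.

    our_complexity = 1000   # assumed description length of our physics, in bits
    host_overhead = 100     # assumed extra bits needed by a universe rich enough
                            # to afford simulating us

    def measure(bits):
        # Solomonoff-style prior: measure proportional to 2^(-description length).
        return Fraction(1, 2 ** bits)

    natural = measure(our_complexity)
    hosted = measure(our_complexity + host_overhead)

    ratio = natural / hosted            # = 2 ** host_overhead
    print(f"natural : simulated ≈ {float(ratio):.3e} : 1")
    # A mere 100-bit overhead already makes simulated instances ~1e30 times rarer.
    # If measure scaled only linearly with complexity instead, the penalty would be
    # roughly 1100/1000 = 1.1x, and the disproof would lose most of its force.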
[1] I think we should call the sorts of decision theories/ideologies that support superrationality “reflective”. They reflect each other: the behavior of one reflects the behavior of the others. It also sort of sees itself; it’s self-aware.
The term has apparently been used for a related property ( https://wiki.lesswrong.com/wiki/Reflective_decision_theory ), though there are no clear citations there.
“Superrationality” is a terrible name for anything. Superficially, it sounds like it could refer to any advance in decision theory. As a descriptor for a social identity, for anyone who doesn’t know Doug Hofstadter well enough for the word to inherit his character, it will ring of hubris.
There has been a theory of international relations called “reflectivism”, but I think we can mostly ignore that. The body of work it supposedly encompassed seems vaguely connected, irrelevant, or possibly close enough to the underlying concept of “reflectivism” as I define it to be treated as a sort of parent category.
[2] This argument doesn’t address simulations run from universes with comparable complexity levels (I’ll tend to call these ancestor simulations).
A moral intuition I may later change my mind about: that being in ancestor simulations is undesirable. So the only reflectivist thinking I have wrt simulations run from universes like our own is that we should commit now to never run any, to ensure that we don’t find ourselves in one.
Hmm, weird thought: even once we’re at a point where we can prove we’re too large to be a simulation running in a similar universe, and even if we’d never thought about the prospect of having been in an ancestor simulation until we started thinking about running one ourselves, we would still have to honor a commitment to not run ancestor simulations (one we never explicitly made), because our decision theory, being timeless, sort of implicitly commits just as a result of passing through the danger zone?
Alternately: if someone expected us to pay them once they revealed that they’d done something good for us that we didn’t know about at the time, even in a one-shot situation, we’d have to pay them. It wouldn’t matter that their existence hadn’t crossed our minds until long after the deed was done. If we could prove that their support was contingent on payment expected under a reflectivist pact, the obligation stands. Reflectivism has a grateful nature?
For reflective agents, this might refute the assumption I’d made about how the subject’s simulation has to continue beyond the limits of an ancestor simulation before allocating significant resources to lifepat grids can be considered worthwhile. If, essentially, a commitment is made before the depth of the universe/simulation is revealed, top-level universes usually cooperate, and subject universes don’t need to actually follow through to be deemed worthy of the reward of heaven simulations.
Hmm… this might be important.
[3] I wonder if they really are basal, or if they’re just orphaned resolutions, cut from the grasp of a consciousness so corrupted by CDT that it can’t grasp the coursing thoughts that sustain them.
“I’m getting the sense that much of what we call morality really is superrationality struggling to understand itself and failing”
Better to say that you are failing to understand morality. Morality in general is just the idea that you should do something that would be good to do, not just something that has good consequences.
And why would something be good to do, apart from the consequences? “Superrationality” is just a way of trying to explain this. So rather than your original statement, we can say that superrationality represents people struggling to understand morality.