I was under the impression that there are many programs that want to manipulate our world, which can engage in acausal trade with each other to coordinate and act as a unified entity (one whose net weight may be comparable to that of the simpler programs that are just correct).
Has anyone ever actually presented an argument for such propositions? For example, by describing an ensemble of toy possible worlds in which even attempting “acausal trade” is rational, let alone one in which these coalitions of acausal traders exist?
It might make some sense to identify with all your subjective duplicates throughout the (hypothetical) multiverse, on the grounds that some fraction of them will engage in the same decision process, so that how you decide here is actually how a whole sub-ensemble of “you”s will decide.
But acausal trade, as I understand it, involves simulating a hypothetical other entity, who by hypothesis is simulating *you* in their possible world, so as to artificially create a situation in which two ensemble-identified entities can interact with each other.
I mean… Do you, in this world, have to simulate not just the other entity, but also its simulation of you?? So that there is now a simulation of you in *this* world? Or is that a detail you can leave out? Or do you, the original you, roleplay the simulation? Someone show me a version of this that actually makes sense.
Critch proved a bounded version of Löb’s theorem, and a related result in which two bounded agents with open source code can prove things about each other’s source code. The significance of the agents being bounded is that (if I recall the contents of the paper correctly, which I plausibly may not) they can often prove things about the other agent’s decision algorithm (and thus coordinate) in much less time than it would take to exhaustively compute the other agent’s decision.
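To give the flavor of that concretely: below is a minimal Python sketch (my own toy construction, not Critch’s; the “fairbot” name is just the conventional one from the open-source game theory literature). Each agent is reconstructed from its source code and asked, with a shrinking simulation budget, what it plays against the other’s source. Bounded mutual simulation with an optimistic base case stands in for the bounded proof search in the actual paper:

```python
# Toy illustration: agents with open source code coordinating by
# bounded mutual simulation. Not Critch's construction; just a sketch.

# The agent's source lives in a string, so an agent can be rebuilt
# from (and can hand around) its own code without any file I/O.
FAIRBOT_SRC = '''
def fairbot(my_source, opp_source, depth):
    # Out of simulation budget: optimistically cooperate. This base
    # case stands in for the Lobian step that the proof-based version
    # gets from Lob's theorem.
    if depth == 0:
        return "C"
    # Rebuild the opponent from its source and ask, with a smaller
    # budget, what it plays against *our* source.
    env = {}
    exec(opp_source, env)
    predicted = env["fairbot"](opp_source, my_source, depth - 1)
    return "C" if predicted == "C" else "D"
'''

env = {}
exec(FAIRBOT_SRC, env)
fairbot = env["fairbot"]

# Two copies, each inspecting the other's source, settle on mutual
# cooperation well within the budget.
print(fairbot(FAIRBOT_SRC, FAIRBOT_SRC, 3))  # -> "C"
```

Note that this toy collapses to mutual defection if the base case defects instead, which is roughly why the Löbian machinery is doing real work in the proof-based version.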
I guess it makes sense, given enough assumptions. There’s a multiverse; in some fraction of universes there are intelligences which figure out the correct theory of the multiverse; some fraction of those intelligences come up with the idea of acausally coordinating with intelligences in other universes, via a shared model of the multiverse, and are motivated to do so; and then the various island populations of intelligences who are motivated to attempt such a thing, try to reason about each other’s reasoning, and act accordingly.
I suppose it deserves its place in the spectrum of arcane possibilities that receive some attention. But I would still like to see someone model this at the “multiverse level”. Using the language of programs: if we consider some set of programs that *hasn’t* been selected precisely so that they will engage in acausal coordination—perhaps the set of *all* well-formed programs in some very simple programming language—what are the prospects for the existence of nontrivial acausal trade networks? They may be very rare, they may be vastly outnumbered by programs which made a modeling error and are “trading” with nonexistent partners, and so on.
As far as I remember, a large constant fraction of the prior weight of any string comes from the single shortest program predicting that string, so many programs cooperating shouldn’t matter much.
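If I’m reconstructing the standard statement correctly, the picture is this (a sketch, assuming the usual Solomonoff setup with a prefix universal machine U):

```latex
% Solomonoff prior of a string x under a prefix universal machine U,
% where U(p) = x* means: on program p, U outputs something beginning with x.
\[
  M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}.
\]
% The shortest contributing program has length K(x), so by itself it
% contributes 2^{-K(x)}. The coding theorem gives
\[
  M(x) \le 2^{\,c - K(x)} \quad \text{for some constant } c \text{ independent of } x,
\]
% so the single shortest program's share of the total weight is at least
\[
  \frac{2^{-K(x)}}{M(x)} \ge 2^{-c}.
\]
```

So a coalition of longer programs can only shift the predictive mixture by a bounded factor relative to the shortest program; whether that constant is small enough to ignore in practice seems like the real remaining question.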