Critch proved a bounded version of Löb's theorem, and a related result in which two bounded agents with open source code can prove things about each other's source code. The significance of the agents being bounded is that (if I recall the contents of the paper correctly, which I may plausibly not) they can often prove things about the other agent's decision algorithm (and thus coordinate) in much less time than it would take to exhaustively compute the other agent's decision.
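A toy illustration of the cheaper-than-simulation point (my own sketch, not Critch's construction): in an open-source prisoner's dilemma, a "CliqueBot"-style agent cooperates exactly when the opponent's source is byte-identical to its own. That is a string comparison, costing time linear in the length of the source, with no need to simulate the opponent's decision process at all.

```python
# Agents are source strings; each evaluates to a function from
# (my_source, opponent_source) to a move. This is a crude stand-in for
# "bounded agents with open source code", not Critch's proof-based setup.
CLIQUE_BOT = 'lambda my_src, their_src: "C" if their_src == my_src else "D"'
DEFECT_BOT = 'lambda my_src, their_src: "D"'

def play(src_a: str, src_b: str):
    """One-shot open-source game: each agent sees both sources."""
    agent_a = eval(src_a)
    agent_b = eval(src_b)
    return agent_a(src_a, src_b), agent_b(src_b, src_a)

print(play(CLIQUE_BOT, CLIQUE_BOT))  # ('C', 'C'): mutual cooperation
print(play(CLIQUE_BOT, DEFECT_BOT))  # ('D', 'D'): no exploitation
```

The syntactic equality check is of course far more brittle than the proof search in the actual paper (it fails to cooperate with semantically identical but textually different agents), but it makes the asymmetry vivid: deciding takes O(length of source), while exhaustively computing the opponent's behavior could take arbitrarily long.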
I guess it makes sense, given enough assumptions. There's a multiverse; in some fraction of universes there are intelligences which figure out the correct theory of the multiverse; some fraction of those intelligences come up with the idea of acausally coordinating with intelligences in other universes, via a shared model of the multiverse, and are motivated to do so; and then the various island populations of intelligences who are motivated to attempt such a thing try to reason about each other's reasoning, and act accordingly.
I suppose it deserves its place in the spectrum of arcane possibilities that receive some attention. But I would still like to see someone model this at the "multiverse level". Using the language of programs: if we consider some set of programs that *hasn't* been selected precisely so that they will engage in acausal coordination—perhaps the set of *all* well-formed programs in some very simple programming language—what are the prospects for the existence of nontrivial acausal trade networks? Such networks may be very rare, they may be vastly outnumbered by programs which made a modeling error and are "trading" with nonexistent partners, and so on.
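One crude way to begin the kind of survey asked for above (everything here is my invention, purely illustrative): define a tiny toy language, enumerate *every* well-formed program in it, pair them all up in an open-source game, and count which pairs achieve mutual cooperation versus which programs "cooperate" with partners that never reciprocate.

```python
# Toy language (an assumption, not from any paper): a program is a string
# "predicate:move_if_true:move_if_false", where the predicate tests the
# opponent's source. Enumerating all programs lets us measure how common
# mutual cooperation is in an *unselected* population.
from itertools import product

PREDICATES = {
    "always":  lambda mine, theirs: True,
    "is_self": lambda mine, theirs: theirs == mine,   # quine-like self-check
    "says_C":  lambda mine, theirs: "C" in theirs,    # naive source-reading
}
MOVES = ["C", "D"]

# Every well-formed program in the language: 3 * 2 * 2 = 12 programs.
PROGRAMS = [f"{p}:{t}:{f}" for p, t, f in product(PREDICATES, MOVES, MOVES)]

def run(src: str, opponent_src: str) -> str:
    pred, if_true, if_false = src.split(":")
    return if_true if PREDICATES[pred](src, opponent_src) else if_false

# Count ordered pairs that reach mutual cooperation, and programs that
# cooperate with an opponent who defects on them ("modeling errors").
mutual = [(a, b) for a in PROGRAMS for b in PROGRAMS
          if run(a, b) == "C" and run(b, a) == "C"]
exploited = [(a, b) for a in PROGRAMS for b in PROGRAMS
             if run(a, b) == "C" and run(b, a) == "D"]
print(f"{len(mutual)} mutual-cooperation pairs, "
      f"{len(exploited)} exploited pairs, out of {len(PROGRAMS)**2}")
```

Even this twelve-program universe exhibits the qualitative question: the `is_self:C:D` program cooperates only with its exact copy, while programs like `always:C:C` cooperate indiscriminately and get exploited. A serious version would need a richer language and some measure over programs, but the census structure would be the same.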