Consider a different problem: a group of people are posed some technical or mathematical challenge. Each participant is given a different subset of the information about the problem, and everyone knows what type of information every other participant gets.
Trivial example: you’re supposed to find the volume of a pyramid, you (participant 1) are given its height and the apex angles for two triangular faces, participant 2 is given the radius of the sphere on which all of the pyramid’s vertices lie and all angles of the triangular faces, participant 3 is given the areas of all faces, et cetera.
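To make participant 1's position concrete, here's a minimal sketch, assuming a right square pyramid (in which case the two face apex angles coincide, so height plus one apex angle pins down the volume exactly); the function and the specific numbers are mine, not part of the original setup:

```python
import math

def volume_from_height_and_apex_angle(h: float, theta: float) -> float:
    """Volume of a right square pyramid with height h, given the apex
    angle theta (radians) of a lateral triangular face.

    That face is isosceles: base a (the pyramid's base side) and equal
    sides e = sqrt(h^2 + a^2/2) (the lateral edges), with
    sin(theta/2) = (a/2) / e.  Solving for a gives
        a^2 = h^2 * sin^2(theta/2) / (1/4 - sin^2(theta/2)/2),
    valid for theta < 90 degrees.
    """
    s2 = math.sin(theta / 2) ** 2
    denom = 0.25 - s2 / 2
    if denom <= 0:
        raise ValueError("apex angle must be < 90 degrees for a right square pyramid")
    a_sq = h * h * s2 / denom
    return a_sq * h / 3  # V = (1/3) * base area * height

# Sanity check: base side 2, height 1 => lateral edge sqrt(3),
# sin(theta/2) = 1/sqrt(3), and the volume should be 4/3.
theta = 2 * math.asin(1 / math.sqrt(3))
print(volume_from_height_and_apex_angle(1.0, theta))  # ~1.3333
```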
Given this setup, if you’re skilled at geometry, you can likely figure out which of the participants can solve the problem exactly, which can only put upper and lower bounds on the volume, and what those upper/lower bounds are for each participant. You don’t need to model your competitors’ mental states: all you need to do is reason about the object-level domain, plus take into account what information they have. No infinite recursion happens, because you can abstract out the particulars of how others’ minds work.
This works assuming that everyone involved is perfectly skilled at geometry: that you don’t need to predict what mistakes the others would make (which would depend on the messy details of their minds).
Speculatively, this would apply to deception as well. You don’t necessarily need to model others’ brain states directly. If they’re all perfectly skilled at deception, you can predict what deceptions they’d try to use and how effective they’d be based on purely objective information: the sociopolitical landscape, their individual skills and comparative advantages, et cetera. You can “skip to the end”: predict everyone playing their best-move-in-circumstances-where-everyone-else-plays-their-best-move-too.
Objectively, the distribution of comparative advantages is likely very uneven, so even if everyone makes their best move, some would hopelessly lose. (E. g., imagine that one of the experts is a close friend of a government official while another is a controversial figure who'd previously been found guilty of fraud.)
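To make "skip to the end" concrete: under the perfect-play assumption, predicting everyone's deceptions reduces to solving for the equilibrium of the objective payoff structure, no mind-modeling required. Here's a toy sketch; the strategies and payoff numbers are entirely invented, chosen so that player B (the handicapped "controversial figure") loses even when both sides play optimally:

```python
import itertools

# payoffs[a][b] = (payoff to A, payoff to B) when A plays strategy a
# and B plays strategy b.  Strategies: 0 = "honest framing",
# 1 = "aggressive smear".  Numbers are made up; B's reputational
# handicap is baked in as uniformly negative payoffs.
payoffs = [
    [(3, -1), (4, -2)],
    [(2, -1), (3, -3)],
]

def pure_nash(payoffs):
    """Enumerate pure-strategy Nash equilibria of a two-player game."""
    n_a, n_b = len(payoffs), len(payoffs[0])
    equilibria = []
    for a, b in itertools.product(range(n_a), range(n_b)):
        a_is_best = all(payoffs[a][b][0] >= payoffs[x][b][0] for x in range(n_a))
        b_is_best = all(payoffs[a][b][1] >= payoffs[a][y][1] for y in range(n_b))
        if a_is_best and b_is_best:
            equilibria.append(((a, b), payoffs[a][b]))
    return equilibria

print(pure_nash(payoffs))  # [((0, 0), (3, -1))]: B loses even at equilibrium
```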
Speculatively, something similar works for the MUP stuff. You don’t actually need to model the individual details of other universes. You can just use abstract reasoning to figure out what kinds of universes are dense across Tegmark IV, figure out what (distributions over) entities inhabit them, figure out (distributions over) how they’d reason and what (distributions over) simulations they’d run, and figure out to what (distribution over the) output this process converges given the objective material constraints involved. Then take actions that skew said distribution-over-the-output in a way you want.
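As a toy stand-in for that last step, "figure out to what distribution the process converges": if the whole everyone-reasons-about-everyone loop can be compressed into a fixed stochastic map over outcomes, you can solve for its fixed point directly instead of simulating any individual mind. The 3-outcome transition matrix below is invented purely for illustration; nothing about Tegmark IV is actually being modeled:

```python
def step(dist, T):
    """One application of the map: new[j] = sum_i dist[i] * T[i][j]."""
    n = len(dist)
    return [sum(dist[i] * T[i][j] for i in range(n)) for j in range(n)]

def fixed_point(dist, T, tol=1e-12, max_iter=10_000):
    """Iterate the map until the distribution stops changing."""
    for _ in range(max_iter):
        new = step(dist, T)
        if max(abs(x - y) for x, y in zip(new, dist)) < tol:
            return new
        dist = new
    return dist

# Made-up transition matrix over three possible "outputs".
T = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
]
print(fixed_point([1.0, 0.0, 0.0], T))  # same limit from any starting distribution
```

The point is just the shape of the computation: the limiting distribution is a property of the map itself (the objective constraints), not of the starting point, which is what would let you skew it without tracking individual minds.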
Again, this is speculative: I don’t know that there are any math proofs that this is possible. But it seems plausible enough that something-like-this might work, and my understanding is that the MUP argument (and other kinds of acausal-trade setups) indeed uses this as a foundational assumption. (I. e., it assumes that the problem is isomorphic (in a relevant sense) to my pyramid challenge above.)
(IIRC, the Acausal Normalcy post outlines some of the relevant insights, though I think it doesn’t precisely focus on the topic at hand.)