Suppose that there is some search process that is looking through a collection of things, and you are an element of the collection. Then, in general, it’s difficult to imagine how you (just you) can reason about the whole search in such a way as to “steer it around” in your preferred direction.
I think this is easy to imagine. I’m an expert who is among 10 experts recruited to advise some government on making a decision. I can guess some of the signals that the government will use to choose who among us to trust most. I can guess some of the relative weaknesses of fellow experts. I can try to use this to manipulate the government into taking my opinion more seriously. I don’t need to create a clone government and hire 10 expert clones in order to do this.
The other 9 experts can also make guesses about which signals the government will use and what the relative weaknesses of their fellow experts are, and they can also act on those guesses. So in order to reason about what the outcome of the search will be, you have to reason about both yourself and the other 9 experts, unless you somehow know that you are much better than the other 9 experts at steering the outcome of the search as a whole. But in that case only you can steer the search. The other 9 experts would fail if they tried to use the same strategy you're using.
Okay, if you accept this modified scenario where one expert knows they are much better than the other 9, then that is sufficient: it's exactly the kind of scenario nostalgebraist claimed was difficult to imagine. So that's enough to prove the point I was trying to make.
But the original example works too. It's just a simultaneous-move game, and it'll be won by whichever player is best at playing it. It's clearly possible to play the game well, despite the self-reference involved in thinking about how to play better.
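To make "simultaneous-move game" concrete, here's a minimal sketch (the strategies and payoff numbers are invented for illustration, not taken from the discussion above) that finds the pure-strategy Nash equilibria of a two-player game by checking, for each strategy profile, whether either player could gain by unilaterally deviating:

```python
from itertools import product

# Hypothetical payoffs: each expert simultaneously picks a lobbying style;
# payoffs[(a, b)] = (payoff to expert A, payoff to expert B).
payoffs = {
    ("aggressive", "aggressive"): (1, 1),
    ("aggressive", "cautious"):   (4, 0),
    ("cautious",   "aggressive"): (0, 4),
    ("cautious",   "cautious"):   (3, 3),
}
strategies = ["aggressive", "cautious"]

def pure_nash_equilibria(payoffs, strategies):
    """Profiles where neither player gains by unilaterally deviating,
    computed from the payoff structure alone (no model of the opponent's
    inner workings is needed)."""
    equilibria = []
    for a, b in product(strategies, repeat=2):
        pa, pb = payoffs[(a, b)]
        a_best = all(payoffs[(alt, b)][0] <= pa for alt in strategies)
        b_best = all(payoffs[(a, alt)][1] <= pb for alt in strategies)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(payoffs, strategies))  # [('aggressive', 'aggressive')]
```

Both players can run this same computation and arrive at the same answer, which is the sense in which the game is "won by whoever plays it best" rather than by whoever out-simulates the other's brain.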
Consider a different problem: a group of people are posed some technical or mathematical challenge. Each individual person is given a different subset of the information about the problem, and each person knows what type of information every other participant gets.
Trivial example: you’re supposed to find the volume of a pyramid, you (participant 1) are given its height and the apex angles for two triangular faces, participant 2 is given the radius of the sphere on which all of the pyramid’s vertices lie and all angles of the triangular faces, participant 3 is given the areas of all faces, et cetera.
Given this setup, if you're skilled at geometry, you can likely figure out which participants can solve the problem exactly, which can only put upper and lower bounds on the volume, and what those bounds are for each participant. You don't need to model your competitors' mental states: all you need to do is reason about the object-level domain and take into account what information each participant has. No infinite recursion happens, because you can abstract away the particulars of how others' minds work.
This works assuming that everyone involved is perfectly skilled at geometry: that you don’t need to predict what mistakes the others would make (which would depend on the messy details of their minds).
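As a toy version of this (the square-pyramid setup and the specific information subsets are my own simplification, not the original challenge), one participant's information pins the volume down exactly, while another's only bounds it, and you can compute both facts from the geometry alone:

```python
import math

def volume_square_pyramid(side: float, height: float) -> float:
    """Exact volume: a participant who knows base side and height is done."""
    return side**2 * height / 3.0

def volume_from_lateral_edge(edge: float, side: float) -> float:
    """Volume of a square pyramid with a given lateral edge, as a function
    of the unknown base side (height^2 = edge^2 - side^2 / 2)."""
    h_sq = edge**2 - side**2 / 2.0
    if h_sq <= 0:
        return 0.0
    return side**2 * math.sqrt(h_sq) / 3.0

def bounds_from_lateral_edge(edge: float, steps: int = 100_000):
    """A participant who only knows the lateral edge can merely bound the
    volume: scan over all base sides consistent with that edge."""
    s_max = edge * math.sqrt(2)  # beyond this the pyramid degenerates
    upper = max(volume_from_lateral_edge(edge, i * s_max / steps)
                for i in range(1, steps))
    return 0.0, upper  # the volume can be arbitrarily close to 0

# Participant 1's data determines the volume exactly:
v1 = volume_square_pyramid(side=2.0, height=3.0)  # = 4.0

# Participant 2 only gets bounds; the analytic maximum is 4L^3 / (9*sqrt(3)):
lo, hi = bounds_from_lateral_edge(edge=3.0)
analytic_hi = 4 * 3.0**3 / (9 * math.sqrt(3))
```

The point is that everything here is object-level geometry plus "what does each participant know": no modeling of anyone's mental state is involved.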
Speculatively, this would apply to deception as well. You don’t necessarily need to model others’ brain states directly. If they’re all perfectly skilled at deception, you can predict what deceptions they’d try to use and how effective they’d be based on purely objective information: the sociopolitical landscape, their individual skills and comparative advantages, et cetera. You can “skip to the end”: predict everyone playing their best-move-in-circumstances-where-everyone-else-plays-their-best-move-too.
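One standard way to make "skip to the end" precise is to compute the fixed point of mutual best responses. The sketch below uses a textbook linear Cournot duopoly (the numbers are illustrative, not anything from the discussion above): each player's best move is a simple function of the other's, and iterating converges to the same equilibrium that pure object-level reasoning derives directly:

```python
def best_response(q_other: float, a: float = 10.0, c: float = 1.0) -> float:
    """Cournot-style best reply: the optimal quantity given the rival's
    quantity, for linear demand P = a - Q and marginal cost c."""
    return max(0.0, (a - c - q_other) / 2.0)

def skip_to_the_end(q0: float = 0.0, iters: int = 100):
    """Iterate mutual best responses until they converge. The fixed point
    is what 'everyone plays their best move in circumstances where everyone
    else plays their best move too' amounts to."""
    q1 = q2 = q0
    for _ in range(iters):
        q1, q2 = best_response(q2), best_response(q1)
    return q1, q2

q1, q2 = skip_to_the_end()  # both converge to (a - c) / 3 = 3.0
```

The iteration is geometric (the error halves each round), so "predicting everyone's best move given everyone else's best move" is a well-posed computation here rather than an infinite regress.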
Objectively, the distribution of comparative advantages is likely very uneven, so even if everyone makes their best move, some would hopelessly lose. (E.g., imagine that one of the experts is a close friend of a government official and another is a controversial figure previously found guilty of fraud.)
Speculatively, something similar works for the MUP stuff. You don't actually need to model the individual details of other universes. You can just use abstract reasoning to figure out what kinds of universes are dense across Tegmark IV, figure out what (distributions over) entities inhabit them, figure out (distributions over) how they'd reason and what (distributions over) simulations they'd run, and to what (distribution over the) output this process converges given the objective material constraints involved. Then take actions that skew said distribution-over-the-output in a way you want.
Again, this is speculative: I don't know that there are any math proofs that this is possible. But it seems plausible enough that something-like-this might work, and my understanding is that the MUP argument (and other kinds of acausal-trade setups) indeed uses this as a foundational assumption. (I.e., it assumes that the problem is isomorphic, in a relevant sense, to my pyramid challenge above.)
(IIRC, the Acausal Normalcy post outlines some of the relevant insights, though I think it doesn’t precisely focus on the topic at hand.)