Well, as I see UDT, it also makes decisions locally, with the understanding that this local computation is meant to find the best global solution given the other such locally computed decisions. That is, each local computation can make a mistake, making the best global solution impossible, which may make it very important for the other local computations to predict (or at least notice) this mistake and find the local decisions that, together with this mistake, constitute the best remaining global solution, and so on. The structure of states of knowledge produced by the local computations for the adjacent local computations is meant to optimize the algorithm of local decision-making in those states, giving most of the answer explicitly and leaving the local computations to only move the goalpost a little bit.
The nontrivial form of the decision-making comes from the loop that makes local decisions maximize preference given the other local decisions, and those other local decisions do the same. Thus, the local decisions have to coordinate with each other, and they can do that only through the common algorithm and logical dependencies between different states of knowledge.
At which point the fact that these local decisions are part of the same agent seems to become irrelevant, so that a more general problem needs to be solved: one of cooperation between any kinds of agents, or, even more generally, between processes that aren’t exactly “agents”.
One thing I don’t understand is that both you and Eliezer talk confidently about how agents would make use of logical dependencies/correlations. You guys don’t seem to think this is a really hard problem.
But we don’t even know how to assign a probability (or whether it even makes sense to do so) to a simple mathematical statement like P=NP. How do we calculate and/or represent the correlation between one agent and another agent (except in simple cases like where they’re identical or easily proven to be equivalent)? I’m impressed by how far you’ve managed to push the idea of updatelessness, but it’s hard for me to process what you say, when the basic concept of logical uncertainty is still really fuzzy.
I can argue pretty forcefully that (1) a causal graph in which uncertainty has been factored into uncorrelated sources must have nodes or some kind of elements corresponding to logical uncertainty; (2) that in presenting Newcomblike problems, the dilemma-presenters are in fact talking of such uncertainties and correlations; (3) that human beings use logical uncertainty all the time in an intuitive sense, to what seems like good effect.
Of course none of that is actually having a good formal theory of logical uncertainty—I just drew a boundary rope around a few simple logical inferences and grafted them onto causal graphs. Two-way implications get represented by the same node, that sort of thing.
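To make the grafting concrete, here is a minimal sketch in Python (my own illustration, with an assumed 0.5 prior and made-up function names, not the actual TDT formalism): a single node for the output of the agent's decision algorithm feeds both the agent's action and the predictor's prediction, so the two come out correlated through logical uncertainty rather than through any physical mechanism.

```python
# A minimal sketch (illustration only, not the actual TDT formalism) of a
# causal graph with one "logical" node: the output of the agent's decision
# algorithm. That single node feeds both the agent's action and the
# predictor's prediction, so the two always agree even though neither
# physically causes the other.

p_onebox = 0.5  # P(the algorithm outputs "one-box"): an assumed logical prior

def action(algo_output):       # the agent executes its algorithm's output
    return algo_output

def prediction(algo_output):   # the predictor, in effect, runs the same algorithm
    return algo_output

# Marginalize over the shared logical node: action and prediction always agree.
joint = {}
for out, p in [("one-box", p_onebox), ("two-box", 1 - p_onebox)]:
    key = (action(out), prediction(out))
    joint[key] = joint.get(key, 0.0) + p
print(joint)  # {('one-box', 'one-box'): 0.5, ('two-box', 'two-box'): 0.5}
```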
I would be drastically interested in a formal theory of logical uncertainty for non-logically-omniscient Bayesians.
Meanwhile—you’re carrying out logical reasoning about whole other civilizations starting from a vague prior over their origins, every time you reason that most superintelligences (if any) that you encounter in faraway galaxies, will have been built in such a way as to maximize a utility function rather than say choosing the first option in alphabetical order, on the likes of true PDs.
I want to try to understand the nature of logical correlations between agents a bit better.
Consider two agents who are both TDT-like but not perfectly correlated. They play a one-shot PD, but in turns: first one player moves, then the other sees that move and makes its own move.
In normal Bayesian reasoning, once the second player sees the first player’s move, all correlation between them disappears. (Does this happen in your TDT?) But in UDT, the second player doesn’t update, so the correlation is preserved. So far so good.
Now consider what happens if the second player has more computing power than the first, so that it can perfectly simulate the first player and compute its move. After it finishes that computation and knows the first player’s move, the logical correlation between them disappears, because no uncertainty implies no correlation. So, given there’s no logical correlation, it ought to play D. The first player would have expected that, and also played D.
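To make the scenario concrete, here is a toy sketch (the payoff numbers, function names, and the players' lines of reasoning are my assumptions, not a worked-out UDT/TDT calculation) of how exact simulation by the second player collapses the game to mutual defection:

```python
# A toy sketch of the sequential one-shot PD above. Payoff entries are
# (first player's payoff, second player's payoff); the numbers are assumed.

PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(first_move):
    # The dominant reply to a known first move under these payoffs.
    return max(("C", "D"), key=lambda my: PAYOFFS[(first_move, my)][1])

def second_player(first_move):
    # Has enough computing power to have simulated the first player exactly,
    # so it treats first_move as a known constant: no uncertainty, no
    # correlation, just a best response.
    return best_response(first_move)

def first_player():
    # Cannot simulate the second player, but can reason abstractly: "whatever
    # I output, the second player will learn it by simulation and best-respond."
    return max(("C", "D"), key=lambda my: PAYOFFS[(my, best_response(my))][0])

m1 = first_player()
m2 = second_player(m1)
print(m1, m2, PAYOFFS[(m1, m2)])  # -> D D (1, 1): mutual defection
```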
Looking at my formulation of UDT, this may or may not happen, depending on what the “mathematical intuition subroutine” does when computing the logical consequences of a choice. If it tries to be maximally correct, then it would do a full simulation of the opponent when it can, which removes logical correlation, which causes the above outcome. Maybe the second player could get a better outcome if it doesn’t try to be maximally correct, but the way my theory is formulated, what strategy the “mathematical intuition subroutine” uses is not part of what’s being optimized.
So, I’m not sure what to do about this, except to add it to the pile of unsolved problems.
Coming to this a bit late :), but I’ve got a basic question (which I think is similar to Nesov’s, but I’m still confused after reading the ensuing exchange). Why would it be that,
The first player would have expected that, and also played D.
If the second player has more computing power (so that the first player cannot simulate it), how can the first player predict what the second player will do? Can the first player reason that, since the second player could simulate it, the second player will decide that they’re uncorrelated and play D no matter what?
That dependence on computing power seems very odd, though maybe I’m sneaking in expectations from my (very rough) understanding of UDT.
Now consider what happens if the second player has more computing power than the first, so that it can perfectly simulate the first player and compute its move. After it finishes that computation and knows the first player’s move, the logical correlation between them disappears, because no uncertainty implies no correlation. So, given there’s no logical correlation, it ought to play D. The first player would have expected that, and also played D.
The first player’s move could depend on the second player’s, in which case the second player won’t get the answer in a closed form; the answer must be a function of the second player’s move...
But if the second player has more computational power, it can just keep simulating the first player until the first player runs out of clock cycles and has to output something.
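A rough, self-contained sketch of that point, with a generator standing in for the first player's program (one yield per "clock cycle") and made-up step budgets; the `simulate_until_output` helper is hypothetical, not part of any actual formulation:

```python
# If the simulator's step budget exceeds the simulated player's, it can just
# step that player's computation until the player is forced to output.

def first_player_program():
    for _ in range(100):   # deliberating...
        yield None
    yield "D"              # out of clock cycles; must output something

def simulate_until_output(program, my_budget=1_000_000):
    steps = 0
    for output in program():
        steps += 1
        if steps > my_budget:
            raise RuntimeError("the simulator itself ran out of budget")
        if output is not None:
            return output  # the first player's move, now known exactly

print(simulate_until_output(first_player_program))  # -> D
```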
I don’t understand your reply: exact simulation is a brute-force approach that isn’t a good idea. You can prove general statements about the behavior of programs on runs of unlimited or infinite length in finite time. But anyway, why would the second player provoke mutual defection?
But anyway, why would the second player provoke mutual defection?
In my formulation, it doesn’t have a choice. Whether or not it does exact simulation of the first player is determined by its “mathematical intuition subroutine”, which I treated as a black box. If that module does an exact simulation, then mutual defection is the result. So this also ties in with my lack of understanding regarding logical uncertainty. If we don’t treat the thing that reasons about logical uncertainty as a black box, what should we do?
ETA: Sometimes exact simulation clearly is appropriate, for example in rock-paper-scissors.
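For instance, a trivial sketch of the rock-paper-scissors case, with a deterministic stand-in `opponent` function as an assumption for illustration:

```python
# Against a deterministic, exactly-simulable opponent, exact simulation is
# exactly the right thing to do: just play the counter to whatever it outputs.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def opponent():                      # stands in for the simulable opponent program
    return "rock"

def my_move(opponent_program):
    their_move = opponent_program()  # exact simulation
    return BEATS[their_move]         # play the winning response

print(my_move(opponent))  # -> paper
```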
Conceptually, I treat logical uncertainty as I do prior+utility, a representation of preference, in this more general case over mathematical structures. The problems of representing this preference compactly and extracting human preference don’t hinder these particular explorations.
I don’t understand this yet. Can you explain in more detail what a general (noncompact) way of representing logical uncertainty would be?