This is an interesting approach. The way I’m currently thinking of this is that you ask what agent a UDT would design, and then do what that agent does, and vary what type an agent is between the different designs. Is this correct?

> Consider the anti-Newcomb problem with Omega’s simulation involving equation (2)

So is this equation (2) with P replaced with something else?

> However, the computing power allocated for evaluating the logical expectation value in (2) might be sufficient to suspect that P’s output is an agent reasoning based on (2).

I don’t understand this sentence.
Hi jessicat, thx for commenting!

> The way I’m currently thinking of this is that you ask what agent a UDT would design, and then do what that agent does, and vary what type an agent is between the different designs. Is this correct?

Sounds about right.

> So is this equation (2) with P replaced with something else?

No, it’s the same P. When I say “Omega’s simulation doesn’t involve P” I mean Omega is not executing P and using the result. Omega is using equation (2) directly, but P still enters into equation (2).

> I don’t understand this sentence.

Logical uncertainty (the way I see it) is a way to use a certain amount of computing resources to assign probabilities to the outcomes of a computation requiring a larger amount of computing resources. These probabilities depend on the specific resource bound. However, this is not essential to the point I’m making. The point I’m making is that if the logical uncertainty ensemble assigns non-zero probability to P producing XDT, we end up with logical correlation that is better avoided.
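To make the resource-bounded picture concrete, here is a toy sketch of the idea (my own illustration, not anything from the post: the names `bounded_run` and `logical_probability` are hypothetical, and a Collatz step count stands in for the expensive computation). A bounded run either resolves the computation outright, or falls back to an empirical frequency over a cheap ensemble of similar computations; the resulting probability visibly depends on the budget.

```python
def bounded_run(f, x, budget):
    """Run f(x) as a step-limited generator; return its result if it
    finishes within `budget` steps, else None (resources exhausted)."""
    gen = f(x)
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration as stop:
            return stop.value  # generator returned: computation finished
    return None

def collatz_steps(n):
    """Toy 'expensive' computation: number of Collatz steps to reach 1,
    written as a generator so each loop iteration costs one budget unit."""
    count = 0
    while n != 1:
        yield
        n = 3 * n + 1 if n % 2 else n // 2
        count += 1
    return count

def logical_probability(f, x, budget, prior_inputs):
    """Resource-bounded 'probability' that f(x) is even.  If the bounded
    run finishes we are certain; otherwise fall back to the empirical
    frequency of evenness over a cheaper ensemble of similar inputs."""
    result = bounded_run(f, x, budget)
    if result is not None:
        return 1.0 if result % 2 == 0 else 0.0
    ensemble = [bounded_run(f, s, budget * 10) for s in prior_inputs]
    ensemble = [r for r in ensemble if r is not None]
    if not ensemble:
        return 0.5  # no information either way
    return sum(1 for r in ensemble if r % 2 == 0) / len(ensemble)
```

For instance, with a generous budget `logical_probability(collatz_steps, 6, 100, [])` resolves to certainty, while a starved budget on a slow input like 27 yields only an ensemble frequency strictly between 0 and 1 — the analogue of the ensemble assigning non-zero probability to outputs (such as P producing XDT) that exact computation would rule in or out.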