One way of looking at DDT is “keeping it dumb in various ways.” I think another way of thinking about it is as designing a different sort of agent, one that is “dumb” according to us but not really dumb in any intrinsic sense. You can imagine this DDT agent looking at agents that do engage in acausal trade and thinking they’re just sacrificing utility for no reason.
There is some slight awkwardness in that the decision problems agents in this universe actually encounter are such that UDT agents will get higher utility than DDT agents.
I agree that the maximum a posteriori world doesn’t help that much, but I think there is some sense in which “having uncertainty” might be undesirable.
Also: I think making sure our agents are DDT is probably going to be approximately as difficult as making them aligned. Related: Your handle for anthropic uncertainty is:
“never reason about anthropic uncertainty. DDT agents always think they know who they are.”
“Always think they know who they are” doesn’t cut it; you can think you know you’re in a simulation. I think a more accurate version would be something like “Always think that you are on an original planet, i.e. one in which life appeared ‘naturally,’ rather than a planet in the midst of some larger interstellar civilization, or a simulation of a planet, or whatever.” Basically, you need to believe that you were created by humans but that no intelligence played a role in the creation and/or arrangement of the humans who created you. Or… no role other than the “normal” one in which parents create offspring, governments create institutions, etc. I think this is a fairly specific belief, and I don’t think we have the ability to shape our AIs’ beliefs with that much precision, at least not yet.