Well, there are other problems besides Newcomb. Something like UDT can be motivated by simulations, or amnesia, or just multiple copies of the AI trying to cooperate with each other. All of these lead to pretty much the same theory, which is why it's worth thinking about.
Thanks for your comment. I’ll look into those other problems.