Anthropic Decision Theory (ADT): An agent should first find all the decisions linked with their own. Then they should maximise expected utility, acting as if they simultaneously controlled the outcomes of all linked decisions, and using the objective (non-anthropic) probabilities of the various worlds.
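To make the procedure concrete, here is a minimal Python sketch (my own worked example, not from the post) applying ADT to the Sleeping Beauty problem, assuming a total-utilitarian agent who, at each awakening, may buy a coupon that pays 1 if the coin landed tails:

```python
# Minimal sketch of ADT on Sleeping Beauty (an assumed example setup):
# heads -> woken once, tails -> woken twice.  At each awakening the agent
# may buy a coupon costing `cost` that pays 1 if the coin landed tails.
# Assume a total utilitarian who sums the winnings of all copies.

WORLDS = {
    # world: (objective probability, number of linked copies deciding)
    "heads": (0.5, 1),
    "tails": (0.5, 2),
}

def expected_utility(action: str, cost: float) -> float:
    """EU of every linked copy taking `action`, computed with the
    objective (non-anthropic) world probabilities."""
    eu = 0.0
    for world, (prob, copies) in WORLDS.items():
        if action == "buy":
            payout = copies if world == "tails" else 0.0
            eu += prob * (payout - copies * cost)
        # "pass": utility 0 in every world
    return eu

def adt_decide(cost: float) -> str:
    # Act as if simultaneously controlling all linked decisions.
    return max(("buy", "pass"), key=lambda a: expected_utility(a, cost))

if __name__ == "__main__":
    for cost in (0.5, 0.6, 0.7):
        print(cost, adt_decide(cost))
```

Under these assumptions the agent buys whenever the coupon costs less than 2/3 (and is indifferent at exactly 2/3), the betting behaviour usually associated with the "thirder" answer.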
...is there any difference from Yudkowsky (2010) (appended below)...?
The timeless decision procedure evaluates expected utility conditional upon the output of an abstract decision computation—the very same computation that is currently executing as a timeless decision procedure—and returns that output such that the universe will possess maximum expected utility, conditional upon the abstract computation returning that output.
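For comparison, here is a similarly minimal sketch of the quoted evaluation rule, assuming the standard Newcomb setup with a perfect predictor (my illustration, not Yudkowsky's code):

```python
# Sketch of the TDT evaluation rule on Newcomb's problem (assumed setup):
# a reliable predictor runs the same abstract decision computation and
# fills the opaque box with $1,000,000 iff that computation outputs
# "one-box".  The transparent box always holds $1,000.

def eu_given_output(output: str) -> int:
    """Expected utility conditional on the abstract computation
    returning `output` (here, with a perfect predictor)."""
    opaque = 1_000_000 if output == "one-box" else 0
    transparent = 1_000
    return opaque if output == "one-box" else opaque + transparent

def tdt_decide() -> str:
    # Return the output that maximises expected utility, conditional
    # on the abstract computation returning that output.
    return max(("one-box", "two-box"), key=eu_given_output)

print(tdt_decide())  # -> "one-box" (1,000,000 > 1,000)
```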
The other problem I see with this kind of material is that it seems kinda obvious. It basically says to maximise expected utility, with the reminder that identical deterministic calculations performed in different places should return the same outcome. However, most people already know that; it's just uniformitarianism, something often taken for granted. Reminders of what we already know are OK, but they don't always add very much.
The other problem I see with this kind of material is that it seems kinda obvious
Then I’ve succeeded in my presentation. Nobody was saying what I was saying about anthropic behaviour until I started talking about it; if now it’s kinda obvious, then that’s great.
Regarding:
...is there any difference from Yudkowsky (2010) (appended below)...?
It’s all related, but here it is deployed for the first time in anthropic reasoning.