Under normal decision theory, you can imagine that an agent is asking you, the reader, how they should decide, and that they will then do it. You can't consistently imagine Omega's coin-flip agents doing that, since Omega has preprogrammed them to ignore whatever you say.
This is a much stronger constraint than ordinary agent determinism, since a deterministic agent can still take different actions based on sensory input, such as its response to a question about why one action is better than another. With respect to this particular action, I would hesitate to call one of the Omega-created entities an agent at all.
They are certainly not rational agents, and not really appropriate test cases for examining whether a given decision theory is suitable for rational agents.
I think they can be agents, at least if Omega gave them a decision theory that produces the output determined by the coin flip. In that case it's no different than when you normally program an agent with a decision theory. Whether they are rational agents then depends on whether you call e.g. Causal Decision Theory agents rational—I'd probably say no, but I'm guessing many would disagree.
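To make the distinction concrete, here is a toy sketch (my own illustration, not from the original discussion) of the contrast being drawn above: a coin-flip agent whose output Omega has fixed in advance versus an ordinary deterministic agent whose output is still a function of its input.

```python
# Toy illustration (hypothetical, not from the original thread):
# the difference between an agent preprogrammed to ignore input
# and a deterministic agent whose action depends on input.

def coinflip_agent(observation):
    # Omega has hard-wired the action; the observation (e.g. your advice) is ignored.
    return "one-box"

def deterministic_agent(observation, decision_theory):
    # Deterministic, but the action still depends on sensory input:
    # the same agent acts differently given different observations.
    return decision_theory(observation)

# Example decision procedure: follow whatever advice is given, if any.
def follow_advice(observation):
    return observation.get("advice", "two-box")

print(coinflip_agent({"advice": "two-box"}))                      # one-box (advice ignored)
print(deterministic_agent({"advice": "two-box"}, follow_advice))  # two-box (advice used)
```

Both functions are deterministic, but only the second can be engaged with as an agent in the sense of the first comment: its behaviour varies with what you tell it.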