Well, you’re wrong. The problem statement is completely clear and can be implemented with something like cellular automata: two hungry agents in two separate rooms within a bigger world, each one choosing to generate a snack that it will then eat. Logical dependence between their decisions must be inferred in the usual way; it’s not part of the problem specification. If your formalism says such problems are not well-defined, whoops, too bad for the formalism! (Didn’t you also give up your ability to meaningfully talk about the PD, by the way?)
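Concretely, here’s a minimal sketch of the kind of setup I have in mind (everything here, names and update rule included, is my own illustration rather than part of the problem statement; a toy world-update loop stands in for a proper cellular automaton):

```python
# Toy stand-in for the cellular-automata world: two isolated rooms,
# each holding one hungry agent that may generate a snack and eat it.
# All names and rules here are illustrative assumptions.

def agent_policy(observation):
    # Both agents run this same policy; any logical dependence between
    # their choices comes from the shared source code, not from the
    # environment itself.
    return "generate_snack" if observation["hungry"] else "wait"

def step(room):
    action = agent_policy({"hungry": room["hungry"]})
    if action == "generate_snack":
        room["snack"] = True
    if room["snack"]:            # the agent then eats whatever is there
        room["snack"] = False
        room["hungry"] = False
    return room

world = {"room_a": {"hungry": True, "snack": False},
         "room_b": {"hungry": True, "snack": False}}

for name, room in world.items():
    print(name, step(room))     # both agents end up fed
```

Nothing in the world itself links the two rooms; the symmetry is entirely in the shared policy.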
> (Didn’t you also give up your ability to meaningfully talk about the PD, by the way?)
No, in the PD each agent knows where it is and what utility value it gets. The PD with identical players might pose a similar problem if it’s postulated that the two agents get different utilities.
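For reference, the standard PD payoff structure I mean (the numbers are the conventional illustrative ones, not anything specific to this discussion):

```python
# Standard Prisoner's Dilemma payoffs: each agent knows which position
# it occupies and which entry of the matrix is *its* utility.
PAYOFFS = {  # (my_move, their_move) -> (my_utility, their_utility)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

my_move, their_move = "C", "D"
my_utility = PAYOFFS[(my_move, their_move)][0]  # unambiguously mine
print(my_utility)  # 0
```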
Identical agents can’t get different utilities in the internal sense of what’s referred to by their decision problems (and any other sense is decision-theoretically irrelevant, since the agent can’t work with what it can’t work with), because the definition of utility is part of the decision problem, which in turn is part of the agent (or even the whole of the agent).
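A toy illustration of the point (my own sketch, nothing more): if the two agents are literally the same program, the utility function each one’s decision problem refers to is the same definition, so there is no internal sense in which they “get different utilities”.

```python
# Two byte-identical agents: the utility definition is part of the
# agent's own decision procedure, so identical agents share it by
# construction. Names here are illustrative.

def make_agent():
    def utility(outcome):
        return {"fed": 1.0, "hungry": 0.0}[outcome]

    def decide(options):
        # The decision problem is stated in terms of *this* utility
        # function; there is no second, agent-specific one to diverge.
        return max(options, key=utility)

    return decide

agent_a = make_agent()
agent_b = make_agent()  # identical construction, identical utilities
assert agent_a(["fed", "hungry"]) == agent_b(["fed", "hungry"])
print(agent_a(["fed", "hungry"]))  # "fed" for both
```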
When you’re playing a lottery, you’re deciding based on the utility of the lottery, not on the utility of the inaccessible (and in this sense, meaningless to the agent) “actual outcome”. The utility of the unknown outcome is not what plays the role of utility in the agent’s decision problem, hence we have a case of equivocation.
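As a sketch (the numbers are made up for illustration): the only utility available at decision time is the expected utility of the lottery itself.

```python
# The agent decides on the utility of the lottery, not of the
# (inaccessible) realized outcome. Probabilities and utilities below
# are invented for the example.
lottery = [(0.01, 100.0),  # (probability, utility of that outcome)
           (0.99, 0.0)]

def lottery_utility(lottery):
    # This expectation is what plays the role of "utility" in the
    # decision problem; the actual outcome's utility is unknown here.
    return sum(p * u for p, u in lottery)

certain_option = 0.9  # utility of a sure alternative
choice = "lottery" if lottery_utility(lottery) > certain_option else "sure thing"
print(lottery_utility(lottery), choice)  # 1.0 lottery
```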
> Well, you’re wrong. The problem statement is completely clear and can be implemented with something like cellular automata
I understand the environment specified in the problem statement, but not the decision problem.
> If your formalism says such problems are not well-defined, whoops, too bad for the formalism!
Well, maybe, but the statement that everything is totally clear doesn’t help me understand better. I can intuitively guess what is intended, but that’s different from actually seeing all the pieces of the puzzle.
Edit: Well, I guess I should indicate situations where I “don’t understand” in the sense of not understanding to my satisfaction, as opposed to pretending not to understand what doesn’t fit my models, or because the question is expected to be confusing to most readers. Sometimes I’m confused not because of an apparent property of a question, but because I’m trying to solve some obscure aspect of it that isn’t on everyone’s mind.