...how is what you wrote different from standard expected utility maximization?
Also, probability might be controlled by the agent as well, so use probability(W,D), and selecting individual worlds might not be a good idea (instead, sum over a partition into sufficiently uniform events).
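(A rough sketch of the shape this suggestion points at, in illustrative notation rather than the exact symbols from the post: both factors are allowed to depend on the decision D, and the sum runs over a partition of the worlds into sufficiently uniform events.)

```latex
% Illustrative only: \mathcal{E} is a partition of the possible worlds into
% sufficiently uniform events; both probability and utility may depend on D.
D^{*} \;=\; \arg\max_{D} \sum_{E \in \mathcal{E}}
  \mathrm{probability}(E, D)\cdot \mathrm{utility}(E, D)
```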
...how is what you wrote different from standard expected utility maximization?
My formula deals with multiple agents having different utility functions.
Also, probability might be controlled by the agent as well, so use probability(W,D)
That simple? It’s not immediately clear to me. Could you give an example? For some reason I thought that the formula should become even more complicated in such cases.
My formula deals with multiple agents having different utility functions.
As I discussed on the decision theory list, this problem is not well-defined. All decisions must be performed in the service of a particular decision problem. You can’t be uncertain about which decision problem you’re solving, but the decision problem that you’re solving can be logically opaque, so that you have logical uncertainty about its elements. In particular, you can have a utility symbol that’s defined as [U = if Q then U1 else U2], where Q is a complicated statement. This doesn’t extend to multiple agents, where you have to analyze the problem from one particular agent’s standpoint, although analysis of a game-theoretic situation might yield similar steps.
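(To make the bracketed definition concrete, here is an illustrative rendering; Q is just any statement the agent can’t immediately evaluate, so U is fully specified yet logically opaque.)

```latex
% Illustrative rendering of [U = if Q then U1 else U2]: the definition is
% complete, but the agent can be logically uncertain about which branch holds.
U \;=\;
\begin{cases}
  U_1 & \text{if } Q\\
  U_2 & \text{if } \lnot Q
\end{cases}
```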
You can control the probability of the current observational situation, e.g. in transparent Newcomb’s problem. (You can also easily define events whose probability you control, if there’s no requirement that such events are of any relevance to the problem, by including possible worlds in the events conditionally on a logical statement that you control.)
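(An illustrative construction for that parenthetical: take disjoint sets of worlds A and B, and a statement Q whose truth value the agent controls; the probability of the resulting event then depends on the agent’s choice.)

```latex
% Include the worlds of B in the event only when the agent-controlled
% statement Q holds; then P(E) = P(A) + P(B) if Q, and P(E) = P(A) otherwise.
E \;=\; A \,\cup\, \{\, w \in B : Q \,\}
```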
As I discussed on the decision theory list, this problem is not well-defined.
Does this mean you disagree with Stuart’s solution to the fourth model (which involves multiple agents with different utility functions)? Can you point out the mistake? My formula was just a formalization of Stuart’s idea, I may be missing something obvious here...
I don’t understand what “itself” means in “Here the agent only derives utility from the box or cross it generated itself”, given that we have a world with two identical agents, which is better described as a world with one agent controlling it through two control sites (through a dependence that acts on both sites). I think it’s a bad idea to discuss the virtues of methods of solving a problem whose statement isn’t clear.
Well, you’re wrong. The problem statement is completely clear and can be implemented with something like cellular automata: two hungry agents in two separate rooms within a bigger world, each one choosing to generate a snack that it will then eat. Logical dependence between their decisions must be inferred in the usual way; it’s not part of the problem specification. If your formalism says such problems are not well-defined, whoops, too bad for the formalism! (Didn’t you also give up your ability to meaningfully talk about the PD, by the way?)
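(A minimal toy rendering of that environment, with made-up names and no actual cellular automaton, just to show that the world description is directly implementable; note that the environment itself imposes no link between the two choices.)

```python
# Toy sketch of the two-room world described above (hypothetical names, not the
# original cellular-automata formulation): two identical hungry agents, each
# deciding whether to generate a snack in its own room, then eating whatever
# its room contains. The fact that identical code yields identical decisions
# must be inferred by the decision theory; it is not a rule of the environment.

def agent_policy() -> bool:
    """The shared decision procedure: return True to generate a snack."""
    return True  # placeholder choice; how to derive it is the open question

def run_world():
    rooms = {"room_1": None, "room_2": None}   # contents of each room
    eaten = {}                                 # what each agent ends up eating

    # Each agent acts only on its own room.
    for room in rooms:
        if agent_policy():
            rooms[room] = "snack"

    # Each agent then eats whatever is in its own room.
    for room in rooms:
        eaten[room] = rooms[room]

    return eaten

if __name__ == "__main__":
    print(run_world())   # e.g. {'room_1': 'snack', 'room_2': 'snack'}
```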
(Didn’t you also give up your ability to meaningfully talk about the PD, by the way?)
No, in the PD each agent knows where it is and what utility value it gets. In a PD with identical players the problem might be similar if it’s postulated that the two agents get different utilities.
Identical agents can’t get different utilities in the internal sense of what’s referred to by their decision problems (and any other sense is decision-theoretically irrelevant, since the agent can’t work with what it can’t work with), because the definition of utility is part of the decision problem, which in turn is part of the agent (or even the whole of the agent).
When you’re playing a lottery, you’re deciding based on the utility of the lottery, not on the utility of the inaccessible (and in this sense, meaningless to the agent) “actual outcome”. The utility of the unknown outcome is not what plays the role of utility in the agent’s decision problem, hence we have a case of equivocation.
Well, you’re wrong. The problem statement is completely clear and can be implemented with something like cellular automata
I understand the environment specified in the problem statement, but not the decision problem.
If your formalism says such problems are not well-defined, whoops, too bad for the formalism!
Well, maybe, but the statement that everything is totally clear doesn’t help me understand better. I can intuitively guess what is intended, but that’s different from actually seeing all the pieces of the puzzle.
Edit: Well, I guess I should indicate situations where I “don’t understand” in the sense of not understanding to my satisfaction, as opposed to pretending not to understand what doesn’t fit my models or because the question is expected to be confusing to most readers. Sometimes I’m confused not because of an apparent property of a question, but because I’m trying to solve some obscure aspect of it that isn’t on everyone’s mind.