It still seems to me that you can’t have a BestDecisionAgent. Suppose agents are black boxes—Omegas can simulate agents at will, but not view their source code. An Omega goes around offering agents a choice between:
$1, or
$100 if the Omega thinks the agent acts differently than BestDecisionAgent in a simulated rationality test, otherwise $2 if the agent acts like BestDecisionAgent in the rationality test.
Does this test meet your criteria for a fair test? If not, why not?
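For concreteness, here is a minimal sketch of the payoff rule as I intend it; the function name, the choice labels, and the idea that Omega's verdict arrives as a single boolean are just my framing, not part of the setup:

```python
def omega_payoff(choice, acts_like_bda):
    """Payoff of the offer above (a sketch; the rationality test itself
    is left abstract).

    choice:        "one_dollar" to take the $1, or "other" to take the
                   $2-or-$100 option.
    acts_like_bda: Omega's verdict from its simulated rationality test --
                   True if it thinks the agent acts like BestDecisionAgent.
    """
    if choice == "one_dollar":
        return 1
    # The second option pays $100 only if Omega thinks the agent acts
    # differently from BestDecisionAgent in the simulated test.
    return 2 if acts_like_bda else 100
```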
I think I have left a loophole. In your example, Omega analyses the agent by analysing its outputs on unrelated and, above all, unspecified problems. I think the end result should depend only on the agent's output on the problem at hand.
Here’s a possible real-life variation. Instead of simulating the agent, you throw a number of problems at it beforehand, without telling it they will be related to a future problem. For example, give an exam to a human student (with a real stake at the end, such as grades). Then, later, you present the student with the following problem:
Welcome to my dungeon. Sorry for the headache, but I figured you wouldn’t have followed someone like me in a place like this. Anyway. I was studying Decision Theory, and wanted to perform an experiment. So, I will give you a choice:
Option 1: you die a most painful death. See those sharp, shimmering tools? Lots of fun.
Option 2: if I think you’re not the kind of person who makes good life decisions, I’ll let you go unharmed. Hopefully you will harm yourself later. On the other hand, if I think you are the kind of person who makes good life decisions, well, too bad for you: I’ll let most of you go, but you’ll have to give me your hand.
Option 2? Well, that doesn’t surprise me, though it does disappoint me a little. I would have hoped, after 17 times already… well, no matter. So, do you make good decisions? Sorry, I’m afraid “no” isn’t enough. Let’s see… oh, you’re applying for college, if I recall correctly. Yes, I did my homework. I’m studying, remember? So, let’s see your SAT scores. Oh, impressive. That should explain why you never left home these past three weeks. Looks like you know how to trade off short-term well-being for long-term projects. Looks like a good life decision.
So. I’m not exactly omniscient, but this should be enough. I’ll let you go. But first, I believe you’ll have to put up with a little surgery job.
Sounds like something like that could “reasonably” happen in real life. But I don’t think it’s “fair” either, if only because being discriminated against for being capable of making good decisions is so unexpected.
Omega gives you a choice of either $1 or $X, where X is either 2 or 100?
It seems like you must have meant something else, but I can’t figure it out.
Yes, that’s what I mean. I’d like to know what, if anything, is wrong with this argument that no decision theory can be optimal.
Suppose that there were a computable decision theory T that was at least as good as all other theories. In any fair problem, no other decision theory could recommend actions with better expected outcomes than the expected outcomes of T’s recommended actions. Then:

1. We can construct a computable agent, BestDecisionAgent, using theory T.
2. For any fair problem, no computable agent can perform better (on average) than BestDecisionAgent.
3. Call the problem presented in the grandfather post the Prejudiced Omega Problem. In the Prejudiced Omega Problem, BestDecisionAgent will almost assuredly collect $2.
4. In the Prejudiced Omega Problem, another agent can almost assuredly collect $100.
5. The Prejudiced Omega Problem does not involve an Omega inspecting the source code of the agent.
6. The Prejudiced Omega Problem, like Newcomb’s problem, is fair.
7. Contradiction.
I’m not asserting this argument is correct—I just want to know where people disagree with it. Qiaochu_Yuan’s post is related.
Let BestDecisionAgent choose the $1 with probability p. Then the various outcomes are:
Simulation's choice | Our choice | Payoff
$1 | $1 | $1
$1 | $2 or $100 | $100
$2 or $100 | $1 | $1
$2 or $100 | $2 or $100 | $2
And so p should be chosen to maximise p^2 + 100p(1-p) + p(1-p) + 2(1-p)^2. This is equal to the quadratic -98p^2 + 97p + 2, which Wolfram Alpha says is maximised by p = 97/196, for an expected payoff of ~$26.
If we are not BestDecisionAgent, and so are allowed to choose separately, we aim to maximise pq + 100p(1-q) + q(1-p) + 2(1-p)(1-q), which simplifies to -98pq + 98p - q + 2, which is maximised by q = 0, for a payoff of ~$50.5. This surprises me; I was expecting to get p = q.
So (3) and (4) are not quite right, but the result is similar. I suspect BestDecisionAgent should be able to pick p such that p = q is the best option for any agent, at the cost of reducing the value it gets.
ETA: Of course you can do this just by setting p = 0, which is what you assume. Which, actually, means that (3) and (4) contradict each other: if BestDecisionAgent always picks the $2 over the $1, then the best any agent can do is $2.
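For what it’s worth, here is a small script checking the numbers above, including the ETA; I’m using sympy purely for convenience, and any CAS or a bit of calculus gives the same results:

```python
import sympy as sp

p, q = sp.symbols("p q")

# Case 1: we *are* BestDecisionAgent, so the simulation also takes the $1
# with probability p. Expected payoff, from the table above:
f = sp.expand(p*p*1 + p*(1 - p)*100 + (1 - p)*p*1 + (1 - p)*(1 - p)*2)
print(f)                                   # -98*p**2 + 97*p + 2
p_star = sp.solve(sp.diff(f, p), p)[0]
print(p_star, float(f.subs(p, p_star)))    # 97/196, ~26.0

# Case 2: we are some other agent taking the $1 with probability q, while
# the simulated BestDecisionAgent takes it with probability p:
g = sp.expand(p*q*1 + p*(1 - q)*100 + (1 - p)*q*1 + (1 - p)*(1 - q)*2)
print(g)                                   # -98*p*q + 98*p - q + 2
print(g.subs(p, p_star))                   # 101/2 - 99*q/2, so take q = 0
print(float(g.subs({p: p_star, q: 0})))    # 50.5
# ETA check: if BestDecisionAgent plays p = 0 instead, the other agent's
# payoff is 2 - q, so the best it can do is $2.
print(g.subs(p, 0))                        # 2 - q
```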
(Incidentally, how do you format tables properly in comments?)
$100 if the Omega thinks the agent acts differently than BestDecisionAgent in a simulated rationality test, otherwise $2 if the agent acts like BestDecisionAgent in the rationality test.
The Omega chooses a payoff of $2 vs. $100 based on a separate test that can differentiate between BestDecisionAgent and some other agent. If we are BestDecisionAgent, the Omega will know this and we will be offered at most a $2 payoff. But some other agent will differ from BestDecisionAgent in a way that the Omega detects and cares about. That agent can decide between $1 and $100. Since another agent can perform better than BestDecisionAgent, BestDecisionAgent cannot be optimal.
Ah, ok. In that case, though, the other agent wins at this game at the expense of failing at some other game. Depending on what types of games the agent is likely to encounter, this agent's effectiveness may or may not actually be better than BestDecisionAgent's. So we could have an optimal decision agent in the sense that no change to its algorithm could increase its expected lifetime utility, but not in the sense of never failing in any game.