I’m not sure how you can implement an admonition to Win and not just to (truly, sincerely) try. What is the empirical difference?
I suppose you could use an expected regret measure (that is, the difference between the ideal result and the result of the decision summed across the distribution of probable futures) instead of an expected utility measure.
Expected regret tends to produce more robust strategies than expected utility. For instance, in Newcomb’s problem, we could say that two-boxing comes from expected utility but one-boxing comes from regret-minimizing (since a “failed” two-box gives $1,000,000 − $1,000 = $999,000 of regret, if you believe Omega would have acted differently had you been the type of person to one-box, whereas a “failed” one-box gives $1,000 − $0 = $1,000 of regret).
Using more robust strategies may be a way to more consistently Win, though perhaps the true goal should be to know when to use expected utility and when to use expected regret (and therefore to take advantage both of potential bonanzas and of risk-limiting mechanisms).
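To make those regret numbers concrete, here is a minimal sketch in Python. The payoff table and the counterfactual assumptions are my reading of the comment above (standard Newcomb payoffs: $1,000 in Box A, $1,000,000 or nothing in Box B, with Omega’s prediction tracking your disposition); they are not part of the original thread.

```python
# Sketch of the regret arithmetic above, under the comment's assumptions.
payoff = {
    # (your choice, Omega's prediction) -> dollars you walk away with
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

def regret_failed_two_box():
    # You two-boxed, Omega foresaw it, Box B was empty. Had you been a
    # one-boxer, Omega would (by assumption) have filled Box B instead.
    return payoff[("one-box", "one-box")] - payoff[("two-box", "two-box")]

def regret_failed_one_box():
    # You one-boxed but Omega mispredicted, so Box B was empty anyway;
    # two-boxing in that same situation would have salvaged Box A's $1,000.
    return payoff[("two-box", "two-box")] - payoff[("one-box", "two-box")]

print(regret_failed_two_box())  # 999000
print(regret_failed_one_box())  # 1000
```

An expected-regret rule would then weight these regrets by how probable each “failure” is, just as expected utility weights the payoffs.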
I’m quite confident there is only a language difference between Eliezer’s description and the point a number of you have just made. Winning versus trying to win are clearly two different things, and it’s also clear that “genuinely trying to win” is the best one can do, based on the definition those in this thread are using. But Eli’s point on OB was that telling oneself “I’m genuinely trying to win” often results in less than genuinely trying. It results in “trying to try”...which means being satisfied by a display of effort rather than by utility maximizing. So instead, he argues, why not say to oneself the imperative “Win!”, where he bakes the “try” part into the implicit imperative. I agree Eli’s language usage here may be slightly non-standard for most of us (me included), and therefore perhaps misleading to the uninitiated, but I’m doubtful we need to stress about it too much if the facts are as I’ve stated. Does anyone disagree? Perhaps one could argue Eli should have to say, “Rational agents should win_eli” and link to an explanation like this thread, if we are genuinely concerned about people getting confused.
Eliezer seems to be talking about actually winning—e.g.: “Achieving a win is much harder than achieving an expectation of winning”.
He’s been doing this pretty consistently for a while now—including on his administrator’s page on the topic:
“Instrumental rationality: achieving your values.”
http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/
That is why this discussion is still happening.
Here’s a functional difference: Omega says that Box B is empty if you try to win what’s inside it.
Yes! This functional difference is very important!
In logic, you begin with a set of non-contradictory assumptions and then build a consistent theory based on those assumptions. The deductions you make are analogous to being rational. If the assumptions are non-contradictory, then it is impossible to deduce something false within the system. (Analogously, it is impossible for rationality not to win.) However, you can get a paradox by having a self-referential statement. You can prove that every sufficiently complex theory is incomplete: there are statements that are true but that you can’t prove from within the system. Along the same lines, you can build a paradox by forcing the system to talk about itself.
What Grobstein has presented is a classic paradox and is the closest you can come to rationality not winning.
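For readers who want the formal version of the self-reference trick gestured at above, here is the standard diagonal-lemma sketch behind Gödel’s first incompleteness theorem (a textbook statement, not something specific to this thread):

```latex
% For a consistent, recursively axiomatized theory T strong enough to encode
% arithmetic, the diagonal lemma yields a sentence G_T that talks about itself:
\[
  T \vdash G_T \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner)
\]
% If T is consistent, then T does not prove G_T, so G_T is true (it correctly
% asserts its own unprovability) yet unprovable within T.
```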
I understand all that, but I still think it’s impossible to operationalize an admonition to Win. If
Omega says that Box B is empty if you try to win what’s inside it.
then you simply cannot implement a strategy that will give you the proceeds of Box B (unless you’re using some definition of “try” that is inconsistent with “choose a strategy that has a particular expected result”).
I think that falls under the “ritual of cognition” exception that Eliezer discussed for a while: when Winning depends directly on the ritual of cognition, then of course we can define a situation in which rationality doesn’t Win. But that is perfectly meaningless in every other situation (which is to say, in the world), where the result of the ritual is what matters.
Agents do try to win. They don’t necessarily actually win; they may, for example, face a superior opponent. Kasparov was behaving in a highly rational manner in his battle with Deep Blue. He didn’t win. He did try to, though. Thus the distinction between trying to win and actually winning.
see http://www.overcomingbias.com/2008/10/trying-to-try.html
It’s really easy to convince yourself that you’ve truly, sincerely tried—trying to try is not nearly as effective as trying to win.
The intended distinction was originally between trying to win and actually winning. You are comparing two kinds of trying.
Based on the above, I believe the distinction was between two different kinds of admonitions. I was pointing out that an admonition to win will cause someone to try to win, and an admonition to try will cause someone to try to try.
Thomblake’s interpretation of my post matches my own.
Right, but again, the topic is the definition of instrumental rationality, and whether it refers to “trying to win” or “actually winning”.
What do “admonitions” have to do with things? Are you arguing that because telling someone to “win” may have some positive effect that telling someone to “try to win” lacks, we should define “instrumental rationality” to mean “winning” and not “trying to win”?
Isn’t that an idiosyncrasy of human psychology, which surely ought to have nothing to do with the definition of “instrumental rationality”?
Consider the example of handicap chess. You start with no knight. You try to win. Actually you lose. Were you behaving rationally? I say: you may well have been. Rationality is more about the trying than it is about the winning.
The question was about admonitions. I commented based on that. I didn’t mean anything further about instrumental rationality.
OK. I don’t think we have a disagreement, then.
I consider it to be a probably-true fact about human psychology that if you tell someone to “try” rather than telling them to “win”, then that introduces failure possibilities into their mind. That may have a positive effect, if they are naturally over-confident, or a negative one, if they are naturally wracked with self-doubt.
It’s the latter group who buy self-help books: the former group doesn’t think it needs them. So the self-help books tell you to “win”—and not to “try” ;-)
Right, but again, the topic is the definition of instrumental rationality, and whether it refers to “trying to win” or “actually winning”.
What do “admonitions” have to do with things? Are you arguing that because telling someone to “win” may have some positive effect that telling someone to “try to win” lacks, we should define “instrumental rationality” to mean “winning” and not “trying to win”?
Isn’t that an idiosyncrasy of human psychology, which surely ought to have nothing to do with the definition of “instrumental rationality”?
Consider the example of handicap chess. You start with no knight. You try to win. Actually you lose. Were you behaving rationally? I say: you may well have been. Rationality is more about the trying than it is about the winning.
I agree. I’m just noting that an admonition to Win is strictly an admonition to try, phrased more strongly. Winning is not an action—it is a result. All I can suggest are actions that get you to that result.
I can tell you “don’t be satisfied with trying and failing,” but that’s not quite the same.
As for the “Trying-to-try” page—an argument from Yoda and the Force? It reads like something out of a self-help manual!
Sure: if you are trying to inspire confidence in yourself in order to improve your performance, then you might under some circumstances want to think only of winning—and ignore the possibility of trying and failing. But let’s not get our subjects in a muddle, here—the topic is the definition of instrumental rationality, not how some new-age self-help manual might be written.