Dammit, no. I’ve wasted lots of time arguing against this on OB. You can’t define “rational” as “winning”. “Rational” is an adjective applied to a manner of thinking. Otherwise, you would just use the word “winning”. If you say that it’s a definition, what you’re really doing is saying that we can’t criticize people who say “rationalists always win”. But when someone says that rationalists always win, they are making claims about the world. From that statement you can derive expectations about their beliefs regarding the Prisoner’s Dilemma and Newcomb’s Paradox. If it were definitional, you couldn’t make any predictions about their beliefs from their statement.
Based on the original Newcomb’s Problem post, I would say this statement has a definitional, an empirical, and a normative component, which is what makes it so difficult to unpack. The normative component is simple enough: the tools of rationality should be used to steer the future toward regions of higher preference, rather than for their own sake. The definitional component widens the definition of rationality from specific modes of thinking to something more general, like holding true beliefs and updating them in the face of evidence. The empirical claim is that true beliefs and updating, properly applied, will always yield equal or better results (except when faced with a rationality-punishing deity).
(...Except when faced with a rationality-punishing deity)
And even there, arguably, the true beliefs “this deity punishes rationality” and “this deity uses this algorithm to do so” could lead to applying the right kind of behaviour to avoid said punishment.
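To make the empirical claim above concrete, here is a minimal sketch (mine, not from the thread) of the expected-value arithmetic behind Newcomb’s Problem. The payoffs are the standard ones; the 99% predictor accuracy is an assumption for illustration.

```python
# A minimal sketch of the expected-value arithmetic behind Newcomb's
# Problem. Payoffs are the standard ones; the 99% predictor accuracy
# is an assumed figure for illustration.

def expected_payoff(one_box: bool, accuracy: float = 0.99) -> float:
    """Expected dollars for a choice, given the predictor's accuracy."""
    small, big = 1_000, 1_000_000
    if one_box:
        # Predictor correct -> box B is full; predictor wrong -> B is empty.
        return accuracy * big
    # Two-boxing: the $1,000 is guaranteed; box B is full only if the
    # predictor guessed wrong.
    return small + (1 - accuracy) * big

for choice, label in [(True, "one-box"), (False, "two-box")]:
    print(f"{label}: ${expected_payoff(choice):,.0f}")
# one-box: $990,000
# two-box: $11,000
```

Under these assumptions the one-boxer comes out ahead by nearly two orders of magnitude, which is the sense in which “rationalists always win” reads as an empirical claim rather than a definition: it predicts what a self-described rationalist will say about this payoff table.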
Does “axiological” = “axiomatic”?