What a … convenient coincidence … that we live just past the transition where perfect rationality becomes the optimal strategy. Doesn’t it seem a little too convenient?
It does make some sense—there is common cause.
Why are we here on this website? Because science got good.
Why is rationality a personal win? Because science got good.
snap
Yes, actually, I had the same thought myself last night and was about to edit.
Now, as a matter of fact it does seem likely to me that the turnaround was between 1950 and today. This does seem like a “too good to be true” coincidence… suggesting the alternative hypothesis that I am busy rationalizing.
But I’ve reviewed the object-level arguments, and they seem solid.
I think the explanation for half of the coincidence is that science is a common cause of both the rationality arts and the improved standard of living that makes them worth applying. JulianMorrison simultaneously made the same suggestion.
The other half of the coincidence (why aren’t we alive waaaaay after it becomes rational to be rational) is actually not such a coincidence: there is nothing interesting to be learned from observing that we’re alive just after technology becomes an important factor (other than the usual doomsday argument/negative singularity stuff, which we won’t go into).
EDIT: the possibility that we are rationalizing should cause us to decrease our confidence somewhat in the position stated above, though the fact that JulianMorrison thought of the same explanation independently of me should be noted. We should now take more seriously the alternative hypothesis that “perfect rationality” is still not optimal, and that deliberately deluding ourselves in some significant part of our lives is a good idea. [I stand by my position that there were definitely some periods when rationality was a real downer.] Thanks to gjm and johnicholas for pointing this out.
I voted it down, and this is my reasoning:
The post starts out by saying “me too”. This is not helpful.
The post admits that there’s evidence of rationalization, and rather than reducing confidence in the conclusion, it merely reaffirms the original claim.
The post throws very strong anthropic and singularity talk around incautiously and (in my opinion) inexpertly. These are controversial and nearly off-topic ideas, and discussion should admit the controversy and treat them cautiously.
This point does not seem to degrade the comment.
Rationalization is the standard method of generating explanations: first one acts, then one comes up with a set of reasons for the action, which are not necessarily causally related to it. I wish I had a link to the relevant studies handy.
It’s ridiculous to expect a comment to even mention the controversy around the Singularity and the ‘anthropic argument’. If you don’t know what they are, you can look them up in all their controversial glory; if you do, then you already know they’re controversial. And if something is ‘nearly off-topic’, then by the definition of ‘nearly’ it’s not off-topic, so I’m not sure what your point is there. And finding a Singularity fan in this crowd should not surprise you; it’s another perspective from which to approach the question, and it’s clearly the perspective of the poster.
I’d be interested to hear a more detailed critique of how I used [EDIT] anthropic reasoning.
From About Less Wrong:
To prevent topic drift while this community blog is being established, please avoid mention of the following topics on Less Wrong until the end of April 2009:
The Singularity
Artificial General Intelligence
Nick Bostrom’s introduction to the Doomsday Argument is an example of smart, cautious discussion of anthropic reasoning.
You should take the fact that the best argument you can find for the proposition “rationality is optimal now, but it wasn’t in 1950” is an appeal to the Doomsday Argument as evidence that your brain is in rationalization mode.
But … (and now I’m genuinely curious) why aren’t we living in a period way after rationality became the optimal choice? JulianMorrison’s and my suggestion provides a lower bound, but what is the upper bound?
To falsify the conjunction of “rationality is optimal now” and “rationality was not optimal previously”, you need only falsify one of the conjuncts: for example, “rationality is not optimal now” or “rationality was optimal previously”.
EDIT: I said that awkwardly. To change your mind regarding “Rationality is optimal now and rationality was not optimal previously”, you would have to change your mind regarding one of the conjuncts. For example, you could accept the statement “Rationality is not optimal now.”
Robin Hanson has posted on the costs of rationality.
So, anthropic reasoning uses facts about how the observer came into being to “explain” certain supposed coincidences, and thereby avoids giving too much weight to alternative hypotheses that might otherwise be invoked to explain the coincidence.
In this case, the coincidence is between our asserting that rationality is good for us and our being the first generation, out of a long line of humans, for whom this is the case. (And indeed the argument applies spatially as well as temporally: rationality is probably a bad move for many very disadvantaged people in the world today.)
The alternative hypothesis under consideration is “rationality is not good for you, you are just rationalizing”.
So, I assume that I am sampled from the set of people who ask the question “is it optimal to be rational, or to delude myself?” What is the probability of my answering “yes”? Well, JulianMorrison argues (correctly, in my opinion) that there is a systematic correlation between being able to ask the question and answering “yes”, so the probability is not worryingly small. Nothing unusual has happened here.
So we should not be suspicious that we are rationalizing just because we answered “yes”.
Secondly, what is the probability of me finding myself to be the first (or second) generation of humans for which the answer to this question is “yes”? In the case where there are zillions of similar humans in the future, this probability could be very small. But… there’s no interesting alternative hypothesis to explain this coincidence, so we can’t conclude anything particularly interesting.
Yeah, you’re basically making the doomsday argument. Note that you could use the same reasoning about any question that you expect to come up from time to time, for instance “do I like cheese?”
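The Bayesian shift the Doomsday Argument relies on can be sketched in a few lines of Python. The population figures and the two-hypothesis prior below are purely illustrative assumptions (not numbers from this thread): under the self-sampling assumption, observing a birth rank given a total of N humans has likelihood 1/N, so an "early" rank pushes posterior mass toward hypotheses with fewer total humans.

```python
# A minimal Bayesian sketch of the Doomsday Argument mentioned above.
# All numbers are illustrative assumptions, not claims from the thread.

def doomsday_posterior(rank, hypotheses):
    """Posterior over total-number-of-humans hypotheses, given our birth rank.

    Self-sampling assumption: given a total of n_total humans ever, the
    likelihood of finding yourself at any particular rank r <= n_total
    is 1 / n_total (and 0 if r > n_total).
    """
    weights = {}
    for name, (n_total, prior) in hypotheses.items():
        likelihood = 1.0 / n_total if rank <= n_total else 0.0
        weights[name] = prior * likelihood
    z = sum(weights.values())  # normalizing constant
    return {name: w / z for name, w in weights.items()}

# Two toy hypotheses with equal priors: "doom soon" (200 billion humans
# ever) vs "doom late" (200 trillion humans ever), with our birth rank
# taken as roughly 100 billion.
posterior = doomsday_posterior(
    rank=100e9,
    hypotheses={"doom soon": (200e9, 0.5), "doom late": (200e12, 0.5)},
)
# The early birth rank shifts almost all posterior mass toward
# "doom soon": P(doom soon) = 1000/1001, about 0.999.
```

This is the same mechanism the comment above points at: the update is driven entirely by where you find yourself in the sequence, which is why the reply notes you could run it on any recurring question, such as “do I like cheese?”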
Correct. I’ve edited my comment since you commented. Read the corrected version and critique…
Please reread my post. I think I was editing while you were reading my post.
Are you asking an explanation for why anthropic reasoning is bunk?
I’d love to know why someone downvoted this...
(this comment also downvoted to 0)
ROFL someone is out to get me, I can see ;-0
Aaah! negative karma! Everybody hates me! I’m considering killing myself if this comment gets downvoted any more…