There is no guarantee of a benevolent world, Eliezer. There is no guarantee that what is true is also beneficial. There is no guarantee that what is beneficial for an individual is also beneficial for a group.
You conflate many things here. You conflate what is true with what is right and what is beneficial. You assume that these sets are identical, or at least largely overlapping. However, unless a galactic overlord designed the universe to please Homo sapiens rationalists, I don’t see any compelling rational reason to believe this to be the case.
Irrational belief systems often thrive because they overcome the prisoner’s dilemmas that individual rational action creates on a group level. Rational people cannot mimic this. The prisoner’s dilemma and the tragedy of the commons are not new ideas. Telling people to act in the group interest because God said so is effective. It is easy to see how informing people of the costs of action, because truth is noble and people ought not be lied to, can be counterproductive.
Perhaps we should stop striving for the maximally rational society and start pursuing the most rational society that is stable in the long term. That is, maybe we ought to make minimizing irrationality our goal, recognizing that we will never eliminate it.
If we cannot purposely introduce a small bit of beneficial irrationality into our group, then fine: memetic evolution will weed us out and there is nothing we can do about it. People will march by the millions to the will of saints and emperors while rational causes wither on the vine. Not much will change.
Robin made an excellent post along similar lines, which captures half of what I want to say:
http://lesswrong.com/lw/j/the_costs_of_rationality/
I’ll be writing up the rest of my thoughts soon.
Sorry, I can’t find the motivation to jump on the non-critical bandwagon today. I had the idea about a week ago that there is no guarantee that truth = justice = prudence, and that is going to be the hobby-horse I ride until I get a good statement of my position out, or read one by someone else.
I one-box on Newcomb’s Problem, cooperate in the Prisoner’s Dilemma against a similar decision system, and even if neither of these were the case: life is iterated and it is not hard to think of enforcement mechanisms, and human utility functions have terms in them for other humans. You conflate rationality with selfishness, assume rationalists cannot build group coordination mechanisms, and toss in a bit of group selection to boot. These and the referenced links complete my disagreement.
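To make the iteration point concrete, here is a minimal sketch (the payoff numbers and the two strategies are illustrative assumptions, nothing from the thread itself): in a repeated Prisoner’s Dilemma a simple reciprocating strategy sustains cooperation and ends up far ahead of mutual defection, with no irrational beliefs required.

```python
# Minimal iterated Prisoner's Dilemma sketch; payoffs and strategies are illustrative.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated Prisoner's Dilemma; return (total_a, total_b)."""
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees only the opponent's past moves
        move_b = strategy_b(history_a)
        total_a += PAYOFF[(move_a, move_b)]
        total_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation is sustained
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```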
Thanks for the links; your corpus of writing can be hard to keep up with. I don’t mean this as a criticism, just that you are prolific, which makes it hard on a reader, because you must strike a balance between reiterating old points and exploring new ideas. I appreciate the attention.
Also, did you ever reply to the Robin post I linked to above? Robin is a more capable defender of an idea than I am, so I would be intrigued to follow the dialog.
If you are rational enough and perceptive enough, and EY’s writing is consistent enough, at some point you will not have to read everything EY writes to have a pretty good idea of what his views on a matter will be. I would bet a good sum of money that EY would prefer that his readers gain this ability rather than read all of his writings.
“However, unless a galactic overlord designed the universe to please Homo sapiens rationalists, I don’t see any compelling rational reason to believe this to be the case.”
Except that we are free to adopt any version of rationality that wins. Rationality should be responsive to a given universe design, not the other way around.
“Irrational belief systems often thrive because they overcome the prisoner’s dilemmas that individual rational action creates on a group level. Rational people cannot mimic this.”
Really? Most of the “individual rationality → suboptimal outcomes” results assume that actors have no influence over the structure of the games they are playing. This doesn’t reflect reality particularly well. We may not have infinite flexibility here, but changing the structure of the game is often quite feasible, and quite effective.
For example, we could establish a social norm that compulsive public disagreement is a shameful personal habit, and that you can’t be even remotely considered “formidable” if you haven’t gotten rid of the urge to seek status by pulling down others.
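To make the game-structure point concrete, here is a minimal sketch (the payoffs and the size of the fine are illustrative assumptions): in a one-shot Prisoner’s Dilemma defection dominates, but once the players can commit to an enforced fine for defecting, cooperation dominates instead. Nothing irrational is required, only a change to the game being played.

```python
# Illustrative payoffs for a one-shot Prisoner's Dilemma: (my move, their move) -> my payoff.
base = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Hypothetical enforcement mechanism: an agreed-upon fine charged to any defector.
fine = 3
with_enforcement = {
    (me, them): payoff - (fine if me == "D" else 0)
    for (me, them), payoff in base.items()
}

def best_response(game, their_move):
    """My payoff-maximizing move, given the other player's move."""
    return max(("C", "D"), key=lambda my_move: game[(my_move, their_move)])

for name, game in [("no enforcement", base), ("with enforcement", with_enforcement)]:
    print(name, {their_move: best_response(game, their_move) for their_move in ("C", "D")})
# no enforcement:   defecting is the best response whatever the other player does
# with enforcement: cooperating is the best response whatever the other player does
```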
I disagree.
“However, unless a galactic overlord designed the universe to please Homo sapiens rationalists, I don’t see any compelling rational reason to believe this to be the case.”
“Except that we are free to adopt any version of rationality that wins. Rationality should be responsive to a given universe design, not the other way around.”
I don’t think your argument applies to jacoblytes’ argument. Jacoblytes claims that there is no reason for “rational” to equal “(morally/ethically) right”, unless an intelligent designer designed the universe in line with our values.
So it’s not about winning versus losing. It’s that unless the rules of the game are set up in just the right way, winning may entail causing suffering to others (e.g. to our rivals).
My writing in these comments has not been perfectly clear, but Nebu, you have nailed one point that I was trying to make: “there is no guarantee that morally good actions are beneficial”.
Christian morality is interesting here. Christians admit up front that following their religion may lead to persecution and suffering. Their God was tortured and killed, after all. They don’t claim that what is good will be pleasant, as the rationalists do. To that degree, the Christians seem more honest and open-minded. Perhaps this is just a function of Christianity being an old religion that has had time to work out the philosophical kinks.
Of course, they make up for it by offering infinite bliss in the next life, which is cheating. But Christians do have a more honest view of this world in some ways.
Maybe we conflate true, good, and prudent because our “religion” is a hard sell otherwise. If we admitted that true and morally right things may be harmful, our pitch would become “Believe the truth, do what is good, and you may become miserable. There is no guarantee that our philosophy will help you in this life, and there is no next life”. That’s a hard sell. So we rationalists cheat by not examining this possibility.
There is some truth to the Christian criticism that atheists are closed-minded and biased, too.
“Except that we are free to adopt any version of rationality that wins. ”
In that case, believing in truth is often non-rational.
Many people on this site have bemoaned the confusing dual meanings of “rational” (the economic, utility-maximizing definition and the epistemological, believing-in-truth definition). Allow me to add my name to that list.
I believe I consistently used the “believing in truth” definition of rational in the parent post.
I agree that the multiple definitions are confusing, but I’m not sure that you consistently employ the “believing in truth” version in your post above.* It’s not “believing in truth” that gets people into prisoners’ dilemmas; it’s trying to win.
*And if you did, I suspect you’d be responding to a point that Eliezer wasn’t making, given that he’s been pretty clear on his favored definition being the “winning” one. But I could easily be the one confused on that. ;)
“In that case, believing in truth is often non-rational.”
Fair enough. Though I wonder whether, in most of the instances where that seems to be true, it’s true for second-best reasons. (That is, if we were “better” in other (potentially modifiable) ways, the truth wouldn’t be so harmful.)
“Except that we are free to adopt any version of rationality that wins.”
There’s only one kind of rationality.
I agree, but that one kind is able to determine an optimal response in any universe, except one where no observable event can ever be reliably statistically linked to any other, which seems like it could be a small subset, and not one we’re likely to encounter except...
Certainly, there are any number of world-states or day-to-day situations where a full rigorous/sceptical/rational and therefore lengthy investigation would be a sub-optimal response. Instinct works quickly, and if it works well enough, then it’s the best response. But obviously, instinct cannot self-analyze and determine whether and in what cases it works “well enough,” and therefore what factors contribute to it so working, etc. etc.
Passing the problem of a gun jamming to the Rationality-Function might return the response, “If the gun doesn’t fire, 90% of the time, pulling the lever action will solve the problem. The other 10% of the time, the gun will blow up in your hand, leading to death. However, determining to reasonable certainty which type of problem you’re experiencing, in the middle of a firefight, will lead to death 90% of the time. Therefore, train your Instinct-Function to pull the lever action 100% of the time, and rely on it rather than me when seconds count.”
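Working the stated numbers through (with the simplifying assumption that a correctly handled jam means survival and a mishandled one means death), the deliberate calculation itself endorses pre-training the reflex:

```python
# Probabilities as stated in the example above; the survival bookkeeping is a
# simplifying assumption.
p_blowup_if_lever = 0.10          # racking the lever on the wrong kind of jam is fatal
p_death_while_diagnosing = 0.90   # stopping to diagnose mid-firefight is fatal

p_survive_instinct = 1 - p_blowup_if_lever            # always rack the lever
p_survive_deliberate = 1 - p_death_while_diagnosing   # diagnose first, then act correctly

print(round(p_survive_instinct, 2), round(p_survive_deliberate, 2))  # 0.9 vs 0.1
```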
Does this sound like what you mean by a “beneficial irrationality”?
Also: I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right. To me, these assertions appear uncontroversial, but you seem to disagree. What about them bothers you, and when will we get to see your article?
“Does this sound like what you mean by a “beneficial irrationality”?”
No. That’s not really what I meant at all. Take nationalism or religion, for example. I think both are based on some false beliefs. However, a belief in one or the other may make a person more willing to sacrifice his well-being for the good of his tribe. This may improve the average chances of survival and reproduction of an individual in the tribe. So members of irrational groups out-compete the rational ones.
In the post above Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it’s hurting the rational tribe. That’s informative, and sort of my point.
There is some evidence that we have brain structures specialized for religious experience. One would think that these structures could only have evolved if they offered some reproductive benefit to animals becoming self-aware in the land of tooth and claw.
In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn’t recommend it to a 13th century peasant.
“I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right.”
This is not true a priori. That is my point. My challenge to you, Eliezer, and the other denizens of this site is simply: “prove it”.
And I offer this challenge especially to Eliezer. Eliezer, I am calling you out. Justify your optimism in the prudence of truth.
Disprove the parable of Eve and the fruit of the tree of knowledge.
“Disprove the parable of Eve and the fruit of the tree of knowledge.”
I don’t know ’bout no Eve and fruits, but I do know something about the “god-shaped hole”. It doesn’t actually require religion to fill, although it is commonly associated with religion and religious irrationalities. Essentially, religion is just one way to activate something known as a “core state” in NLP.
Core states are emotional states of peace, oneness, love (in the universal-compassion sense), “being”, or just the sense that “everything is okay”. You could think of them as pure “reward” or “satisfaction” states.
The absence of these states is a compulsive motivator. If someone displays a compulsive social behavior (like needing to correct others’ mistakes, always blurting out unpleasant truths, being a compulsive nonconformist, etc.) it is (in my experience) almost always a direct result of being deprived of one of the core states as a child, and forming a coping response that seems to get them more of the core state, or something related to it.
Showing them how to access the core state directly, however, removes the compulsion altogether. Effectively, wireheading directly to the core state internally drops the reward/compulsion link to the specific behavior, restoring choice in that area.
Most likely, this is because it’s the unconditional presence of core states that’s the evolutionary advantage you refer to. My guess would be that non-human animals experience these core states as a natural way of being, and that both our increased ability to anticipate negative futures, and our more-complex social requirements and conditions for interpersonal acceptance actually reduce the natural incidence of reaching core states.
Or, to put it more briefly: core states are supposed to be wireheaded, but in humans, a variety of mechanisms conspire to break the wireheading… and religion is a crutch that reinstates it externally, by exploiting the compulsion mechanism.
Appropriately trained rationalists, on the other hand, can simply reinstate the wireheading internally, and get the benefits without “believing in” anything. (In fact, application of the process tends to surface and extinguish left-over religious ideas from childhood!)
Explaining the actual technique would require considerably more space than I have here, however; the briefest training I’ve done on the subject was over an hour in length, although the technique itself is simple enough to be done in a few minutes. A little googling will find you plenty on the subject, although it’s extremely difficult to learn from the short checklist versions of the technique you’re likely to find on the ’net.
The original book on the subject, Core Transformation, is somewhat better, but it also mixes in a lot of irrelevant stuff based on the outdated “parts” metaphor in NLP. “Parts” are just a way of keeping people detached from their responses, and that’s really orthogonal to the primary purpose of the technique, which is essentially a “stack trace” of active unconscious/emotional goals to uncover the system’s root goal (and thereby access the core state of “pure utility” underneath).
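As a loose sketch of that “stack trace” structure (the prompt wording, the example chain, and the stopping rule here are illustrative assumptions, not the actual Core Transformation protocol), the procedure amounts to repeatedly asking what a goal is in service of until the chain bottoms out in an answer wanted purely for its own sake:

```python
def trace_goal_stack(initial_goal, ask):
    """Follow the chain of 'what would having that do for you?' answers.
    ask(goal) returns the deeper goal it serves, or None once the goal is
    wanted purely for its own sake (the 'core state' at the root)."""
    stack = [initial_goal]
    while True:
        deeper = ask(stack[-1])
        if deeper is None or deeper in stack:  # reached the root goal (or a loop)
            return stack
        stack.append(deeper)

# Hypothetical example chain for the compulsive-corrector case mentioned above.
example_answers = {
    "correct other people's mistakes": "feel competent",
    "feel competent": "feel accepted",
    "feel accepted": "feel that everything is okay",
    "feel that everything is okay": None,
}
print(trace_goal_stack("correct other people's mistakes", example_answers.get))
```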
“In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn’t recommend it to a 13th century peasant.”
Anyone who knows how to access their core states has the ability to call up mystical states of peace, bliss, and what-not, at any moment they actually need or want them. An external idea isn’t necessary to provide comfort—the necessary state already exists inside of you, or religion couldn’t possibly activate it.
“Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it’s hurting the rational tribe. That’s informative, and sort of my point.”
So if that’s Eliezer’s point, and it’s also your point, what is it that you actually disagree about?
I take Eliezer to be saying that sometimes rational individuals fail to co-operate, but that things needn’t be so. In response, you seem to be asking him to prove that rational individuals must co-operate—when he already appears to have accepted that this isn’t true.
Isn’t the relevant issue whether it is possible for rational individuals to co-operate? Provided we don’t make silly mistakes like equating rationality with self-interest, I don’t see why not—but maybe this whole thread is evidence to the contrary. ;)
My point isn’t exactly clear for a few reasons. First, I was using this post opportunistically to explore a topic that has been on my mind for a while. Second, Eliezer makes statements that sometimes seem to support the “truth = moral good = prudent” assumption, and sometimes not.
He’s provided me with links to some of his past writing. I’ve talked enough; it is time to read and reflect (after I finish a paper for finals).
True, but that “one kind of rationality” might not be what you think it is. Conchis’s point holds if you use “rationality” to mean “everything should always be taken into account, if possible” or something like that.
A “rational” solution to a problem should always take into account those “but in the real world it doesn’t work like that...” objections. Those are part of the problem, too.
For example, a political leader acting “rationally” will take into account the opinion of the population (even if they are “wrong” and/or give too much importance to X) if it can affect his results in the next election. The importance of this depends on his “goal” (a position of power? the well-being of the population?) and on the alternative if not elected (will my opponent’s decisions do more harm?).
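As a minimal sketch of that trade-off (every number here is an illustrative assumption): even a leader who cares only about policy outcomes may rationally defer to a mistaken public opinion, because losing the election hands future decisions to a worse alternative. With different numbers the conclusion flips, which is the point: the answer depends on the goal and on the alternative.

```python
def expected_value(policy_value, p_reelected, opponent_value):
    """Value of the policy this term, plus whoever gets to decide next term."""
    return policy_value + p_reelected * policy_value + (1 - p_reelected) * opponent_value

# Illustrative numbers only: the unpopular policy is better on the merits, but
# announcing it lowers the chance of reelection against a much worse opponent.
unpopular_but_right = expected_value(policy_value=10, p_reelected=0.2, opponent_value=-20)
popular_but_wrong = expected_value(policy_value=4, p_reelected=0.8, opponent_value=-20)

print(round(unpopular_but_right, 2), round(popular_but_wrong, 2))  # -4.0 vs 3.2: deferring wins here
```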