“Except that we are free to adopt any version of rationality that wins.”
There’s only one kind of rationality.
I agree, but that one kind is able to determine an optimal response in any universe except one where no observable event can ever be reliably statistically linked to any other. That seems like it could be a small subset of possible universes, and not one we’re likely to encounter.
Certainly, there are any number of world-states or day-to-day situations where a full rigorous/sceptical/rational (and therefore lengthy) investigation would be a sub-optimal response. Instinct works quickly, and if it works well enough, then it’s the best response. But obviously, instinct cannot self-analyze and determine whether and in what cases it works “well enough,” and therefore what factors contribute to its working well enough, etc.
Passing the problem of a jammed gun to the Rationality-Function might return the response: “If the gun doesn’t fire, then 90% of the time pulling the lever action will solve the problem. The other 10% of the time, the gun will blow up in your hand, leading to death. However, determining to reasonable certainty which type of problem you’re experiencing, in the middle of a firefight, will lead to death 90% of the time. Therefore, train your Instinct-Function to pull the lever action 100% of the time, and rely on it rather than me when seconds count.”
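For concreteness, here is a minimal sketch of the expected-survival arithmetic implicit in that response. The policy names are mine, and the probabilities are the hypothetical 90/10 figures from the example above, not real data:

```python
# Toy expected-survival comparison for the jammed-gun example above.
# All probabilities are the hypothetical figures from the comment, not real data.

P_EXPLOSIVE_JAM = 0.10         # jam is the kind where pulling the lever is fatal
P_DIE_WHILE_DIAGNOSING = 0.90  # pausing mid-firefight to diagnose is fatal

# Policy 1 (trained instinct): always work the lever immediately.
# You survive unless the jam happened to be the explosive kind.
p_survive_instinct = 1.0 - P_EXPLOSIVE_JAM

# Policy 2 (deliberate): diagnose the jam first, then act on the answer.
# Even if the diagnosis were perfect, you must first survive the pause.
p_survive_deliberate_at_best = 1.0 - P_DIE_WHILE_DIAGNOSING

print(f"always work the lever: {p_survive_instinct:.0%} survival")
print(f"diagnose first:        {p_survive_deliberate_at_best:.0%} survival, at best")
# 90% vs. at most 10%: the rational move is to delegate this decision
# to the pre-trained Instinct-Function, exactly as the response says.
```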
Does this sound like what you mean by a “beneficial irrationality”?
Also: I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right. To me, these assertions appear uncontroversial, but you seem to disagree. What about them bothers you, and when will we get to see your article?
“Does this sound like what you mean by a “beneficial irrationality”?”

No. That’s not really what I meant at all. Take nationalism or religion, for example. I think both are based on some false beliefs. However, a belief in one or the other may make a person more willing to sacrifice his well-being for the good of his tribe. This may improve the average chances of survival and reproduction of an individual in the tribe. So members of irrational groups out-compete the rational ones.
In the post above Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it’s hurting the rational tribe. That’s informative, and sort of my point.
There is some evidence that we have brain structures specialized for religious experience. One would think that these structures could only have evolved if they offered some reproductive benefit to animals becoming self-aware in the land of tooth and claw.
In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn’t recommend it to a 13th century peasant.
“I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right.”

This is not true a priori. That is my point. My challenge to you, Eliezer, and the other denizens of this site is simply: “prove it”.
And I offer this challenge especially to Eliezer. Eliezer, I am calling you out. Justify your optimism in the prudence of truth.
Disprove the parable of Eve and the fruit of the tree of knowledge.
I don’t know ’bout no Eve and fruits, but I do know something about the “god-shaped hole”. It doesn’t actually require religion to fill, although it is commonly associated with religion and religious irrationalities. Essentially, religion is just one way to activate something known as a “core state” in NLP.
Core states are emotional states of peace, oneness, love (in the universal-compassion sense), “being”, or just the sense that “everything is okay”. You could think of them as pure “reward” or “satisfaction” states.
The absence of these states is a compulsive motivator. If someone displays a compulsive social behavior (like needing to correct others’ mistakes, always blurting out unpleasant truths, being a compulsive nonconformist, etc.), it is (in my experience) almost always a direct result of being deprived of one of the core states as a child, and forming a coping response that seems to get them more of the core state, or something related to it.
Showing them how to access the core state directly, however, removes the compulsion altogether. Effectively, it’s as if wireheading directly to the core state internally drops the reward/compulsion link to the specific behavior, restoring choice in that area.
Most likely, this is because it’s the unconditional presence of core states that’s the evolutionary advantage you refer to. My guess would be that non-human animals experience these core states as a natural way of being, and that both our increased ability to anticipate negative futures, and our more-complex social requirements and conditions for interpersonal acceptance actually reduce the natural incidence of reaching core states.
Or, to put it more briefly: core states are supposed to be wireheaded, but in humans, a variety of mechanisms conspire to break the wireheading… and religion is a crutch that reinstates it externally, by exploiting the compulsion mechanism.
Appropriately trained rationalists, on the other hand, can simply reinstate the wireheading internally, and get the benefits without “believing in” anything. (In fact, application of the process tends to surface and extinguish left-over religious ideas from childhood!)
Explaining the actual technique would require considerably more space than I have here, however; the briefest training I’ve done on the subject was over an hour in length, although the technique itself is simple enough to be done in a few minutes. A little googling will find you plenty on the subject, although it’s extremely difficult to learn from the short checklist versions of the technique you’re likely to find on the ’net.
The original book on the subject, Core Transformation, is somewhat better, but it also mixes in a lot of irrelevant stuff based on the outdated “parts” metaphor in NLP. “Parts” are just a way of keeping people detached from their responses, and that’s really orthogonal to the primary purpose of the technique, which is essentially a “stack trace” of active unconscious/emotional goals: it uncovers the system’s root goal, and thereby accesses the core state of “pure utility” underneath.
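To make the “stack trace” metaphor concrete, here is a toy sketch only: the goal chain and the trace_root_goal helper are hypothetical illustrations of following each goal to the goal behind it until a root state is reached, not the actual Core Transformation procedure:

```python
# Toy illustration of the "stack trace of goals" metaphor only; this is
# NOT the Core Transformation procedure, and the goal chain below is
# entirely made up for the example.

# Each surface goal maps to the deeper goal it is (hypothetically) serving.
goal_behind = {
    "correct other people's mistakes": "be seen as competent",
    "be seen as competent": "be accepted by the group",
    "be accepted by the group": "feel that everything is okay",  # a "core state"
}

def trace_root_goal(surface_behavior: str) -> str:
    """Follow each goal to the goal behind it until nothing deeper remains."""
    goal = surface_behavior
    while goal in goal_behind:
        goal = goal_behind[goal]
    return goal

print(trace_root_goal("correct other people's mistakes"))
# -> feel that everything is okay
```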
“In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn’t recommend it to a 13th century peasant.”

Anyone who knows how to access their core states has the ability to call up mystical states of peace, bliss, and what-not, at any moment they actually need or want them. An external idea isn’t necessary to provide comfort—the necessary state already exists inside of you, or religion couldn’t possibly activate it.
“Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it’s hurting the rational tribe. That’s informative, and sort of my point.”
So if that’s Eliezer’s point, and it’s also your point, what is it that you actually disagree about?
I take Eliezer to be saying that sometimes rational individuals fail to co-operate, but that things needn’t be so. In response, you seem to be asking him to prove that rational individuals must co-operate—when he already appears to have accepted that this isn’t true.
Isn’t the relevant issue whether it is possible for rational individuals to co-operate? Provided we don’t make silly mistakes like equating rationality with self-interest, I don’t see why not—but maybe this whole thread is evidence to the contrary. ;)
My point isn’t exactly clear for a few reasons. First, I was using this post opportunistically to explore a topic that has been on my mind for a while. Second, Eliezer makes statements that sometimes seem to support the “truth = moral good = prudent” assumption, and sometimes not.
He’s provided me with links to some of his past writing. I’ve talked enough; it is time to read and reflect (after I finish a paper for finals).
True, but that “one kind of rationality” might not be what you think it is. Conchis’s point holds if you take “rationality” to mean “everything should always be taken into account, if possible,” or something like that.
A “rational” solution to a problem should always take into account those “but in the real world it doesn’t work like that...” objections. Those are part of the problem, too.
For example, a political leader acting “rationally” will take into account the opinion of the population (even if they are “wrong” and/or give too much importance to X) if it can affect his results in the next election. The importance of this depends on his “goal” (a position of power? the well-being of the population?) and on the alternative if he is not elected (will my opponent’s decisions do more harm?).