To clarify the difference between crazy and stupid:
What is stupidity other than a relative inability to solve problems? How is crazy not just a particular kind of stupid?
Stupidity is the lack of mental horsepower. A stupid person has a weak or inefficient “cognitive CPU”.
Craziness is when the output of the “program” doesn’t correlate reliably with reality due to bugs in the “source code”. A crazy person has a flawed “cognitive algorithm”.
It seems that in humans, source code can be revised to a certain degree, but processing power is difficult (though not impossible) to upgrade.
So calling someone crazy (for the time being) is certainly different from calling someone stupid.
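To make the contrast concrete, here is a toy sketch of my own (illustrative Python, not anything anyone in this thread wrote): the first function uses a sound algorithm but would run out of "horsepower" long before very large inputs finish, while the second has speed to spare but is wired with a flawed rule.

```python
# Hypothetical illustration: one way to map "stupid" vs. "crazy" onto code.

def correct_but_slow(n):
    """'Stupid' failure mode: the algorithm is sound, but naive trial
    division runs out of horsepower for very large n."""
    return n > 1 and all(n % d for d in range(2, n))

def fast_but_buggy(n):
    """'Crazy' failure mode: plenty of speed, flawed source code.
    It treats every odd number as prime, so 9, 15, 21, ... slip through."""
    return n == 2 or n % 2 == 1

print(correct_but_slow(9), fast_but_buggy(9))  # False True: same input, different maps of reality
```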
Excellent distinction, Yasser.
I would add one more case:
Wrongness is when the output of the “program” doesn’t correlate reliably with reality. But this can happen not only because the algorithm is flawed (wrong because crazy), but also because of insufficient or incorrect input. I think this is an important distinction, because a person can be smart (non-stupid) and rational (non-irrational = non-crazy) and yet still be wrong, and those around him will call him “crazy” or “stupid” undeservedly.
Example: a CEO takes a calculated risk but is fired because the company, under his guidance, flipped the coin and got heads instead of the desired tails. Stakeholders expected him to be omniscient.
Those CEOs who get it right will be perceived as omniscient gurus. Hindsight bias will make them write books on how to be successful; survivorship bias will lure people into buying them.
Not being crazy makes your output less wrong, but it doesn’t guarantee that it’s right, either.
If I didn’t get it wrong in my analysis above (puns intended), would it be fair to say that this community, whose mission is to fix the biases in our algorithms, would even more appropriately be called Less Crazy?
Also yup.
I am not sure this type of “craziness” itself is always a bug.
Irrational beliefs and behaviors often have perfectly rational explanations that make sense from a mental health point of view: humans are more emotional than logical creatures. Internally coping with (often unconscious) emotional problems can be a higher-priority personal task than correlating with reality in every possible respect.
An emotion that doesn’t correlate with reality is itself a bug. Sure, it may not be easy to fix (or even possible without brain-hacking), but it’s a bug in the human source code nonetheless.
To extend the analogy, it’s like a bug in the operating system. If that low-level bug causes a higher-level program to malfunction, you can still blame “buggy code” even if the higher-level program itself is bug-free.
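A minimal sketch of that “bug in the operating system” point, with made-up names and numbers of my own: the top-level decision routine below is reasonable given what it is told, but a low-level routine hands it a value that doesn’t track reality, so the overall output is still irrational and the blame sits with the low-level code.

```python
# Hypothetical illustration: a buggy low-level "emotional" routine,
# called by a high-level decision routine that is itself bug-free.

def felt_threat(actual_threat):
    # the low-level bug: systematically inflates whatever it is given
    return actual_threat * 10

def decide(actual_threat):
    # a perfectly sensible policy, given the value it receives
    return "flee" if felt_threat(actual_threat) > 5 else "stay calm"

print(decide(actual_threat=1))  # "flee", even though the real threat is well below the threshold
```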
If you design a system for optimal resource usage under certain operating conditions, then you do not consider a failure outside those conditions a bug. You can always make the system more and more reliable at the expense of higher resource usage, but even in human-engineered systems, over-design is considered to be a mistake.
I don’t want to argue that the brain is an optimal trade-off in this sense, only that it is extremely hard to tell genuine bugs from fixes that happen to have strange side effects. Maybe the question itself is meaningless.
I am rather surprised that although the human brain did not evolve to be an abstract theorem prover but rather the controller of a procreation machine, it still performs remarkably well in quite a few logical and rational domains.
I suppose you’re saying that when a useful heuristic (allowing real-time approximate solutions to computationally hard problems) leads to biases in edge cases, it shouldn’t be considered a bug because the trade-off is necessary for survival in a fast-paced world.
I might disagree, but then we’d just be bickering about which labels to use within the analogy, which hardly seems useful. I suppose that instead of using the word “bug” for such situations, we could say that an imprecise algorithm is necessary because of a “hardware limitation” of the brain.
However, so long as there are more precise algorithms that can run on the same hardware (debiasing techniques), I would still consider the inferior algorithm to be “craziness”.
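For what it’s worth, here is a small, self-contained illustration of the trade-off being argued about, using a toy problem of my own choosing rather than anything from the thread: a greedy heuristic that is fast but biased on certain inputs, next to an exact algorithm that runs on the same “hardware” but spends more resources.

```python
# Hypothetical illustration: fast-but-biased heuristic vs. slower exact search.

def greedy_change(coins, amount):
    """Heuristic: always grab the largest coin that still fits."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used

def exact_change(coins, amount):
    """Exact search (dynamic programming): fewest coins, guaranteed."""
    best = {0: []}
    for a in range(1, amount + 1):
        options = [best[a - c] + [c] for c in coins if a - c in best]
        if options:
            best[a] = min(options, key=len)
    return best.get(amount)

coins = [25, 10, 1]
print(greedy_change(coins, 30))  # [25, 1, 1, 1, 1, 1] -- fast, but 6 coins on this edge case
print(exact_change(coins, 30))   # [10, 10, 10]        -- optimal, at the cost of more work
```

On most amounts the greedy answer matches the exact one, which is the sense in which the heuristic is “good enough”; the edge cases are where the bias shows.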
An emotion that doesn’t correlate with reality is itself a bug.
Even if it’s advantageous to the agent’s goals (not evolutionary fitness)? Emotions don’t have XML tags that say “this should map to reality in the following way”.
My response was to Christian’s implication that a rationality program isn’t necessarily buggy for outputting irrational behaviors because it must account for human emotions. My point was that human emotions are part of the human rationality program (whether we can edit our source code or not) and that if they cause an otherwise bug-free rationality program to output irrational behaviors, then the emotions themselves are the bugs.
In your response, you asked about emotions that produce behaviors advantageous to the agent’s goals, which is rational behavior, not irrational behavior as was stipulated in Christian’s post.
If those emotions are part of an otherwise bug-free rationality program that outputs rational beliefs and behaviors, then there is no bug. And that’s what it means for an emotion to be correlated with reality, precisely because there are no XML tags mapping certain neural spike patterns (i.e. emotions) to the state of reality.
Emotions aren’t beliefs about the world that can be verified by looking at the territory. Emotions are threads within the running program that maps and traverses the territory, so the only thing it can mean for them to correlate with reality is that they don’t cause the program to malfunction.
What I was trying to point out to Christian is that emotions are part of the system, not outside of it. So if the system produces irrational behavior, then the system as a whole is irrational, even if some of the subroutines are rational in isolation.
The irrationality of the emotions doesn’t somehow cancel out with the irrationality of the outputs to make the whole system rational.
A male having a higher opinion of himself (pride) than he realistically deserves may prove evolutionarily advantageous. If this disconnect from reality improves reproductive fitness then it can’t be considered a bug.
Of course it can be considered a bug, if I do the considering and I don’t give two cupcakes for reproductive fitness.
and if what was advantageous at one time is now a liability
If anything, your lack of concern for your prime directive, i.e. reproduction, is a sure sign of defective programming.
Now adjust your thinking and attitude (bug fix) and get out there and score some babes.
To use your analogy: any person who doesn’t provide the expected output is often deemed crazy… It doesn’t mean that there is a bug in the person; perhaps sometimes it’s a bug in reality.
I’ve talked to a number of people whom most would call crazy (none of them went to the madhouse, at least as far as I know). When you begin to look at things from their perspective, you sometimes find that they see patterns others are missing; but they lack the social graces or the ability to relate those patterns to others, and so what they see is lost.
On the other hand, I think that we are all “crazy” and “stupid” in our own ways. I think there are really extreme cases of both.
In the context of my analogy, it’s nonsense to say that reality can have bugs.
I suppose you meant that sometimes the majority of people can share the same bug, which causes them to “deem” that someone who lacks the bug (and outputs accordingly) is crazy.
But there’s still an actual territory that each program either does or does not map properly, regardless of society’s current most popular map. So it’s meaningful to define “craziness” in terms of the actual territory, even if it’s occasionally difficult to determine whether one person is crazy or “everyone else” is.
I suppose what I was referring to is a spec bug; the bug is in expecting the wrong (socially accepted) output, not an actual “the universe hiccuped and needs to be rebooted.” The reason for the spec bug might not be a shared bug, but programs operating on different inputs. For instance, Tesla: anyone who knew Tesla described him as an odd man, and a little crazy. At the same time, he purposefully filled his input buffer with the latest research on electricity and purposefully processed that data differently than his peers in the field. He didn’t spend much time accumulating input on proper social behavior, or on how others would judge him on the streets. It was seen as a crazy thing to do, to pick up wounded pigeons on the street, take them home, and nurse them back to health, because the spec of the time (the norms of society) said it was odd.
An old friend of mine whom I haven’t seen in years is an artist. He’s a creative-minded person who thinks that rationality would tie his hands too much. That said, when I was younger it surprised me what kinds of puzzles he was able to solve, because he’d try the thing that seemed irrational.
Stupidity means having less knowledge and/or fewer reasoning processes to work with, and so reaching fewer and/or less complicated conclusions. Craziness means reaching incorrect conclusions, due to incorrect knowledge or processes.
Thanks. That does shed light on the distinction for me.
Crazy people refuse to follow some relatively straightforward procedure that would allow them to achieve their goals or prevent terrible disutility. Lazy people want to follow the overall procedure, but can’t manage to perform particular steps. Stupid people may be incapable of following the procedure or even of learning about its existence.
This probably means that they failed to recognize that the procedure is straightforward and would allow them to achieve their goals.
This, in turn, probably means that they failed to apply whichever 2nd-order procedure would demonstrate that the first procedure was so straightforward and surefire.
Why aren’t they stupid for their inability to apply this 2nd-order procedure (or 3rd-order or however many it takes to bottom out)? Is the claim that, at some point, the nth-order procedure for establishing the straightforwardness and surefire-ness of the lower-order procedures is not itself straightforward and surefire?
Crazy people will refuse to search for a sufficiently-meta debiasing procedure that would otherwise allow them to see that they should do that.
The fact that you have to follow an n-th-order procedure of debiasing yourself is nontrivial, so not knowing that it’s important doesn’t identify people as stupid, but it still leads to them remaining crazy.
We want to at least distinguish between
success at practical projects that involve planning over several tasks related by a dependence graph, possibly branching or looping
talent in some particular domain (e.g. writing fiction, sleuthing, managing people, math, word puzzles)
skill at memorization, accumulation of theoretical knowledge
adopting behaviors which enhance or preserve your long-term prospects
Someone lacking the first would be called stupid, someone lacking the last would be called crazy (though we also use the term for mental illnesses, which are a different thing again).
Intelligence seems to be at least somewhat modular (idiot savants being the canonical demonstration). I wonder if there is some classification of biases by which parts of intelligence are affected.
How are the examples in the post not counterexamples to what you’re saying? These are separate concepts.
Stupid is when you are unable to solve a problem. Lazy is when you are able to solve a problem but don’t care to. Crazy is when you are able to solve a problem but don’t want to.
That is not the sense of “crazy” that Eliezer is using. Maybe you could say that crazy is when you think that you have a solution but you don’t (ETA: and you ought to be able to see that). But that seems like a special case of stupid.
Are you sure? It seems to me that having an intellectual problem that you are capable of solving but are unwilling to update on due to ideological reasons or otherwise (eg Aumann) is the sense in which Eliezer is using the word “crazy”. Of course, I could just be stupid.
What does it mean to say that you are “capable of solving [a problem] but are unwilling to update on [it] due to ideological reasons”? You obviously don’t mean something like the sense in which I’m capable of opening the window but I’m unwilling to because I don’t want the cold air to get in. Aumann isn’t thinking to himself, “Yeah, I could update, but doing so would conflict with my ideology.” So, tabooing “capable” and “unwilling”, can you explain what it means to be “capable but unwilling”?
What leads you to suggest Aumann isn’t thinking that? Are you saying he is unaware that his ideological beliefs conflict with evidence to the contrary? Of course he is aware he could update on rational evidence and chooses not to; that’s what smart religious people do. That’s what faith is. The meaning of “capable but unwilling” should be clear: it is the ability to maintain belief in something in the face of convincing evidence opposing it. The ability to say, “Your evidence supporting y is compelling, but it doesn’t matter, because I have faith in x.” And that’s what I think crazy is.
What leads you to suggest Aumann isn’t thinking that?
That I’ve met smart religious people who don’t think that way, and I expect that Aumann is at least as smart as they are.
There are intellectual religious people who believe that they’ve updated on all the evidence, taken it all into account, ignored none of it, and concluded that, say, Young Earth Creationism is the best account of the evidence.
You and I can see that they are ignoring evidence, or failing to weigh it properly, and that their ideology is blinding them. But that is not their own account of what’s going on in their heads. They are not aware of any conscious decision on their part to ignore evidence. So it’s subtle and tricky to unpack what it means for them to be “capable but unwilling” to update.
ETA: Your unpacking of “capable but unwilling” uses the word “ability”, which does not illuminate the meaning of “capable”. And you’ve used the phrase “convincing evidence” in a sense that clearly does not mean that the evidence did in fact convince them. So, additionally tabooing “ability” and “convincing”, what does “capable but unwilling” mean?
This crazy?
Not the same thing.
The behavior eirenicon complained of amounts to denying modus ponens. “I accept X, and I accept X->Y, but I deny Y.”
Defying the data, otoh, is a correct application of a contrapositive. “You claim X, and I accept X->Y, but I deny Y, and therefore I deny X. I have updated on your claim, but that wasn’t nearly enough to reverse the total weight of evidence about Y.” The difference is that this doesn’t involve saying that logical contradictions are ok, so if you ever see enough evidence for X that you can’t deny it all, you know something’s wrong.
Wouldn’t defying the data more mean “I deny that X, on its own, is sufficient to justify Y. I’ve updated based on X, but there was plenty of reason to have a really low prior belief in Y, and X, on its own, isn’t sufficient to overcome that, although it definitely is something we should look into, replicate the experiment, see what’s going on, etc...”?
Yes, but there’s also the part about “~Y predicts ~X, so I predict a decent chance that X will turn out to not be what you thought it was.” Which is why replication is one of the proposed next steps; and is also, I think, the part that RichardKennaway pointed to as a parallel.
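To put rough numbers on “updated on your claim, but not nearly enough”, here is a toy odds-form calculation with made-up figures (nothing from the comments above), where Y is the surprising hypothesis and X is the reported observation offered in its support.

```python
# Illustrative numbers only: "defying the data" in odds form.

prior_odds_Y = 1 / 1000      # assumed: Y starts out very implausible
likelihood_ratio = 20        # assumed: P(X | Y) / P(X | not-Y)

posterior_odds_Y = prior_odds_Y * likelihood_ratio
posterior_prob_Y = posterior_odds_Y / (1 + posterior_odds_Y)

print(round(posterior_prob_Y, 3))  # 0.02: a genuine update from ~0.001, yet not-Y is still ~50x more likely
```

Since not-Y remains far more likely after the update, the reply above predicts a decent chance that the report of X will itself turn out to be an error or fail replication, which is the contrapositive move being described.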