Stupidity is the lack of mental horsepower. A stupid person has a weak or inefficient “cognitive CPU”.
Craziness is when the output of the “program” doesn’t correlate reliably with reality due to bugs in the “source code”. A crazy person has a flawed “cognitive algorithm”.
It seems that in humans, source code can be revised to a certain degree, but processing power is difficult (though not impossible) to upgrade.
So calling someone crazy (for the time being) is certainly different from calling someone stupid.
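To caricature the distinction in code — a toy Python sketch of my own, not a claim about how brains actually work:

```python
import time

def correct_but_slow(xs):
    """'Stupid': the algorithm is sound, but limited horsepower
    (simulated here by an artificial delay) slows everything down."""
    time.sleep(0.01)          # stand-in for a weak "cognitive CPU"
    return sum(xs) / len(xs)  # the right answer, eventually

def fast_but_flawed(xs):
    """'Crazy': plenty of speed, but a bug in the source code means the
    output doesn't correlate reliably with reality."""
    return sum(xs) / (len(xs) + 1)  # off-by-one bug in the algorithm

data = [2.0, 4.0, 6.0]
correct_but_slow(data)   # 4.0 -- slow, but right
fast_but_flawed(data)    # 3.0 -- fast, but wrong
```

Revising the second function's source is easy; upgrading the first one's "hardware" is not — which is roughly the asymmetry noted above.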
Excellent distinction, Yasser.
I would add one more case:
Wrongness is when the output of the “program” doesn’t correlate reliably with reality. But this can happen not only because the algorithm is flawed (wrong because crazy), but also because of insufficient or incorrect input. I think this is an important distinction, because a person can be smart (non-stupid) and rational (non-irrational = non-crazy) but still wrong — and those around them would undeservedly call them “crazy” or “stupid”.
Example: a CEO takes a calculated risk and is fired because the company, under his guidance, flipped the coin and got heads instead of the desired tails. Stakeholders expected him to be omniscient.
Those CEOs who get it right will be perceived as omniscient gurus. Hindsight bias will make them write books on how to be successful; survivorship bias will lure people into buying them.
Not being crazy makes your output less wrong. But it doesn’t guarantee that your output is right, either.
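The garbage-in case can be sketched the same way (hypothetical numbers and a toy decision rule of my own):

```python
def best_decision(p_success, payoff, cost):
    """A sound expected-value rule: no stupidity, no craziness."""
    return "take the risk" if p_success * payoff > cost else "play it safe"

# Accurate input -> right output.
best_decision(0.6, 100, 50)   # 'take the risk'

# Same correct algorithm, bad input -> wrong output. The decision-maker
# isn't stupid or crazy; the estimate fed in was simply off.
best_decision(0.2, 100, 50)   # 'play it safe' -- the wrong call if the true p was 0.6
```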
If I didn’t get it wrong in my analysis above (puns intended), would it be fair to say that this community, having the mission to fix the biases in our algorithms, should be even more appropriately called Less Crazy instead?
Also yup.
I am not sure this type of “craziness” itself is always a bug.
Irrational beliefs and behaviors often have perfectly rational explanations that make sense from a mental-health point of view: humans are more emotional than logical creatures. Internally coping with (often unconscious) emotional problems can be a higher-priority personal task than correlating with reality in every possible respect.
An emotion that doesn’t correlate with reality is itself a bug. Sure, it may not be easy to fix (or even possible without brain-hacking), but it’s a bug in the human source code nonetheless.
To extend the analogy, it’s like a bug in the operating system. If that low-level bug causes a higher-level program to malfunction, you can still blame “buggy code” even if the higher-level program itself is bug-free.
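A toy sketch of that layering (invented names, purely illustrative):

```python
def os_read_sensor():
    """Hypothetical low-level layer with the 'operating system' bug:
    it reports reality with a constant skew."""
    true_value = 20.0
    return true_value + 5.0

def rational_inference():
    """Bug-free at its own level: it reasons correctly from its input."""
    reading = os_read_sensor()
    return reading > 22.0  # sound logic, corrupted premise

rational_inference()  # True -- wrong about reality (20.0 is not > 22.0),
                      # even though every line of this function is correct
```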
If you design a system with optimal resource usage for certain operating conditions, then you do not consider a failure outside those operating conditions a bug. You can always make the system more and more reliable at the expense of higher resource usage, but even in human-engineered systems, over-design is considered a mistake.
I don’t want to argue that the brain is an optimal trade-off in this sense, only that it is extremely hard to tell the genuine bugs from the fixes with some strange side-effects. Maybe the question itself is meaningless.
I am rather surprised by the fact that although the human brain was not evolved to be an abstract theorem prover but the controller of a procreation machine, it still performs remarkably well in quite a few logical and rational domains.
I suppose you’re saying that when a useful heuristic (allowing real-time approximate solutions to computationally hard problems) leads to biases in edge cases, it shouldn’t be considered a bug because the trade-off is necessary for survival in a fast-paced world.
I might disagree, but then we’d just be bickering about which labels to use within the analogy, which hardly seems useful. I suppose that instead of using the word “bug” for such situations, we could say that an imprecise algorithm is necessary because of a “hardware limitation” of the brain.
However, so long as there are more precise algorithms that can run on the same hardware (debiasing techniques), I would still consider the inferior algorithm to be “craziness”.
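The “more precise algorithm on the same hardware” point can be illustrated with a satisficing-vs-optimizing toy (my own example, not a debiasing technique from any literature):

```python
def satisfice(options, good_enough):
    """Fast heuristic: take the first option that clears a threshold.
    Cheap, and usually fine -- but biased toward early options."""
    for o in options:
        if o >= good_enough:
            return o
    return max(options)  # fallback when nothing clears the bar

def optimize(options):
    """More precise algorithm runnable on the same hardware: scan everything."""
    return max(options)

opts = [3, 7, 9, 2]
satisfice(opts, good_enough=5)  # 7 -- fast, good enough for survival
optimize(opts)                  # 9 -- the 'debiased' answer, at full-scan cost
```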
Even if it’s advantageous to the agent’s goals (not evolutionary fitness)? Emotions don’t have XML tags that say “this should map to reality in the following way”.
My response was to Christian’s implication that a rationality program isn’t necessarily buggy for outputting irrational behaviors because it must account for human emotions. My point was that human emotions are part of the human rationality program (whether we can edit our source code or not) and that if they cause an otherwise bug-free rationality program to output irrational behaviors, then the emotions themselves are the bugs.
In your response, you asked about emotions that produce behaviors advantageous to the agent’s goals, which is rational behavior, not irrational behavior as was stipulated in Christian’s post.
If those emotions are part of an otherwise bug-free rationality program that outputs rational beliefs and behaviors, then there is no bug. And that’s what it means for an emotion to be correlated with reality, precisely because there are no XML tags mapping certain neural spike patterns (i.e. emotions) to the state of reality.
Emotions aren’t beliefs about the world that can be verified by looking at the territory. Emotions are threads within the running program that maps and traverses the territory, so the only thing it can mean for them to correlate with reality is that they don’t cause the program to malfunction.
What I was trying to point out to Christian is that emotions are part of the system, not outside of it. So if the system produces irrational behavior, then the system as a whole is irrational, even if some of the subroutines are rational in isolation.
The irrationality of the emotions doesn’t somehow cancel out the irrationality of the outputs to make the whole system rational.
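A minimal sketch of that composition point (contrived functions of my own):

```python
def perceive(x):
    """Fine in isolation: a faithful pass-through of the evidence."""
    return x

def emotional_weighting(x):
    """Also doing exactly what it was built to do -- which happens
    to be systematically inflating the estimate."""
    return x + 10

def system_output(true_value):
    # Each subroutine behaves as designed; the composed system
    # still fails to track the territory.
    return emotional_weighting(perceive(true_value))

system_output(5)  # 15, not 5
```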
A male having a higher opinion of himself (pride) than he realistically deserves may prove evolutionarily advantageous. If this disconnect from reality improves reproductive fitness then it can’t be considered a bug.
Of course it can be considered a bug, if I do the considering and I don’t give two cupcakes for reproductive fitness.
And what was advantageous at one time may now be a liability.
If anything, your lack of concern for your prime directive, i.e. reproduction, is a sure sign of defective programming.
Now adjust your thinking and attitude (bug fix) and get out there and score some babes.
To use your analogy: any person who doesn’t provide the expected output is often deemed crazy… It doesn’t mean that there is a bug in the person; perhaps sometimes it’s a bug in reality.
I’ve talked to a number of people whom most would call crazy (none of them ended up in the madhouse — at least that I know of). When you begin to look at things from their perspective, you sometimes find that they see patterns others are missing; but they lack the social graces to relate those patterns to others, and their unique way of seeing things is lost on everyone else.
On the other hand, I think that we are all “crazy” and “stupid” in our own ways. I think there are really extreme cases of both.
In the context of my analogy, it’s nonsense to say that reality can have bugs.
I suppose you meant that sometimes the majority of people can share the same bug, which causes them to “deem” that someone who lacks the bug (and outputs accordingly) is crazy.
But there’s still an actual territory that each program either does or does not map properly, regardless of society’s current most popular map. So it’s meaningful to define “craziness” in terms of the actual territory, even if it’s occasionally difficult to determine whether one person is crazy or “everyone else” is.
I suppose what I was referring to is a spec bug: the bug is in expecting the wrong (socially accepted) output, not an actual “the universe hiccupped and needs to be rebooted.” The reason for the spec bug might not be a shared bug, but programs operating on different inputs. For instance, Tesla… Anyone who knew Tesla described him as an odd man, and a little crazy. At the same time, he purposefully filled his input buffer with the latest research on electricity and purposefully processed that data differently than his peers in the field. He didn’t spend much time accumulating input on proper social behavior, or on how others would judge him on the streets. Picking up wounded pigeons on the street, taking them home, and nursing them back to health is seen as a crazy thing to do — but only because the spec of the time (the norms of society) said it was odd.
An old friend of mine whom I haven’t seen in years is an artist. He’s a creative-minded person who thinks that rationality would tie his hands too much. That said, when I was younger it surprised me what kinds of puzzles he was able to solve, because he’d try the thing that seemed irrational.