I am not sure this type of “craziness” itself is always a bug.
Irrational beliefs and behaviors often have perfectly rational explanations that make sense from a mental health point of view: humans are more emotional than logical creatures. Internally coping with (often unconscious) emotional problems can be a higher-priority personal task than correlating with reality in every possible respect.
An emotion that doesn’t correlate with reality is itself a bug. Sure, it may not be easy to fix (or even possible without brain-hacking), but it’s a bug in the human source code nonetheless.
To extend the analogy, it’s like a bug in the operating system. If that low-level bug causes a higher-level program to malfunction, you can still blame “buggy code” even if the higher-level program itself is bug-free.
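To make the layering concrete, here is a minimal sketch (toy code, hypothetical names, not anyone's actual system): a buggy low-level routine feeds a perfectly correct high-level one, and the combined output is still wrong.

```python
# Illustrative sketch only: the bug lives in the low-level layer, yet the
# whole system misbehaves even though the high-level rule is bug-free.

def read_temperature_celsius():
    # "Operating system" layer: buggy -- it silently returns Fahrenheit.
    return 98.6  # should have been ~37.0

def should_see_doctor(temp_celsius):
    # "Application" layer: correct relative to its spec (input in Celsius).
    return temp_celsius >= 38.0

# Wrong overall output, but the blame sits with the lower layer, not with
# the bug-free decision rule that consumed its output.
print(should_see_doctor(read_temperature_celsius()))  # True, incorrectly
```

That is the sense in which a miscalibrated emotion can still count as the bug even when the reasoning built on top of it is sound.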
If you design a system for optimal resource usage under certain operating conditions, then you do not consider a failure outside those operating conditions a bug. You can always make the system more and more reliable at the expense of higher resource usage, but even in human-engineered systems, over-design is considered a mistake.
I don’t want to argue that the brain is an optimal trade-off in this sense, only that it is extremely hard to tell the genuine bugs from the fixes that merely have strange side effects. Maybe the question itself is meaningless.
I am rather surprised that although the human brain did not evolve to be an abstract theorem prover but rather the controller of a procreation machine, it still performs remarkably well in quite a few logical and rational domains.
I suppose you’re saying that when a useful heuristic (allowing real-time approximate solutions to computationally hard problems) leads to biases in edge cases, it shouldn’t be considered a bug because the trade-off is necessary for survival in a fast-paced world.
I might disagree, but then we’d just be bickering about which labels to use within the analogy, which hardly seems useful. I suppose that instead of using the word “bug” for such situations, we could say that an imprecise algorithm is necessary because of a “hardware limitation” of the brain.
However, so long as there are more precise algorithms that can run on the same hardware (debiasing techniques), I would still consider the inferior algorithm to be “craziness”.
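To put the same point in code, here is a rough sketch (toy numbers, the standard 0/1 knapsack example; nothing here is anyone's actual model of cognition): a fast heuristic that misfires in an edge case next to an exact algorithm that runs on the same hardware, just more slowly.

```python
# Illustrative sketch: a cheap heuristic vs. an exact algorithm for the same
# computationally hard problem, both runnable on the same "hardware".

from itertools import combinations

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight)
capacity = 50

def greedy_value(items, capacity):
    # Fast heuristic: take items by value density; cheap, but can miss the optimum.
    total_value, total_weight = 0, 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if total_weight + weight <= capacity:
            total_value += value
            total_weight += weight
    return total_value

def exact_value(items, capacity):
    # Exhaustive search: exponential time, but exact -- the "debiased" algorithm.
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= capacity:
                best = max(best, sum(v for v, _ in combo))
    return best

print(greedy_value(items, capacity))  # 160: the heuristic's biased answer
print(exact_value(items, capacity))   # 220: the optimum, on the same hardware
```

As long as the less biased version actually fits on the hardware and you can afford to run it, sticking with the heuristic in a case like this is what I mean by "craziness".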
An emotion that doesn’t correlate with reality is itself a bug.
Even if it’s advantageous to the agent’s goals (not evolutionary fitness)? Emotions don’t have XML tags that say “this should map to reality in the following way”.
My response was to Christian’s implication that a rationality program isn’t necessarily buggy for outputting irrational behaviors because it must account for human emotions. My point was that human emotions are part of the human rationality program (whether we can edit our source code or not) and that if they cause an otherwise bug-free rationality program to output irrational behaviors, then the emotions themselves are the bugs.
In your response, you asked about emotions that produce behaviors advantageous to the agent’s goals, which is rational behavior, not irrational behavior as was stipulated in Christian’s post.
If those emotions are part of an otherwise bug-free rationality program that outputs rational beliefs and behaviors, then there is no bug. And that’s what it means for an emotion to be correlated with reality, precisely because there are no XML tags mapping certain neural spike patterns (i.e. emotions) to the state of reality.
Emotions aren’t beliefs about the world that can be verified by looking at the territory. Emotions are threads within the running program that maps and traverses the territory, so the only thing it can mean for them to correlate with reality is that they don’t cause the program to malfunction.
What I was trying to point out to Christian is that emotions are part of the system, not outside of it. So if the system produces irrational behavior, then the system as a whole is irrational, even if some of the subroutines are rational in isolation.
The irrationality of the emotions doesn’t somehow cancel out against the irrationality of the outputs to make the whole system rational.
A male having a higher opinion of himself (pride) than he realistically deserves may prove evolutionarily advantageous. If this disconnect from reality improves reproductive fitness, then it can’t be considered a bug.
Of course it can be considered a bug, if I do the considering and I don’t give two cupcakes for reproductive fitness.
And what if something that was advantageous at one time is now a liability?
If anything, your lack of concern for your prime directive, i.e. reproduction, is a sure sign of defective programming.
Now adjust your thinking and attitude (bug fix) and get out there and score some babes.