If you design a system for optimal resource usage under certain operating conditions, then you do not consider a failure outside those conditions a bug. You can always make a system more and more reliable at the expense of higher resource usage, but even in human-engineered systems, over-design is considered a mistake.
I don’t want to argue that the brain is an optimal trade-off in this sense, only that it is extremely hard to tell genuine bugs apart from fixes with strange side effects. Maybe the question itself is meaningless.
I am rather surprised that although the human brain did not evolve to be an abstract theorem prover but the controller of a procreation machine, it still performs remarkably well in quite a few logical and rational domains.
I suppose you’re saying that when a useful heuristic (allowing real-time approximate solutions to computationally hard problems) leads to biases in edge cases, it shouldn’t be considered a bug because the trade-off is necessary for survival in a fast-paced world.
I might disagree, but then we’d just be bickering about which labels to use within the analogy, which hardly seems useful. I suppose that instead of using the word “bug” for such situations, we could say that an imprecise algorithm is necessary because of a “hardware limitation” of the brain.
However, so long as there are more precise algorithms that can run on the same hardware (debiasing techniques), I would still consider the inferior algorithm to be “craziness”.
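The speed-versus-accuracy trade-off discussed above can be made concrete with a toy example (not from the original discussion; the instance and numbers are made up): a greedy nearest-neighbour heuristic for the travelling salesman problem runs in polynomial time but can return a suboptimal tour, while the exact brute-force search is guaranteed optimal but takes factorial time.

```python
# Toy sketch: exact vs. heuristic solutions to a computationally hard
# problem (TSP). Cities are random points; this is illustrative only.
from itertools import permutations
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(8)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Total length of the closed tour visiting cities in this order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exact_tour():
    # Brute force: O(n!) time, guaranteed optimal.
    return min(permutations(range(len(cities))), key=tour_length)

def nearest_neighbour_tour():
    # Greedy heuristic: O(n^2) time, no optimality guarantee.
    unvisited = set(range(1, len(cities)))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[order[-1]], cities[c]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

opt = tour_length(exact_tour())
greedy = tour_length(nearest_neighbour_tour())
assert greedy >= opt  # the heuristic can never beat the optimum
```

Whether the occasionally suboptimal greedy tour counts as a “bug” or as a necessary concession to limited compute is exactly the labeling question being debated here.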