My response was to Christian’s implication that a rationality program isn’t necessarily buggy for outputting irrational behaviors because it must account for human emotions. My point was that human emotions are part of the human rationality program (whether we can edit our source code or not) and that if they cause an otherwise bug-free rationality program to output irrational behaviors, then the emotions themselves are the bugs.
In your response, you asked about emotions that produce behaviors advantageous to the agent’s goals. That is rational behavior, not the irrational behavior stipulated in Christian’s post.
If those emotions are part of an otherwise bug-free rationality program that outputs rational beliefs and behaviors, then there is no bug. And that is all it can mean for an emotion to be “correlated with reality,” precisely because there are no XML tags mapping particular neural spike patterns (i.e., emotions) onto states of the world.
Emotions aren’t beliefs about the world that can be verified by looking at the territory. Emotions are threads within the running program that maps and traverses the territory, so the only thing it can mean for them to correlate with reality is that they don’t cause the program to malfunction.
What I was trying to point out to Christian is that emotions are part of the system, not outside of it. So if the system produces irrational behavior, then the system as a whole is irrational, even if some of the subroutines are rational in isolation.
The irrationality of the emotions doesn’t somehow cancel out against the irrationality of the outputs to make the whole system rational.
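To make the analogy concrete, here is a toy sketch in Python (all names hypothetical, purely illustrative, not anyone’s actual model): each subroutine is fine in isolation, but an “emotion” step biases the estimate, so the composed program still outputs the wrong behavior.

```python
# Toy analogy: each piece looks fine on its own, yet the composed
# system outputs the wrong action, so the system as a whole is buggy.

def perceive(evidence: float) -> float:
    """Estimate a probability from evidence (rational in isolation)."""
    return max(0.0, min(1.0, evidence))

def fear_adjustment(probability: float) -> float:
    """An 'emotion' thread that systematically inflates the perceived threat."""
    return min(1.0, probability + 0.4)  # built-in bias

def decide(probability: float) -> str:
    """Act on the estimate (rational in isolation)."""
    return "flee" if probability > 0.5 else "stay"

# Run the whole pipeline: perception and decision are each sound,
# but the emotional adjustment makes the overall output irrational.
estimate = perceive(0.2)                     # 0.2
action = decide(fear_adjustment(estimate))   # "flee", despite weak evidence
print(action)
```

The point of the sketch is only that correctness of the parts in isolation doesn’t transfer to the whole: the bias lives inside the system, so the system’s output is what gets judged.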