I think the reason the values/biases you described (risk aversion, justice, responsibility) initially caused you confusion is that all of them (as other commenters pointed out) closely resemble behaviors a calculating consequentialist would adopt to achieve its values, even if it lacked those values itself. For instance, a consequentialist with strong desires for love and beauty, but no desire for justice, would still behave somewhat similarly to a consequentialist with a desire for justice, because it sees how taking action to deter negative behaviors by other agents helps it achieve values such as love and beauty.
It seems like this is a case where evolution gave us a double dose. It gave us consequentialist brains to reason out how to achieve our values (which maximized IGF in the AE, of course), but just in case we were too dumb to figure it out, it made certain consequentialist heuristics (seek justice, don’t take stupid chances) terminal values too.
Where it gets confusing is that this means these values are uniquely conducive to our brain’s rationalization-generating engine. Your brain, when asked why you are trying to achieve justice, could spit out either “to deter bad behavior” or “because I desire justice,” and both would be true. Hence the initial confusion over whether these values/biases were terminal or instrumental.
Why couldn’t the same be said about love or beauty?
Yes, well said.