This is what I was talking about. Please do prepare the posts; it’ll help you to clarify your position to yourself. Let them lie as drafts for a while, then make a decision about whether to post them. Note that your statements are about the form of human preference computation, not about the utility that computes the “should” following from human preferences. Do you know the derivation of the expected utility formula? You refer to a well-known finding that people avoid negative reward more than they seek positive reward.
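(For reference, the formula alluded to above — expected utility in its standard form, which the von Neumann–Morgenstern theorem derives from the axioms of completeness, transitivity, continuity, and independence:

```latex
EU(A) = \sum_{i} p(o_i \mid A)\, u(o_i)
```

where the $o_i$ are the possible outcomes of action $A$, $p(o_i \mid A)$ their probabilities, and $u$ the utility function. The point being that $u$ is a construct over preferences, not a description of how the preference computation is implemented.)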
You refer to a well-known finding that people avoid negative reward more than they seek positive reward.
Well, there is that too, of course, but actually the issues I’m talking about here are (somewhat) orthogonal. Negatively-motivated reasoning is less likely to be rational in large part because it’s more vague—it requires only that the source of negative motivation be dismissed or avoided, rather than a particular source of positive motivation be obtained. Even if negative and positive motivation held the same weight, this issue would still apply.
The literature I was actually referring to (about the largely asynchronous and simultaneous operation of negative and positive motivation) is what I linked in another comment here, after you accused me of making unorthodox and unsupported claims. In my posts, I also expect to cite at least one paper on “affective synchrony”: the degree to which our negative and positive motivation systems activate to the same degree at the same time.
Note that your statements are about the form of human preference computation, not about the utility that computes the “should” following from human preferences.
All I’m pointing out is that a rationalist who ignores the irrationality of the hardware their computations are being run on, while expecting to get good answers out of it, isn’t being very rational.