Sorry for the delay, I just checked this: I think actual morality tends to systematically bias behaviour and ideas about ‘social’ life in directions contrary to fact, creating all sorts of personal and interpersonal problems. I also think it gives far too strong a presumption in favour of the benevolence of do-gooders, the sanity of ‘sticking to your guns, come what may’, and the wisdom of the popular.
There is a more general problem with cognitive dissonance and idea-consistency, due to the literal nonsensicality of most moral claims and sentiments. I also see that the alleged ‘gains’ from morality are frequently self-inflated, if not false to begin with, while the alternative (intellectual consistency and a recognition of purposeful action as aimed at subjective satisfaction) is vastly underrated, even by people of a ‘libertarian’ bent.
Most of this is less controversial here than elsewhere, with the exception of the reduction of all our goals to “subjective satisfaction”. Many LWers aspire to rational pursuit of our preferences, but with the important distinctions that

* we recognize that the optimal long-term strategy can differ greatly from the optimal one-shot analysis, and
* we have preferences about some anticipated world-states rather than just anticipated mind-states.
In response to this I say there is nothing about subjectivist satisfaction which prevents taking these (or anything else) into consideration. Further, I do not mean this in a utility-function sense, but rather ‘actual wants derived from valuation forecasts which result in intentionality’.
OK, I don’t understand that either.
Which is right and proper.