Most of my beliefs* on controversial issues stem from one of the following:
I voluntarily exposed myself to persuasive media (political magazines, fiction novels, this website) for entertainment purposes.
A certain position is in my self-interest. (I oppose using physically painful aversive stimuli as an autism treatment without informed consent directly from the patient, not only because I conveniently have a moral system that opposes them, but also because I do not want to be electrically shocked without my consent.)
I want to be a member of a political coalition because it is working towards one of my goals. The coalition also has some other goals that I didn’t originally care about either way, but membership required that I self-modify to care more about them, so I did so by reading lots of persuasive arguments online until I valued those goals at least slightly.
*The “beliefs” referenced don’t really meet the LW definition of belief; in terms of local concepts, they are much closer to utility function differences. My coalition agrees with other groups about what the consequences of [action X] will be, but disagrees about the moral value of those consequences. Many members of my coalition make arguments to [Group A] that are equivalent to telling humans “You should vote this way, because then more paperclips will exist.” For example, my coalition could have blamed [Organization C] for working with [near-universally politically toxic Group D]; instead, they complain that [Organization C]’s advertising materials portray the world as corresponding to my coalition’s opponents’ beliefs. This will not persuade anyone who does not already accept their entire argument.