To answer the question I think I can answer: the way to change social norms is to perform the norm you would like to see instead, in violation of the established norms.
So #1 on my list. OK.
This requires a very sophisticated understanding of what the current norms are. If you partially violate a norm, that can sometimes strengthen it. Sometimes apparently unrelated norms reinforce each other. Sometimes the norm is just too strong and you end up being rejected from the community.
This is scary. Have any advice for how to model the situation correctly such that I don’t do something counterproductive?
I’m confused about how you create a moral system without a concept of responsibility, but I suffer from the obvious bias that my moral system has “responsibility” as a foundational concept.
I must say I am just as baffled by you. I guess you could say I subscribe to the consequentialist heroic responsibility idea that all instances of myself are ultimately “responsible” for everything that goes on in the universe, in the sense that there is nothing that is “not my responsibility”. Then the interesting question is “where can I do the most good for the things I am responsible for?”, not “what am I responsible for?”
I think having responsibility as fundamental creates a problem where you sometimes mark yourself as “not responsible” for something you could do a lot to fix, or mark yourself as “responsible” for something you can’t affect.
creates a problem where you sometimes mark yourself as “not responsible” for something you could do a lot to fix
I agree that this is fundamentally what is occurring with most society-level injustices. Not sure why you think this is more likely a problem for my ethical structure than for yours. Most likely the misunderstanding is on my end.
I don’t know what your ethical structure is, except very vaguely.
I think my heroic consequentialism ethics have no exploits like that, and that any system that disagrees will have such problems.
Is this a reference to the “All deontologists can be Dutch-booked, all consequentialists choose torture” issue?
Can all deontologists be Dutch-booked? Then it means something other than what I’m thinking of. (Unless I’m confused; I haven’t thought this through.)
Not all consequentialists choose torture either (in dust specks, I assume). Pretty sure all utilitarians do, though.
The way I’m using those words is essentially: consequentialism = expected-utility maximization with a utility function that does not prescribe specific behaviours or thought patterns, and deontologism = holding some non-EU set of ethical/behavioural rules as fundamental (usually stuff like “a moral duty to do X in Y situation” and whatnot).
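To make that concrete, here is one way to write the decision rule I have in mind (my own notation, not anything canonical):

$$a^{*} = \arg\max_{a \in A} \; \sum_{o \in O} P(o \mid a)\, U(o)$$

where $A$ is the set of available actions, $O$ the set of outcomes, and $U$ scores outcomes only; nothing in $U$ refers to which behaviours or thought patterns produced the outcome, which is the contrast with deontologism as I’m using it.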
As far as I can tell, everyone who thinks suffering is additive is obligated to choose torture. Only if one denies that suffering can always be compared in an additive way is one free to reject torture and choose specks.
That means one’s evaluations of degree of suffering inherently have a discontinuity somewhere. Thus, one is vulnerable to being Dutch-booked/money-pumped by a sufficiently powerful and cruel adversary.
If this discussion about the possible additive nature of suffering/utility is alien to one’s moral reasoning, one might be able to escape the dilemma.
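To spell the additive argument out (a sketch in my own symbols, not anyone’s official formulation): if each speck contributes a fixed positive disutility $u_{\text{speck}}$ and disutilities simply sum, then

$$n \cdot u_{\text{speck}} > u_{\text{torture}} \quad \text{whenever} \quad n > \frac{u_{\text{torture}}}{u_{\text{speck}}},$$

so for any positive $u_{\text{speck}}$ some finite $n$ (and certainly the 3^^^3 of the original thought experiment) makes the specks worse, and the additive chooser must pick torture.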
I’m not sure I understand why you use the word “discontinuity” here. In mathematical language, it’s easy to have a continuous, perpetually rising function that never reaches a certain value: just give it an asymptote.
If instances of dust specks are being counted in this manner, it’s pretty easy to have the asymptote always stay below the torture-time (see the sketch below).
...but I’m probably misunderstanding part of the discussion, on second thought.
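Here is a toy version of the asymptote construction (my own example, just to show the shape of the escape): let the total disutility of $n$ specks be

$$D(n) = C\left(1 - e^{-n/k}\right), \qquad C, k > 0.$$

$D$ is continuous and strictly increasing in $n$, yet bounded above by $C$; if $u_{\text{torture}} > C$, then no finite number of specks ever outweighs the torture, and there is no discontinuity anywhere.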
This post makes the point in more detail.
No, your answer is roughly what I was getting at. There is no reason a utility function has to be additive in human suffering.