I suppose “culpable” is a better word than “guilt”. I think we’re having a definitional dispute that is obscuring what we actually disagree about.
I would ask what you mean by culpable that does not have the same problems as guilt. (I guess you mean responsibility in some sense?)
But instead, since you bring up that this may need more tabooing, can you explain what you think you and other people who seek to do good things should do about some bad thing happening, and why that’s the best approach?
To kick it off, I hold that [guilt, responsibility, culpability, etc] are features of a badly flawed social/moral protocol that may not be applicable or optimal in this (or any) case. I think moral agents like me and you and anyone else who cares should go back to first principles and derive the correct behavior, which may or may not be [guilt, etc].
The process I (and other instances of the moral process that I represent) should use goes like this: observe bad things, notice that fixing bad things is a good way to do a good thing, and look for my specific leverage in this case, which might be:
ceasing some antisocial behaviour or doing some direct object-level intervention
attempting to spread awareness of the problem to my other instances
attempting to inspire prosocial behavior by hacking the guilt system in humans (as you are doing)
attempting to create more instances of myself by spreading this go-back-to-first-principles idea (as I am doing here)
something else
Whatever is best out of that, I should evaluate against other interventions in other areas (like getting back to work at making money to donate to SI) to see if it is the easiest way to produce utility. Then I should do whatever is best.
This process is best because it derives directly from first principles of decision theory, which are known to work quite well.
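To make that comparison step concrete, here is a minimal sketch of the selection procedure in Python. The intervention names, utility estimates, and effort costs are all hypothetical placeholders made up for illustration; only the structure (estimate, normalize by effort, take the argmax) matters.

```python
# A minimal sketch of the "evaluate interventions, do the best one" step.
# All names and numbers below are hypothetical placeholders, not real estimates.

interventions = {
    "stop antisocial behaviour":    {"expected_utility": 2.0, "effort": 1.0},
    "spread awareness":             {"expected_utility": 5.0, "effort": 4.0},
    "inspire prosocial behaviour":  {"expected_utility": 4.0, "effort": 2.0},
    "spread first-principles idea": {"expected_utility": 6.0, "effort": 3.0},
    "earn money to donate":         {"expected_utility": 8.0, "effort": 5.0},
}

def utility_per_effort(item):
    """Rank interventions by expected utility produced per unit of effort."""
    name, stats = item
    return stats["expected_utility"] / stats["effort"]

best_name, best_stats = max(interventions.items(), key=utility_per_effort)
print(f"Best intervention under these made-up numbers: {best_name}")
```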
What do you think of that? If you are in fact doing intervention #3, as you seem to be, you could be a bit more conscious of it and open about it being consequentialist-instrumental and not deontological.
To answer the question I think I can answer—the way to change social norms is to perform the social norm you would like instead, in violation of the established norms.
This requires very sophisticated understanding of what the current norms are. If you partially violate a norm, that can sometimes strengthen the norm. Sometimes, apparently unrelated norms reinforce each other. Sometimes, the norm is just too strong and you end up being rejected from the community.
I’m confused about how you create a moral system without a concept of responsibility, but I suffer from the obvious bias that my moral system has “responsibility” as a foundational concept.
So that’s #1 on my list. OK.
This is scary. Do you have any advice for how to model the situation correctly, so that I don’t do something counterproductive?
I must say I am just as baffled by you. I guess you could say I subscribe to the consequentialist heroic responsibility idea that all instances of myself are ultimately “responsible” for everything that goes on in the universe, in the sense that there is nothing that is “not my responsibility”. Then the interesting question is “where can I do the most good for the things I am responsible for?”, not “what am I responsible for?”
I think having responsibility as fundamental creates a problem where you sometimes mark yourself as “not responsible” for something you could do a lot to fix, or mark yourself as “responsible” for something you can’t affect.
I agree that this is fundamentally what is occurring with most society-level injustices. Not sure why you think this is more likely a problem for my ethical structure than for yours. Most likely the misunderstanding is on my end.
I don’t know what your ethical structure is, except very vaguely.
I think my heroic-consequentialist ethics have no exploits like that, and that any system that disagrees will have such problems.
Is this a reference to the “All deontologists can be Dutch-booked, all consequentialists choose torture” issue?
Can all deontologists be Dutch-booked? If so, then it means something other than what I’m thinking of. (Unless I’m confused; I haven’t thought this through.)
Not all consequentialists choose torture either (in torture vs. dust specks, I assume). Pretty sure all utilitarians do, though.
The way I’m using those words is essentially: consequentialism = expected-utility maximization with a utility function that does not prescribe specific behaviours or thought patterns, and deontology = holding some non-EU set of ethical/behavioural rules as fundamental (usually stuff like “a moral duty to do X in Y situation” and whatnot).
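One rough way to formalize that usage (the symbols A, P, U are my own notation, not anything from the discussion):

```latex
% Consequentialism, on this usage: pick the act with the best expected outcome.
a^{*} \in \arg\max_{a \in A} \sum_{o} P(o \mid a)\, U(o)
% Deontology, on this usage: a rule set R directly mandates or forbids acts
% ("do X in situation Y"), with no such expectation over outcomes required.
```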
As far as I can tell, everyone who thinks suffering is additive is obligated to choose torture. Only if one denies that suffering can always be compared in an additive way is one free to reject torture and choose specks.
That means one’s evaluations of degree of suffering inherently have a discontinuity somewhere. Thus, one is vulnerable to being Dutch-booked/money-pumped by a sufficiently powerful and cruel adversary.
If this discussion about the possible additive nature of suffering/utility is alien to one’s moral reasoning, one might be able to escape the dilemma.
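Spelled out, the additive argument is a one-line piece of arithmetic. Here ε and T are stand-ins (my notation) for the disutility of one speck and of the torture:

```latex
% Additive aggregation: N specks at disutility \varepsilon each.
U_{\text{specks}}(N) = N\varepsilon
% For any fixed \varepsilon > 0 and finite T, a large enough N flips the choice:
N > T/\varepsilon \;\Longrightarrow\; N\varepsilon > T
% So whoever accepts additivity must choose torture once N is large enough.
```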
I’m not sure I understand why you use the word “discontinuity” here. In mathematical language, it’s easy to have a continuous, perpetually rising function that never reaches a certain value—just give it an asymptote.
If instances of dust specks are being counted in this manner, it’s pretty easy to keep the asymptote below the disutility of the torture.
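For concreteness, here is one bounded aggregator of the kind being described; the exponential form is just an illustrative choice of mine, not anything from the discussion above:

```latex
% Speck-disutility that rises continuously but saturates at a bound B:
U_{\text{specks}}(N) = B\,(1 - 2^{-N}) < B \quad \text{for all } N
% If the torture's disutility T satisfies T \ge B, then no number of specks
% ever outweighs the torture; the function is continuous and strictly
% increasing, so no discontinuity is needed.
```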
...but I’m probably misunderstanding part of the discussion, on second thought.
This post makes the point in more detail.
No, your answer is roughly what I was getting at. There is no reason a utility function has to be additive in human suffering.