My answer has long been an unequivocal “no”, on the grounds that I don’t see why it would be, and so “hurting people is bad” doesn’t get any exceptions it doesn’t need.
That’s the conclusion I keep coming to, but I have trouble justifying it to others. It’s just such an obvious built-in response that bad people deserve to be unhappy. I guess the inferential distance is too high.
Follow Up: What is your opinion of prisons? How unpleasant should they be?
Is the answer to the second question something like “the level of unpleasantness with the best ratio of [unpleasantness] to [efficacy in discouraging antisocial behavior], while favoring low unpleasantness and high discouragement”? (Feel free to tell me that the above sentence is unintelligible.)
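One rough way to make that precise (a sketch only; the symbols $u$, $D$, and $\lambda$ are placeholders I am introducing here, not anything established in the thread): let $u$ stand for how unpleasant the prison is and $D(u)$ for the harm prevented by the deterrent effect at that level of unpleasantness. Then

\[
u^{*} \;=\; \arg\max_{u \ge 0} \bigl[\, D(u) \;-\; \lambda\, u \,\bigr],
\]

where $\lambda$ weights the direct harm of the unpleasantness itself. A pure “best ratio” rule would instead pick $\arg\max_{u} D(u)/u$, which ignores the absolute amount of harm done; the clause about favoring low unpleasantness and high discouragement seems to be gesturing at the net-benefit version rather than the literal ratio.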
I think there are a number of issues that go into prison design. The glib answer is, “whatever produces the best outcomes,” but I understand that leaving it at that is profoundly unsatisfying. I don’t have the background in the domain to give a detailed answer, but I have some thoughts about things worth considering.
I generally take “unpleasant” to mean strongly “not liked” at the time. There is, however, a distinction between liking and wanting, in terms of how our brains deal with these things. For deterrence, we want the situation to be “not wanted”—how much people dislike being in jail while actually in jail is irrelevant.
It is also worth noting that both perceived degree of punishment and perceived likelihood of punishment matter.
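In the simplest expected-value terms (a toy model; $p$ and $s$ are placeholders of my own, and real deterrence is surely not this linear):

\[
\text{perceived deterrent} \;\approx\; p_{\text{perceived}} \times s_{\text{perceived}},
\]

where $p$ is the perceived probability of being caught and punished and $s$ is the perceived severity of the punishment. In this crude model, raising the perceived likelihood of punishment does about as much work as raising its perceived severity.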
A consequence of this that just occurred to me (and obviously, I’ve not chewed on it long so I expect there are some holes):
In some circumstances, we may make jail a stronger deterrent by making it more pleasant.
Consider, for instance, if jail time is being used to signal toughness and thereby acquire status in a given peer group. Cop shows and the like occasionally portray this kind of thing (particularly with musicians wishing to establish credibility—I think Bones did this more than once). The more prisoners are seen as abused, the stronger the signal. If prisoners are seen as pampered, that doesn’t work so well. I have no idea how much this hypothetical corresponds to reality in the first place, however, or under what circumstances this effect would dominate compared to countervailing pressures.
Slightly more glib: “Whatever produces the best outcomes for the decision maker”.
Thanks. That makes a ton of sense.
Semantic stop-sign alert!
Applause lights?
I was using an applause light? Is there a better way to express that my opinions on this matter seem really weird to people who have never heard of consequentialism and don’t spend much time thinking about the nature of morality (though neither do I, really)?
I think that signaling, “See, I read the sequences!” was not 0% of your motivation in phrasing that way. I don’t actually think it’s a big problem. I don’t think it was all that significant a portion of your motivation, or I would have commented directly.
I actually think that marking it as a semantic stop-sign was incorrect; while the phrase “the inferential distance is too high” could certainly be used that way, here it was a tangential issue you were (as I read it) putting on hold, not washing your hands of. What would your response have been if someone had asked to look at ways to shrink the inferential distance? I therefore think Oscar’s post is more of an applause light; he could have engaged more usefully, and instead chose simply to quote scripture at you.
The fact that one comment contained short snippets by two different posters, each amounting to basically nothing but a reference into the sequences, seemed worth commenting on. And what better way than to make the situation worse?
That’s probably fair. More than “See, I read the sequences!”, it was probably something like “Look, I fit in with you guys because we know the same obscure terms! And since I consider LW posters who seem smart high status this makes me high status by association!”. I didn’t verbally think that, of course, but still.
I don’t think it fits completely. I wasn’t trying to completely write off my inability to defend this view to others (it probably also has to do with the fact that my ideas aren’t fully formed), and I think the phrase does convey information. It means that the people I was referring to don’t have the background knowledge (mainly consequentialism) to make my views seem reasonable. Hence, high inferential distance.
False positive. (Does not appear to be a semantic stop sign.)