Yet a policy of “poor people should have fewer [X], rich people more” sounds heartless [...]
Indeed it does; any policy proposing new advantages for the rich and disadvantages for the poor sounds heartless, especially if it sounds like it’s intruding in people’s private lives (and the decision of whether to have kids is pretty darn private).
(I would probably tend to be in favor of such a policy, though a lot depends on how exactly it’s implemented. It’s not very surprising that it sounds heartless; it is, but that doesn’t make it automatically wrong.)
I find it funny that the policy seemingly advantages the rich and disadvantages the poor, but at this time both sides are totally free to go the other way and tend not to. You can talk about problems with access to birth control, but the rich could definitely have more children and do not.
I think “new advantages for the rich and disadvantages for the poor” hits on the problem precisely.
But note that the policy as stated doesn’t actually specify who would be advantaged or hurt by new incentives. The one suggestion that is specified, subsidized contraception, would disadvantage the disproportionately rich taxpayers and might be a greater advantage to disproportionately poor users.
Yet it’s perfectly natural to assume that the unspecified policy implementations would end up on net advantaging the rich and disadvantaging the poor, isn’t it? I suspect that even the most anti-libertarian people could give you an intuitive explanation of how regulatory capture works in cases like this.
any policy proposing new advantages for the rich and disadvantages for the poor sounds heartless
This might just be it!
Imagine a policy that disadvantages poor people and advantages rich people, yet ensures nearly everyone is better off because of it and there is less inequality overall. It seems to be the right choice from a utilitarian perspective, yet it sounds heartless even on the abstract level.
Do other policies of this kind produce similar responses and intuitions?
I would probably tend to be in favor of such a policy, though a lot depends on how exactly it’s implemented
That’s certainly quite a hedge. I think most people are abstractly in favor of protecting people from harm by the actions of violent extremists, but how many folks here, with the benefit of hindsight and accurate information, would pick the War on Terror, or trust the parties responsible for it in similar situations?
I’m not sure what your point is exactly.
If you’re saying that people tend to approve of vague policy proposals, and then, once they’re implemented, say “it was obvious that this was going to be a major screw-up!”, then yes, I fully agree; hence my hedge!
It’s still worth saying, to help identify whether the disagreement is about the goal of the policy or about the implementation details. In this case, I expect most disagreement to be about the goals, not about whether a decent implementation is likely.
What I was saying is that “I’m in favor of this, depending on how it’s implemented” feels suspicious, as a statement. Not in terms of your particular motives for saying it, but in terms of the underlying thought process (it’s a phrasing I’ve heard, and used, quite a number of times in a variety of similar contexts). I hadn’t thought about it quite like that until I saw it in this thread, though: the statement is very nearly meaningless. The hedging doesn’t seem like a realistic set of qualifications to the statement “I endorse this”, because there’s no analysis; it looks a lot more like leaving oneself a line of argumentative retreat. “I’m for that, unless it becomes so wildly unpopular later that I feel the need to retract the statement for social signalling reasons.”
The comment about the War on Terror is just me grounding that speculative statement in terms of real, existing means for enacting and enforcing policy in the real world. Those existing means are loaded with perverse incentives, conflicts of interest, signal loss, mission creep and other stuff, such that they can turn even uncontroversial, clearly beneficial goals (“suppress a certain class of risk”) into something Orwellian and nightmarish. So saying “I’m for this, depending on how it’s implemented” seems to ignore that in any realistic case it’s quite likely to be implemented quite badly, which makes the hedge feel more like a line of retreat, left instinctively, than a meaningful statement about the limits of your confidence or support for the idea.
the statement is very nearly meaningless. The hedging doesn’t seem like a realistic set of qualifications to the statement “I endorse this”, because there’s no analysis;
I agree that statements of the form “I’m in favor of a policy towards X, though a lot depends on the implementation” can often be pretty hollow. In this case, though, I don’t expect everybody to recognize X as a worthwhile goal, so it’s a way to keep the discussion about abstract goals rather than concrete policies. If people don’t agree about the goals, discussing the policies is premature.
it looks a lot more like leaving oneself a line of argumentative retreat. “I’m for that, unless it becomes so wildly unpopular later that I feel the need to retract the statement for social signalling reasons.”
Nah, eugenics is already quite unpopular (though not particularly among online nerds like us), but I find the principle perfectly reasonable, so I don’t think social signaling is playing a huge role here. It’s more like “I’m for that, but of course there can be some bad implementations, so I don’t need someone to come up and say ‘hey, but what if there’s a bad implementation?’”; that wouldn’t be a very productive discussion.
So saying “I’m for this, depending on how it’s implemented” seems to ignore that in any realistic case it’s quite likely to be implemented quite badly
Couldn’t the last bit be said about nearly any policy proposal? Plenty of policies that sound good on paper turn out to be trainwrecks, or at least to have a pretty crappy cost-benefit ratio.
There are three basic positions on a policy towards “poor people should have fewer children, rich people more”:
A: It’s a valid goal, why isn’t it done already?
B: It’s a valid goal, but the implementation is going to suck, so no.
C: It’s not a valid goal; don’t do it.
I expect a large chunk of the public to lean towards C; I lean towards A or B depending on the details, and you seem to lean mostly towards B. I’m vague about A or B not because I want to be able to claim B once it turns out to be unpopular (as far as I can tell, A and B are already unpopular among non-nerds), but because I think the distinction from C is more important, interesting and easy to discuss.
Does that make sense? Have I misunderstood or misrepresented you?
In which sense is it private? A person having X kids will have affected the lives of at least X other persons.
I agree that it isn’t particularly private, except perhaps in the sense that you technically aren’t affecting other people at the time of the decision, since those other people don’t exist yet. But, also, private doesn’t mean limited to a solitary individual, or else people wouldn’t speak of sex being private. I guess I’d define a private event as one where you can limit involvement (including knowledge of it) to those of your choosing. Perhaps possible with raising children, but not the norm.
Your definition is near to what I think of when I hear “private”, save that I would add that the event must be consensual for all the people involved. That is: “an activity performed by a set of persons can be considered private only if the direct consequences of the activity are limited to those in the set, and the activity is consensual for all those involved”*.
I may be projecting my own moral intuitions, but I think this is the definition that is informally evoked when there is talk of non-intrusion into others’ private lives; in this case, a right to non-intrusion seems morally defensible. However, the problem in my view is that sometimes the meaning of “private” is extended to situations where the right to non-intrusion is no longer so clearly worthy of defense.
*Actually, I think I would prefer to include all sentients in the definition, but I doubt that is a mainstream view at the moment.
Well, I don’t feel qualified to discuss whether the word as commonly used connotes justified secrecy and non-intrusion or simply the fact of the matter, but it would be useful to have words for both meanings (or else to taboo it and spell out the justification for non-intrusion/investigation when debating whether someone’s privacy is a suitable excuse).
A bit too political for LessWrong in my opinion …