I find the no politics guideline a bit odd. I mean, shouldn’t a rational humanist arrive at certain political positions? Why not make those explicit?
I agree that the exercise of converging, based on a consideration of plausible consequences of plausible alternatives, on a set of policy positions that optimally support various clearly articulated sets of values, and doing so with minimal wasted effort and deleterious social side-effects, would be both a valuable exercise in its own right for a community of optimal rationalists, and a compelling demonstration for others of the usefulness of their techniques.
I would encourage any such community that happens to exist to go ahead and do that.
I would be very surprised if this community were able to do it productively, though.
I don’t think you’re right about it being a compelling demonstration of their techniques. People who already agreed precisely with the conclusions drawn might pretend to support them for signalling purposes, and everyone else would be completely alienated.
That’s certainly a possibility, yes.
For my own part, I think that if I saw a community come together to discuss some contentious policy question (moral and legal implications of abortion, say, or of war, or of economic policies that reduce disparities in individual wealth, or what-have-you) and conduct an analysis that seemed to me to avoid the pure-signaling pitfalls that such discussions normally succumb to (which admittedly could just be a sign of very sophisticated signaling), and at the end come out with a statement to the effect that the relevant underlying core value differences seem to be the relative weighting of X, Y, and Z; if X>Y then these policies follow, if Y>X these policies, and so on and so forth, I would find that compelling.
But I could be wrong about my own reaction… I’ve never seen it done, after all, I’m just extrapolating.
And even if I’m right, I could be utterly idiosyncratic.
I used to participate in a forum that was easily 50% trolls by volume and actively encouraged insulting language, and I think I got a more nuanced understanding of politics there than anywhere else in my life. There was a willingness to really delve into minutiae (“So you’d support abortion under X circumstances, but not Y?” “Yes, because of Z!”), which helped. Oddly, though, the active discouragement of civility meant that a normally “heated” debate felt the same as any other conversation there, and it was thus very easy not to feel personally invested in signaling and social standing (and anyone who did try to posture overly much would just be trolled into oblivion...)
I used to participate in such a forum, politicalfleshfeast.com -- it was composed mainly of exiles from DailyKos. Is this perhaps the same forum you’re talking about?
Politics is nearly all signalling. Positions that send good signals only occasionally overlap with positions that are rational.
Also, the other apes will bash my head in with a rock, so I really need to seem to be right even if I’m wrong. Being right on politics and the other side being wrong is a matter of life and death.
Talking about politics may be mostly signaling, but politics itself (that is, the decisions made and policies enacted) is something else that is really, really important.
If you care about the future of humanity and you have examined the evidence, then you should be concerned about global warming. I don’t understand why that statement should be any more controversial than being concerned about the Singularity.
Then I will get back to you as soon as I have meaningful influence over any policies enacted.
Good point. One interesting thing you can do is advocate for or attempt to participate in a revolution: the odds of succeeding may be very low, but the payoff of success could be almost arbitrarily large, and so the expected utility of doing so could be tremendous.
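The arithmetic behind that claim is simple expected-utility reasoning: a tiny probability times a huge payoff can still beat a near-certain modest gain. A minimal sketch, with all numbers invented purely for illustration:

```python
def expected_utility(p_success, payoff, cost):
    """Expected utility of an action: chance of success times payoff,
    minus a cost that is paid regardless of the outcome."""
    return p_success * payoff - cost

# Incremental activism: very likely to succeed, modest payoff.
incremental = expected_utility(p_success=0.9, payoff=10.0, cost=1.0)

# Revolution: tiny chance of success, enormous payoff (hypothetical numbers).
revolution = expected_utility(p_success=0.001, payoff=100_000.0, cost=5.0)

print(incremental, revolution)
```

With these made-up numbers the long shot dominates (95 vs. 8), though of course the conclusion is entirely sensitive to the probability and payoff estimates one plugs in.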
One would think so, but there seem to be many libertarians here.
Upvoted for self-aware irony.
Which certain political positions did you have in mind?
Well, for example, one should oppose the use of torture. Torture is Bad because it in and of itself reduces someone’s utility, and because it is ineffective and even counterproductive as a means of gathering information, so there isn’t a trade-off that could counteract its bad effects.
The word you are looking for is ‘nice’, not ‘rational’.
Hmm. I suspect there’s a tiny little bias, possibly politically influenced, whereby signalling that you are nice implies signalling that you are irrational: naive, woolly-minded, immature, not aware of how the world really works, whatever.
But it is rational for us to oppose torture because public acceptance of torture is positively correlated with the risk of members of the public being tortured. And who wants that? It is also negatively correlated with careful, dispassionate, and effective investigation of terrorism and other crimes.
I also oppose it because I love my neighbour, an ethical heuristic I would also defend, but it’s not to the point in this case.
That was assumed when I said that the person we’re describing is a humanist.
I suppose then that the site that your conclusion would apply to would be humanistcommunity.org, not lesswrong. ;)
If you could convince people that it’s ineffective and counterproductive, they wouldn’t need to be rationalists or even humanists in order to oppose it. So your opposition to torture (which I also oppose, btw) doesn’t seem like a conclusion that a rationalist is much more likely to arrive at than a non-rationalist—it seems primarily a question of disputed facts, not misapplied logic.
There’s one point that seems to me a failure of rationalism on the part of pro-torture advocates: they seem much more likely to excuse torture in the case of foreigners than in the case of their own countrymen. If the potential advantages of torture are so big, shouldn’t native crime bosses and crooks also be tortured for information? To me this is evidence that racism/tribal hostility is part of the reason they tolerate the application of torture to people of other nations.
Btw, I find “reduces someone’s utility” a very VERY silly way to say “it hurts people”.
Indeed, as revealed preferences show us that not torturing people reduces many people’s utility. It is a stretch to say it hurts them, however.
It would be trivial for me to construct a hypothetical where torture is unambiguously a good idea. It wouldn’t even be hard to make it seem a realistic situation; I might even be able to use a historical example. To call something generally irrational, or to claim that rationality is opposed to a thing, you have to make the argument that in principle it’s not possible for this to be either a terminal goal or the only available instrumental goal.
I think the original claim was that political opposition to torture was rational, assuming we are talking about the use of torture by the state to investigate crimes or coerce the population, domestic or abroad. That’s a less strong claim, and fairly reasonable as long as you allow for the unstated assumptions.
A much stronger claim, IMO
I’d be really curious to see this example, given that it’s an established fact that torture straight up doesn’t work as a means of gathering information.
Torturing someone to scare others into compliance.
To make it realistic: enemy soldiers captured as prisoners of war. In order to keep them from staging a breakout and slaughtering the civilians in the large town you’re defending, you torture the ringleader of the attempt, publicly and painfully sending a message.
Historically: Keelhauling for mutineers on sea vessels.
Unconvincing. You haven’t demonstrated that torture will result in the best outcome, even in a hypothetical situation where the participants are already Doing It Badly Wrong.
He did demonstrate that bgaesop’s reported fact applies in a limited domain, and that torture supposedly has other uses.