I found the use of political examples grating, and wish we could enforce the “no politics” guideline more consistently.
The most grating part was that they relied on entirely naive assumptions. You don’t need to posit ‘don’t change your mind’ bias on the part of Josh Steiber’s peers. Just that none of them were under the misapprehension that they had joined the Salvation Army.
Consistently enforced ‘guideline’? Something in there verges on oxymoronic.
The soldier’s notion that he would not be expected to participate in bloody reprisals and in the violation of other people’s preferences was, historically speaking, hopelessly naive.
Fair enough; when I edited “rule” to “guideline” I should also have edited “enforce” to “follow”.
Now that is a sentiment that I can endorse.
The Iraq example was good and added to the post. I could go either way on the agriculture example. “We could replace you with an unthinking, unquestioning patriot and get the same result” could possibly be rephrased with “unthinking, unquestioning automaton”, but that version wouldn’t cause the same feeling for me in the pit of my stomach, the “I really don’t want to produce those results” feeling.
The Iraq example was awful because it is a very charged issue, with people lying DEEPLY on both sides. There are a lot of people (myself included) who have been there, and who have either seen the same thing and gotten different impressions (and hence beliefs) about it, or who have seen very different things and of course come away with different beliefs.
What Stieber did was an example of someone concluding that their actions were wrong and changing what they were doing because of it, at a very high cost. (Wrong, not irrational: a large part of why he thought his actions were wrong was that the people around him were acting contrary to his beliefs and to their own stated beliefs, and acting from emotion rather than reason. As an aside, much of his conversion seems to stem from his Christian beliefs, which I respect more than most people here seem to.) It is a bad example, however, because there are very logical reasons why what he did was wrong, and those get in the way of understanding the author’s point.
It would be like me arguing that I realized my diet, where I got most of my calories from starches and sugars, was wrong, so I switched to a diet much heavier in meat and fresh vegetables, and that eating things like soy and wheat is bad for you because of things like gluten, phyto-estrogens, and phytic acid. Now, it is true that I recognized a problem, did some research, evaluated the evidence, and made changes to my diet. This will be ignored in certain circles in favor of the position that EATING MEAT IS WRONG.
It is hard to get past the position (in my mind) that what Stieber did was wrong, and just deal with the point the author is making—that someone came to a decision and then made a change.
There is also the problem that the author slightly misrepresents the facts presented in the article. The people in Baghdad didn’t say “Yankees go home”; they suggested that they did not want Americans in their part of Baghdad. That is a very different thing.
This is actually a very subtle form of propaganda, and of signaling. It’s very rude.
(edited to fix a grammatical error)
This must be weighed against the proportion of the audience in whom such a phrase would inspire exactly the opposite reaction (or, more likely, a stronger but opposite one). Though it’s not the phrase itself but the associations the phrase triggers that would do the damage; few people want to be unthinking adherents of anything, but many have heard phrases like “unthinking and unquestioning” used to describe their political allies.
No idea what those proportions would be here, though.
I find the “no politics” guideline a bit odd. I mean, shouldn’t a rational humanist arrive at certain political positions? Why not make those explicit?
I agree that the exercise of converging, based on a consideration of plausible consequences of plausible alternatives, on a set of policy positions that optimally support various clearly articulated sets of values, and doing so with minimal wasted effort and deleterious social side-effects, would be both a valuable exercise in its own right for a community of optimal rationalists, and a compelling demonstration for others of the usefulness of their techniques.
I would encourage any such community that happens to exist to go ahead and do that.
I would be very surprised if this community were able to do it productively, though.
I don’t think you’re right about it being a compelling demonstration of their techniques. People who already agreed precisely with the conclusions drawn might pretend to support them for signalling purposes, and everyone else would be completely alienated.
That’s certainly a possibility, yes.
For my own part, I think that if I saw a community come together to discuss some contentious policy question (the moral and legal implications of abortion, say, or of war, or of economic policies that reduce disparities in individual wealth, or what-have-you), conduct an analysis that seemed to me to avoid the pure-signaling pitfalls such discussions normally succumb to (which admittedly could just be a sign of very sophisticated signaling), and at the end come out with a statement to the effect that the relevant underlying core value differences are the relative weightings of X, Y, and Z (if X>Y then these policies follow; if Y>X, these policies; and so on and so forth), I would find that compelling. A toy sketch of what such a statement might look like appears below.
But I could be wrong about my own reaction… I’ve never seen it done, after all, I’m just extrapolating.
And even if I’m right, I could be utterly idiosyncratic.
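Purely as illustration, here is a minimal sketch of the kind of weightings-to-policies statement described above. The values X and Y and the policy sets are hypothetical placeholders, not positions anyone in this thread has taken:

```python
# Hypothetical illustration: map an assumed relative weighting of two
# core values, X and Y, to the policy set that would follow from it.
# Every name here (X, Y, "policy set A/B") is a placeholder.
def implied_policies(weights):
    if weights["X"] > weights["Y"]:
        return "policy set A"  # what follows when X outweighs Y
    if weights["Y"] > weights["X"]:
        return "policy set B"  # what follows when Y outweighs X
    return "either; the disagreement must lie elsewhere"

print(implied_policies({"X": 0.6, "Y": 0.4}))  # -> policy set A
```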
I used to participate in a forum that was easily 50% trolls by volume and actively encouraged insulting language, and I think I got a more nuanced understanding of politics there than anywhere else in my life. There was a willingness to really delve into minutiae (“So you’d support abortion under X circumstances, but not Y?” “Yes, because of Z!”), which helped. Oddly, though, the active discouragement of civility meant that a normally “heated” debate felt the same as any other conversation there, and it was thus very easy not to feel personally invested in signaling and social standing (and anyone that did try to posture overly much would just be trolled into oblivion...)
I used to participate in such a forum, politicalfleshfeast.com -- it was composed mainly of exiles from DailyKos. Is this perhaps the same forum you’re talking about?
Politics is nearly all signalling. Positions that send good signals only occasionally overlap with positions that are rational.
Also the other apes will bash my head in with a rock so I really need to seem to be right even if I’m wrong. Being right on politics and the other side being wrong is a matter of life and death.
Talking about politics may be mostly signaling, but politics itself (that is, the decisions made and policies enacted) is something else that is really, really important.
If you care about the future of humanity and you have examined the evidence, then you should be concerned about global warming. I don’t understand how that statement should be any more controversial than being concerned about the Singularity.
Then I will get back to you as soon as I have meaningful influence over any policies enacted.
Good point. One interesting thing you can do is advocate for, or attempt to participate in, a revolution: the odds of success may be very low, but the payoff of succeeding could be almost arbitrarily large, and so the expected utility of doing so could be tremendous.
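To make the arithmetic concrete, here is a toy expected-utility calculation in the spirit of that comment; every number in it is invented for illustration, not an estimate of real revolutionary odds:

```python
# Toy expected-utility calculation for a long-shot, high-payoff action.
# All of these numbers are invented placeholders.
p_success = 1e-6         # assumed (tiny) probability the revolution succeeds
u_success = 1e9          # assumed (huge) utility if it does
u_failure = -10.0        # assumed utility cost of trying and failing

expected_utility = p_success * u_success + (1 - p_success) * u_failure
print(expected_utility)  # ~990: positive despite near-certain failure
```

The point is structural: as long as the payoff is large relative to the reciprocal of the success probability, the expected utility stays positive no matter how unlikely success is.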
One would think so, but there seem to be many libertarians here.
Upvoted for self-aware irony.
Which certain political positions did you have in mind?
Well, for example, one should oppose the use of torture. Torture is Bad because it in and of itself reduces someone’s utility, and because it is ineffective and even counterproductive as a means of gathering information, and so there isn’t a trade-off that could counteract the bad effects of torture.
The word you are looking for is ‘nice’, not ‘rational’.
Hmm. I suspect there’s a tiny little bias, possibly politically influenced, whereby signalling that you are nice implies signalling that you are irrational: naive, woolly-minded, immature, not aware of how the world really works, whatever.
But it is rational for us to oppose torture because public acceptance of torture is positively correlated with the risk of members of the public being tortured. And who wants that? It is also negatively correlated with careful, dispassionate, and effective investigation of terrorism and other crimes.
I also oppose it because I love my neighbour, an ethical heuristic I would also defend, but it’s not to the point in this case.
That was assumed when I said that the person we’re describing is a humanist.
I suppose then that the site that your conclusion would apply to would be humanistcommunity.org, not lesswrong. ;)
If you could convince people that it’s ineffective and counterproductive, they wouldn’t even need to be rationalists or even humanists in order to oppose it. So your opposition to torture (which I also oppose btw) doesn’t seem like a conclusion that a rationalist is much more likely to arrive at than a non-rationalist—it seems primarily a question of disputed facts, not misapplied logic.
There’s one point that seems to me a failure of rationalism on the part of pro-torture advocates: they seem much more likely to excuse it away when foreigners are tortured than when their own countrymen are. If the potential advantages of torture are so big, shouldn’t native crime bosses and crooks also be tortured for information? This, to me, is evidence that racism/tribal hostility is part of the reason they tolerate the application of torture to people of other nations.
Btw, I find “reduces someone’s utility” a very VERY silly way to say “it hurts people”.
Indeed: as revealed preferences show us, not torturing people reduces many people’s utility. It is a stretch to say it hurts them, however.
It would be trivial for me to construct a hypothetical where torture is unambiguously a good idea. It wouldn’t even be hard to make it seem a realistic situation; I might even be able to use a historical example. To call something generally irrational, or to claim that rationality is opposed to a thing, you have to make the argument that in principle it’s not possible for this to be either a terminal goal or the only available instrumental goal.
I think the original claim was that political opposition to torture was rational, assuming we are talking about the use of torture by the state to investigate crimes or coerce the population, domestic or abroad. That’s a less strong claim, and fairly reasonable as long as you allow for the unstated assumptions.
A much stronger claim, IMO.
I’d be really curious to see this example, given that it’s an established fact that torture straight up doesn’t work as a means of gathering information.
Torturing someone to scare others into compliance.
To make it realistic: enemy soldiers captured as prisoners of war. In order to keep them from staging a breakout and slaughtering the civilians in the large town you’re defending, you torture the ringleader of the attempt, publicly and painfully sending a message.
Historically: Keelhauling for mutineers on sea vessels.
Unconvincing. You haven’t demonstrated that torture will result in the best outcome, even in a hypothetical situation where the participants are already Doing It Badly Wrong.
He did demonstrate that bgaesop’s reported fact applies in a limited domain, and that torture supposedly has other uses.