I agree that converging, based on a consideration of the plausible consequences of plausible alternatives, on a set of policy positions that optimally support various clearly articulated sets of values, and doing so with minimal wasted effort and deleterious social side-effects, would be both a valuable exercise in its own right for a community of optimal rationalists and a compelling demonstration to others of the usefulness of their techniques.
I would encourage any such community that happens to exist to go ahead and do that.
I would be very surprised if this community were able to do it productively, though.
I don’t think you’re right about it being a compelling demonstration of their techniques. People who already agreed precisely with the conclusions drawn might pretend to support them for signalling purposes, and everyone else would be completely alienated.
That’s certainly a possibility, yes.
For my own part, I think that if I saw a community come together to discuss some contentious policy question (the moral and legal implications of abortion, say, or of war, or of economic policies that reduce disparities in individual wealth, or what-have-you), conduct an analysis that seemed to me to avoid the pure-signaling pitfalls such discussions normally succumb to (which admittedly could just be a sign of very sophisticated signaling), and at the end come out with a statement to the effect that the relevant underlying core value differences seem to be the relative weighting of X, Y, and Z (if X>Y then these policies follow, if Y>X these policies, and so on), I would find that compelling.
But I could be wrong about my own reaction… I’ve never seen it done, after all; I’m just extrapolating.
And even if I’m right, I could be utterly idiosyncratic.
I used to participate in a forum that was easily 50% trolls by volume and actively encouraged insulting language, and I think I got a more nuanced understanding of politics there than anywhere else in my life. There was a willingness to really delve into minutiae (“So you’d support abortion under X circumstances, but not Y?” “Yes, because of Z!”), which helped. Oddly, though, the active discouragement of civility meant that a normally “heated” debate felt the same as any other conversation there, and it was thus very easy not to feel personally invested in signaling and social standing (and anyone who did try to posture overly much would just be trolled into oblivion...)
I used to participate in such a forum, politicalfleshfeast.com -- it was composed mainly of exiles from DailyKos. Is this perhaps the same forum you’re talking about?