Having more contrarians would be bad for the signal-to-noise ratio on LW, which is already not as high as I’d like it to be. Can we obtain contrarian ideas more cheaply? For example, one could ask Carl Shulman for a list of promising counterarguments to X, rated by strength, and start digging from there. I’d be pretty interested to hear his responses for X=utilitarianism, the Singularity, FAI, or UDT.
I made a post on a personal blog about what I see as one of the more significant points against utilitarianism. It’s very rough, but I could cross-post it to Discussion if people wanted.
I really like how you frame the choice between altruism and selfishness as a range of different “original positions” an agent may assume. Thanks a lot, and please do more of this kind of work!
To generalize, this suggests re-purposing existing LWers to the role of contrarians, rather than looking for new people.
Or designing a mechanism or environment that makes it easier for existing LW contrarians to express their ideas.
(My personal experience is that trying to defend a contrarian position on LW results in a lot of personal cheap shots, unnecessarily-aggressively-phrased counter-affirmations, or needless re-affirmations of the LW consensus. (E.g., I remember one LWer said he was trying to “tar and feather [me] with low-status associations”. He was probably exaggerating, but still.) This stresses me out a lot and causes me to make errors in presentation and communication, and needlessly causes me to become adversarial. Now when discussing contrarian topics I start out adversarial in anticipation of personal cheap shots et cetera. Most of the onus is on me, but still, I think higher general standards or some sideways change in the epistemic environment could make constructive contrarianism a less stressful role for LWers to take up.)
Require X amount of karma to pay Y amount for an anonymous comment?
Require X amount of karma to pay for Y amount of karma added to your post so that it’s more likely to be seen, or to counteract downvotes?
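For concreteness, here is a minimal sketch of how a karma-spend mechanism along the lines of the two suggestions above might work. The thresholds, costs, and function names are purely hypothetical illustrations, not drawn from the actual LW codebase:

```python
# Hypothetical karma-spend mechanism (all names and numbers invented for illustration;
# nothing here reflects how LW karma actually works).

MIN_KARMA_TO_SPEND = 100   # X: karma required before paid features unlock
ANON_COMMENT_COST = 20     # Y: karma burned to post one anonymous comment
BOOST_COST_PER_POINT = 2   # karma burned per point of visibility boost


class KarmaError(Exception):
    pass


def spend_karma(user_karma: int, cost: int) -> int:
    """Deduct `cost` from a user's karma, provided they clear the minimum threshold."""
    if user_karma < MIN_KARMA_TO_SPEND:
        raise KarmaError("not enough karma to use paid features")
    if cost > user_karma:
        raise KarmaError("cost exceeds available karma")
    return user_karma - cost


def post_anonymous_comment(user_karma: int) -> int:
    """Pay a flat fee to post a comment without attribution; returns remaining karma."""
    return spend_karma(user_karma, ANON_COMMENT_COST)


def boost_post(user_karma: int, points: int) -> int:
    """Pay to add `points` of score to a post, e.g. to counteract downvotes."""
    return spend_karma(user_karma, points * BOOST_COST_PER_POINT)


# Example: a 150-karma user posts anonymously, then buys a +5 boost.
remaining = post_anonymous_comment(150)   # 130 karma left
remaining = boost_post(remaining, 5)      # 120 karma left
```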
Yes, a list of Carl’s best arguments against standard positions is going to be of vastly higher quality than anything we would be likely to get from the best contrarians we can find.
(FWIW Vassar, Carl, and Rayhawk (in ascending order of apparent neuroticism) are traditionally most associated with constructing steel men. (Or as I think Vassar put it, “steel men, adamantium men, magnetic monopolium men”, respectively.))
If it’s less signal but also less noise, it might be better overall. (And if we can’t work out how to get more contrarians, this might be a useful suggestion anyway.)
Sarcasm is hard to respond to, because I don’t know what your actual position is other than “not-that”.
I seriously doubt that was sarcasm.
Mm, on second reading I think you’re right. “Vastly higher quality than anything we would be likely to get from the best contrarians we can find” comes across to me as having too many superlatives to be meant seriously. But “not-sarcastic” fits my model of lukeprog better.
(I was also influenced by it being at −1 when I replied. There’s probably a lesson in contrarianism to be taken from that...)
Keep in mind that we’re talking about Carl Shulman. If you know the guy, it’s pretty obvious that lukeprog was dead serious.