Posts on Less Wrong should focus on getting the goddamned right answer for the right reasons. If the “Less Wrong” and “rationalist” brand names mean anything, they mean that. If something about Snog’s post is wrong—if it proposes beliefs that are false or plans that won’t work, then it should be vigorously critiqued and downvoted.
If the terminology used in the post makes someone, somewhere have negative feelings about the “Less Wrong” brand name? Don’t care; don’t fucking care; can’t afford to care. What does that have to do with maximizing the probability assigned to my observations?
The person I was referring to is a data scientist and effective altruist with a degree from Oxford who now runs their own business. I’m not claiming that they would be an AI safety researcher if not for associations of LW with sexism – but it’s not even that much of a stretch.
I can respect it if you make a utility calculation here that reaches a different result, but the idea that there is no tradeoff, or that it's so obviously one-sided that we shouldn't be discussing it, seems plainly false.
Happy to discuss it. (I feel a little guilty for cussing in a Less Wrong comment, but I am at war with the forces of blandness and it felt appropriate to be forceful.)
My understanding of the Vision was that we were going to develop methods of systematically correct reasoning the likes of which the world had never seen, which, among other things, would be useful for preventing unaligned superintelligence from destroying all value in the universe.
Lately, however, I seem to see a lot of people eager to embrace censorship for P.R. reasons, seemingly without noticing or caring that this is a distortionary force on shared maps, as if the Vision was to run whatever marketing algorithm can win the most grant money and lure warm bodies for our robot cult—which I could get behind if I thought money and warm bodies were really the limiting resource for saving the world. But the problem with “systematically correct reasoning except leaving out all the parts of the discussion that might offend someone with a degree from Oxford or Berkeley” as opposed to “systematically correct reasoning” is that the former doesn’t let you get anything right that Oxford or Berkeley gets wrong.
We already have the Frontpage/Personal distinction to reduce visibility of posts that might scare off cognitive children!
Optimized dating advice isn’t important in itself, but the discourse algorithm that’s too cowardly to even think about dating advice is thereby too constrained to do serious thinking about the things that are important.
I’m too confused/unsure right now to respond to this, but I want to assure you that it’s not because I’m ignoring your comment.