Happy to discuss it. (I feel a little guilty for cussing in a Less Wrong comment, but I am at war with the forces of blandness and it felt appropriate to be forceful.)
My understanding of the Vision was that we were going to develop methods of systematically correct reasoning the likes of which the world had never seen, which, among other things, would be useful for preventing unaligned superintelligence from destroying all value in the universe.
Lately, however, I seem to see a lot of people eager to embrace censorship for P.R. reasons, seemingly without noticing or caring that this is a distortionary force on shared maps, as if the Vision were to run whatever marketing algorithm can win the most grant money and lure warm bodies for our robot cult—which I could get behind if I thought money and warm bodies were really the limiting resource for saving the world. But the problem with “systematically correct reasoning except leaving out all the parts of the discussion that might offend someone with a degree from Oxford or Berkeley” as opposed to “systematically correct reasoning” is that the former doesn’t let you get anything right that Oxford or Berkeley gets wrong.
Optimized dating advice isn’t important in itself, but the discourse algorithm that’s too cowardly to even think about dating advice is thereby too constrained to do serious thinking about the things that are important.
I’m too confused/unsure right now to respond to this, but I want to assure you that it’s not because I’m ignoring your comment.