For most people it doesn’t really matter when they trade away higher values, such as respecting free speech, for better PR.
If you want to design an FAI that respects human values, it does matter. You should practice following human values yourself, and treat situations like this as opportunities for deliberate practice in making the kind of moral decisions an FAI will have to make.
Power corrupts. It’s easy to censor criticism of your own decisions. You should use these opportunities to practice being friendly instead of being corrupted.
In the past, a case of censorship led to bad press for LessWrong. Given that track record, why should we believe that increasing censorship will be good for PR?
In other words, the form of this discussion is not ‘Do you like this?’ - you probably have a different cost function from people who are held responsible for how LW looks as a whole.
The first instinct of an FAI shouldn’t be: “Hey, the cost function of those humans is probably wrong, let’s use a different cost function.”
A few days ago someone wrote a post about how rationalists should make group decisions. I argued that his proposal was unlikely to be implementable effectively.
A decision about what an ideal censorship policy for LessWrong should look like could be made via the Delphi method, in which participants give anonymous input over several rounds and revise their views after seeing the aggregated responses.