Err… did that post end up dying in a free speech happy death spiral?
Especially odd coming from a person who believes in the serious possibility of humanly irresistible bad arguments as a reason not to rely on AI boxing. If there are minds that we can’t allow to exist because they would make bad arguments that we would find persuasive, then preventing their existence seems, from an aggregative utilitarian standpoint, terribly close to killing them.
I’m not an expert on the Rwandan genocide, but it’s my impression that, to a substantial extent, the people behind it basically just made arguments (bad ones, of a primarily ad-hominem form, like “Tutsis are like cockroaches”) for killing the Tutsis, and the people who heard those arguments on the radio went along with them. At least with the benefit of hindsight, I am not reluctant to say that the people promoting that genocide should have been stopped forcibly. Similarly, it’s my impression that Charles Manson didn’t personally kill anyone; he merely told his followers ridiculous stories about what the likely results of killing certain people would be.
It would be nice if, as Socrates claimed, a bad argument could never defeat a good one, but if that were true we wouldn’t need to overcome bias. With respect to our own biases, hopefully careful thought and the study of psychology are the only tools we will ever need, but with respect to the biases of others it would be terribly biased never to consider the possibility that other tools are necessary. We can find good heuristics, like “don’t violently suppress anyone who isn’t actively promoting violence”, but sadly violence isn’t a basic ontological category, so we can’t cleanly divide the world into violent and non-violent actions, nor into statements that do or don’t promote some conclusion (promote in the context of what goal system?).