Especially odd from a person who believes in the probable possibility of humanly irresistible bad arguments as a reason against AI boxing. If there are minds we can’t let exist because they would make bad arguments that we would find persuasive, this seems terribly close, from an aggregative utilitarian standpoint, to killing them.
Fine, let me rephrase: in the human art of rationality, there’s a flat law against meeting arguments with violence, anywhere in the human world. In the superintelligent domain, as you say, violence is not an ontological category, and there is no firm line between persuading someone with a bad argument and reprogramming their brain with nanomachines. In our world, however, there is a firm line.
Let me put it this way: If you can invent a bullet that, regardless of how it is fired, or who fires it, only hits people who emit untrue statements, then you can try to use bullets as part of a Bayesian analysis. Until then, you really ought to consider the possibility of the other guy shooting back, no matter how right you are or how wrong they are, and ask whether you want to start down that road.
If the other guy shoots first, of course, that’s a whole different story that has nothing to do with free speech.