Your hypothetical is a good one. And you are correct: I don’t think you are dishonest if you are sincerely trying to build or sell a perpetual motion machine. You’re still wrong, and even silly, but not dishonest. I need a word to refer to conscious knowing deception, and “dishonest” is the most useful word for the purpose. I can’t let you use it for some other purpose; I need it where it is.
The argument is not applicable to all criminal conduct. In American criminal law, we pay a lot of attention to the criminal’s state of mind. Having the appropriate criminal state of mind is an essential element of many crimes. It’s not premeditated murder if you didn’t expect the victim to die. It’s not burglary if you thought you lived there. It’s utterly routine—and I think morally necessary—to ask juries “what was the defendant’s intention or state of mind?” There is a huge moral and practical difference between a conscious and an unconscious criminal. Education much more easily cures the latter, while punishment is comparatively ineffective. For the conscious criminal, the two are reversed: punishment is often appropriate, whereas education has limited benefits.
I don’t believe I am giving liars a get-out-of-jail-free card. Ignorance isn’t an unlimited defense, and I don’t think it is so easy to convince an outside observer (or a jury) that you’re ignorant in cases where knowledge would be expected. If you really truly are in a state of pathological ignorance and it’s a danger to others, we might lock you up as a precaution, but you wouldn’t be criminally liable.
As to scientific ethics: All human processes have a non-zero chance of error. The scientists I know are pretty cynical about the process. They are fighting to get papers published and they know it. But they do play by the rules—they won’t falsify data or mislead the reader. And they don’t want to publish something if they’ll be caught out having gotten something badly wrong. As a result, the process works pretty much OK. It moves forward on average.
I think SIAI is playing by similar rules. I’ve never seen them caught lying about some fact that can be reliably measured. I’ve never seen evidence that they are consciously deceiving their audience. If they submitted their stuff to a scientific publication and I were the reviewer, I might try to reject the paper, but I wouldn’t think of trying to have them disciplined for submitting it. In science, we don’t accuse people of misconduct for being wrong, or pigheaded, or even for being overly biased by their self-interest. Is there some more serious charge you can bring against SIAI? How are they worse than any scientist fighting for a grant based on shaky evidence?
I think you have a somewhat simplistic idea of justice… there is “voluntary manslaughter”, there’s “gross negligence”, and so on. I think SIAI falls under the latter category.
How are they worse than any scientist fighting for a grant based on shaky evidence?
Quantitatively, and by a huge amount. edit: Also, the beliefs that they claim to hold, when held honestly, result in massive losses of resources, such as moving to a cheaper country to save money, and so on. I dread to imagine what would happen to me if I honestly were this mistaken about AI. Erroneous beliefs damage you.
The lying is about holding two sets of incompatible beliefs and picking between them based on convenience.
edit: To clarify, justice is not about the beliefs held by the person. It is more about the process the person uses to arrive at their actions (see the whole “reasonable person” standard). If A wants to kill B, and A edits A’s beliefs into “B is going to kill me” and then kills B in self-defense, then, if the justice system had a log of A’s processing, A would go down for premeditated murder, even though at the moment of the killing A is honestly acting in self-defense. (Furthermore, lacking direct neurophysiological access, it is a fact of reality that the justice system can only act on the inputs and outputs of agents.)