If you predictably have no ethics when the world is at stake, people (including your allies!) who know this won’t trust you when you think the world is at stake. That could also get everybody killed.
(Yes, this isn’t going to make the comfortably ethical option always correct, but it’s a really important consideration.)
Note to any readers: This subthread is discussing the general, unambiguously universal claim conveyed by a particular Eliezer quote. It carries no connotations for the AGI prevention fiasco beyond the rejection of that particular soldier, as it is used here or anywhere else.
If you predictably have no ethics when the world is at stake, people who know this won’t trust you when you think the world is at stake. That could also get everybody killed.
I appreciate ethics. I’ve made multiple references to the ‘ethical injunctions’ post in this thread and tend to do so often elsewhere—I rate it as the second most valuable post on the site, after ‘subjectively objective’.
Where people often seem to get confused is in conflating ‘having ethics’ with being nice. There are situations where not shooting at people is itself an ethical violation. (Think of neglecting one’s duty when personal risk is involved.) Pacifism is not intrinsically ethically privileged.
The problem with the rule:
“Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.”
… is not that it advocates doing the Right Thing even in extreme scenarios. The problem is that it advocates doing the Wrong Thing. It is unethical, and having people know that you will follow this particular rule is dangerous and generally undesirable.
Bullets are an appropriate response in all sorts of situations where power is involved. And arguments are power. They don’t say “the pen is mightier than the sword” for nothing.
Let’s see… five seconds’ thought… consider a country in which one ethnic group has enslaved another. Among the dominant group there is a conservative public figure who is a powerful orator with a despicable agenda. Say… he advocates the killing of slaves who are unable to work, the castration of all the males, and the use of the females as sex slaves. Not entirely implausible as far as atrocities go. The arguments he uses are either bad or Bad, yet he is rapidly gaining support.
What is the Right Thing To Do? It certainly isn’t arguing with him; that’ll just end with you being ‘made an example of’. The bad arguments are an application of power and must be treated as such. The ethical action to take is to assassinate him if at all possible.
“Never. Never ever never for ever.” is just blatantly and obviously wrong. There is no excuse for Eliezer to make that kind of irresponsible claim: he knows people are going to get confused by it and quote it, propagating the error.
I agree with everything in this comment (subject to the disclaimer in the first paragraph, and possibly excepting the strength of the claim in the very last sentence), and appreciate the clarification.
(I suspect we still disagree about how to apply ethics to AI risks, but I don’t feel like having that argument right now.)
I agree with everything in this comment (subject to the disclaimer in the first paragraph, and possibly excepting the strength of the claim in the very last sentence), and appreciate the clarification.
I’m not entirely sure I agree with the strength of the claim in my last sentence either. It does seem rather exaggerated. :)