It’s relatively safe to be around an Eliezer Yudkowsky while the world is ending, because he’s not going to do anything extreme and unethical unless it would really actually save the world in real life, and there are no extreme unethical actions that would really actually save the world the way these things play out in real life, and he knows that.
This reads like a call to violence for anyone who is consequentialist.
It’s saying that either you make a rogue AI “that kills lots of people and is barely contained”, or unfriendly AGI happens and everyone dies. I think the intended conclusion is “and therefore you shouldn’t be a consequentialist” rather than “and therefore you should make a rogue AI”, but it’s not entirely clear.
And I don’t think the “either” statement holds, because it ignores other options and ignores the high chance that the rogue AI isn’t contained. So you end up with “a poor argument, possibly in favor of making a rogue AI”, which seems optimized to get downvotes from this community.
No, it doesn’t mean you shouldn’t be a consequentialist. I’m challenging people to point out the flaw in the argument.
If you find the argument persuasive, and think the ability to “push the fat man” (without getting LW tangled up in the investigation) might be a resource worth keeping, the correct action to take is not to comment, and perhaps to downvote.
There’s a difference between abstract philosophical discussion and calls to action, and I think most people feel that this leans too far towards the latter.
Not sure why people silently downvote this; it’s a common pitfall in consequentialism and is discussed here often, most recently by Eliezer in https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy in Q4.