It’s worth noting that pretty much every consequentialist since J.S. Mill has stressed the importance of inculcating generally-reliable dispositions / character traits, rather than attempting to explicitly make utility calculations in everyday life. It’s certainly a good recommendation, but it seems misleading to characterize this as in any way at odds with the consequentialist tradition.
It might be worth presenting Will with a dilemma that drives a wedge between a particular virtue and some consequence he cares about. E.g. suppose that the only way to fund saving the world is by becoming a gangster and inculcating the vices of revenge, mercilessness and the love of money in yourself.
This is a useful dilemma. What are some possible motivations for refusing to become a gangster?

1. You don’t really care about saving the world; the only consequence that actually matters to you is being a nice person.

2. You don’t trust your conclusion that Operation: Gangsta will save the world; you place so much heuristic faith in virtues that you actually expect any calculation that outputs a recommendation to become a gangster to be fatally flawed.

3. You don’t trust your values not to evolve away from saving the world if you become a gangster; it might be impossible or extremely risky to save the world by thugging out, because being a thug makes you care less about saving the world; you might have a career of evil and then just spend the proceeds on casinos, hitmen, and mansions.
The second and the third are the most convincing reasons, but EY already explained how both fall out of using deontology, rather than virtue ethics, as a heuristic for handling the fact that you are a consequentialist running on corrupt hardware. This calls into question how much insight Will_Newsome has provided with this article.
His point in that article, if you’ll recall, is that deontology is consequentialism, just one meta-level up and with the knowledge that your hardware distorts your moral cognition in predictable ways.
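To put the second motivation (and the corrupted-hardware point) in numbers, here is a minimal sketch of my own, with an invented helper function and made-up payoffs, so treat it as an illustration rather than anyone’s actual model: once you assign high odds that any calculation whose output is “become a gangster” is rationalization, the corrected expected value of that plan can drop below the boring alternative’s.

```python
# Toy illustration (my numbers, not from the original posts) of discounting a
# plan's claimed value by the chance that the calculation recommending it is
# itself the product of corrupted hardware.

def corrected_expected_value(claimed_value, p_calculation_flawed, value_if_flawed):
    """Expected value after allowing for the chance the calculation is wrong."""
    return (1 - p_calculation_flawed) * claimed_value + p_calculation_flawed * value_if_flawed

# The gangster plan: your explicit calculation says it is hugely valuable, but a
# self-serving, norm-violating recommendation is exactly the kind of output you
# should expect your hardware to rationalize.
gangster_plan = corrected_expected_value(
    claimed_value=1000,
    p_calculation_flawed=0.75,   # 3-to-1 odds the calculation is rationalization
    value_if_flawed=-500,        # the harm done if it was in fact rationalization
)

# The mundane plan: modest claimed value, but little incentive to fool yourself.
mundane_plan = corrected_expected_value(
    claimed_value=10,
    p_calculation_flawed=0.25,
    value_if_flawed=0,
)

print(gangster_plan)  # -125.0
print(mundane_plan)   # 7.5
```

The particular numbers are arbitrary; the point is that the correction term bites hardest for exactly the class of actions your hardware is most tempted to rationalize, which is what the deontological injunction is tracking.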
The problem is that becoming a gangster strikes me, just on pragmatic grounds, as a very bad way to fund saving the world, so all these motivations are hard to evaluate.
Sure, but try to cope with the dilemma as best you can. If you can think of a better example, great! If not, try to imagine a situation where being a gangster would be pragmatic. Maybe you’re the godfather’s favorite child, recently returned from the military and otherwise unskilled. Maybe you live in a dome on a colony planet that is essentially one big corrupt city, and ordinary entrepreneurship doesn’t pay off properly. Maybe you’re a member of a despised or even outlawed ethnicity in medieval times, and no one will sit still to listen to your brilliant ideas about how to build better water mills and eradicate plague unless you first establish yourself as a powerful and wealthy fringe figure.
In general, when trying to evaluate an argument that you’re initially inclined to disagree with, you should try to place yourself in The Least Convenient Possible World for refuting that argument. That way, if you still manage to refute the argument, you’ll at least have learned something. If you stop thinking when the ordinary world doesn’t seem to validate a hypothesis that you didn’t believe in to begin with, you don’t really learn anything.
There isn’t much of a dilemma if you assume there are some states worse than death. Eternal torture is less preferable than non-existence. A malicious world of pain and vice is less preferable than a non-existent world. By becoming a malicious, vice-filled person you are moving the world in the direction of being worse than non-existent, and thus are defeating your stated goal. You are doing more to destroy the world than to save it.
Consider the least convenient possible world.

The least convenient possible world is one with superhumanly intelligent AIs that can have complete confidence in their source code, and predict with complete confidence that these means (thuggishness) will in fact lead to those ends (saving the world).
However, in that world the world has already been saved (or destroyed), and so this is not relevant. In any relevant world, the actor who is resorting to thuggishness to save the world is a human running on hostile hardware, and would be stupid not to take that into consideration.
Then it isn’t the LCPW.

I consider the “P” in LCPW to be important. If the agents in question are post-human, then it’s too late to worry about saving the world. If you still have to save the world, then standard human failure modes do apply.
I would do what sounded like the consequentialist thing to do and become a gangster. Not only would I be saving the world, but I’d also be pretty badass if I was doing it right. Rationalists should win when possible and whatnot. Consequentialism-ism is the key Virtue.
Being badass is a close second.

Isn’t this just Indirect Consequentialism?