I recently had a very interesting conversation about master morality and slave morality, inspired by the recent AstralCodexTen posts.
The position I eventually landed on was:
1. Empirically, it seems like the world is not improved the most by people whose primary motivation is helping others, but rather by people whose primary motivation is achieving something amazing. If this is true, that’s a strong argument against slave morality.
2. The defensibility of morality as the pursuit of greatness depends on how sophisticated our cultural conceptions of greatness are. Unfortunately we may be in a vicious spiral where we’re too entrenched in slave morality to admire great people, which makes it harder to become great, which gives us fewer people to admire, which… By contrast, I picture past generations as being in a constant aspirational dialogue about what counts as greatness—e.g. defining concepts like honor, Aristotelian magnanimity (“greatness of soul”), etc.
I think of master morality as a variant of virtue ethics which is particularly well-adapted to domains which have heavy positive tails—entrepreneurship, for example. However, in domains which have heavy negative tails, the pursuit of greatness can easily lead to disaster. In those domains, the appropriate variant of virtue ethics is probably more like Buddhism: searching for equanimity or “green”. In domains which have both (e.g. the world as a whole), the closest thing I’ve found is the pursuit of integrity and attunement to oneself. So maybe that’s the thing that we need a cultural shift towards understanding better.
If the following correlations hold, then the opposite may be true (slave morality having been better for improving the world throughout history):
Improving the world being strongly correlated with economic growth (this is probably less true when x-risks are significant)
Economic growth being strongly correlated with entrepreneurship incentives (property rights, autonomy, fairness, meritocracy, low rents)
Master morality being strongly correlated with acquiring power, which decreases the power of others and thus weakens their entrepreneurship incentives
That all sounds very plausible. But isn’t this all mostly relevant before AGI is a possibility? AGI would be a heavy negative tail risk, one in which people motivated to “do great things” are quite prone to get us all killed. Should we survive that risk, progress probably mostly won’t be driven by humans, so humans doing great things will barely count. And if humans are actually still in charge when we hit ASI, it seems like doing great things with ASIs will probably still carry large tail risks (inter-ASI wars).
Right? Or do you see it differently?
It’s a fascinating empirical claim, and one that sounds right now that I hear it.
AGI is heavy-tailed in both directions, I think. I don’t think we get utopias by default even without misalignment, since governance of AGI is so complicated.
Re: your point #2, there is another potential spiral where abstract concepts of “greatness” are increasingly defined in a hostile and negative way by partisans of slave morality. This might make it harder to have that “aspirational dialogue about what counts as greatness”, as it gets increasingly difficult for ordinary people to even conceptualize a good version of greatness worth aspiring to. (“Why would I want to become an entrepreneur and found a company? Wouldn’t that make me an evil big-corporation CEO, which has a whiff of the same flavor as stories about the violent, insatiable conquistador villains of the 1500s?”)
Of course, there are also downsides when culture paints a too-rosy picture of greatness—once upon a time, conquistadors were in fact considered admirable!