Roko, you make a good point that it can be quite murky just what realism and anti-realism mean (in ethics or in anything else). However, I don’t agree with what you write after that. Your Strong Moral Realism is a claim that is outside the domain of philosophy, as it is an empirical claim in the domain of exo-biology or exo-sociology or something. No matter what the truth of a meta-ethical claim is, smart entities might refuse to believe it (the same goes for other philosophical or mathematical claims).
Pick your favourite philosophical claim. I’m sure there are very smart possible entities that don’t believe this and very smart ones that do. There are probably also very smart entities without the concepts needed to consider it.
I understand why you introduced Strong Moral Realism: you want to be able to see why the truth of realism would matter and so you came up with truth conditions. However, reducing a philosophical claim to an empirical one never quite captures it.
For what it’s worth, I think that the empirical claim Strong Moral Realism is false, but I wouldn’t be surprised if there were considerable agreement among radically different entities on how to transform the world.
Pick your favourite philosophical claim. I’m sure there are very smart possible entities that don’t believe this and very smart ones that do.
If there’s a philosophical claim that intelligent agents across the universe wouldn’t display massive agreement on, then I don’t really think it is worth its salt. I think that this principle can be used to eliminate a lot of nonsense from philosophy.
Which of anti-realism or weak realism is true seems to be a question we can eliminate. Whether strong realism is true seems substantive, because which way it comes out matters to our policy.
However, reducing a philosophical claim to an empirical one never quite captures it.
There are clearly some cases where there are interesting things to say that aren’t really empirical, e.g. decision theory or the mystery of subjective experience. But I don’t think this is one of them.
Suffice it to say I can’t think of anything that makes the debate between weak realism and anti-realism at all interesting or worthy of attention. Certainly, Friendly AI theorists ought not to care about the difference, because the empirical claims about what an AI system will do are identical. Once the illusions and fallacies surrounding rationalist moral psychology have been debunked, proponents of AI motivation methods other than FAI also ought not to care about the weak realism vs. anti-realism pseudo-question.
I’m having trouble reconciling this with the beginning of your first comment:
These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam’s Razor.