The Yudkowskian response is to point out that when cognitivists use the term ‘good’, their intuitive notion of ‘good’ is captured by a massive logical function that can’t be expressed in simple statements.
This is the weakest part of the argument. Why should anybody believe that there is a super complicated function that determines what is ‘good’? What are the alternative hypotheses?
I can think of a much simpler hypothesis that explains all of the relevant facts. Our brains come equipped with a simple function that maps “is” statements to “ought” statements. Thus, we can reason about “ought” statements just like we do with “is” statements.
The special thing about this function is that there is nothing special about it at all. It is absolutely trivial. Any “ought” statement can potentially be inferred from any “is” statement. Therefore, “ought” statements can never be conditioned on evidence. This explains not only why there is so much disagreement among people about what is “good” and why our beliefs about what constitutes “good” are so complicated, but also why there is no way to resolve these disagreements.
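To make the structure of this claim concrete, here is a minimal toy sketch in Python (my own illustration; the statements and names are invented, not from the original argument). It encodes the “trivial mapping” idea: if the is→ought function licenses every inference, then no amount of “is” evidence can ever rule out any “ought” claim, which is exactly why the disagreements would never resolve.

```python
# Toy sketch (illustrative only): a "trivial" is->ought mapping under
# which every ought-statement is inferable from every is-statement.
from itertools import product

# Hypothetical example statements, invented for illustration.
is_statements = ["water boils at 100 C", "humans feel pain"]
ought_statements = ["one ought to keep promises",
                    "one ought to break promises"]

def supports(is_stmt: str, ought_stmt: str) -> bool:
    """The hypothesized trivial mapping: it places no constraint at all."""
    return True

# Consequence: every "ought" survives every piece of "is" evidence,
# including mutually contradictory oughts, so observation can never
# adjudicate between them.
for is_stmt, ought_stmt in product(is_statements, ought_statements):
    assert supports(is_stmt, ought_stmt)
print("No 'is' evidence eliminates any 'ought' claim.")
```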
The situation is more complex than a set of objective moral truths that everyone agrees on, but it is also more complex than complete divergence. There is a basis for convergence on the wrongness of murder, theft, and some other things. Complete divergence would mean it would be irrational to even try to find common ground and enshrine it in law.
This is the weakest part of the argument. Why should anybody believe that there is a super complicated function that determines what is ‘good’? … Our brains come equipped with a simple function that maps “is” statements to “ought” statements. Thus, we can reason about “ought” statements just like we do with “is” statements.
I think the claim isn’t that there is a super-complicated function that determines what is ‘good’, but that the mapping from ‘is’ statements to ‘ought’ statements in the human brain is extremely complicated. If we claim that what is ‘good’ is what our brain considers ‘good’, though, we merely encapsulate this complexity in a convenient black box.
That’s not to say that it’s not a solution, though: have you looked into desire utilitarianism? What you’re proposing here is really similar to what (as I understand it) that school of moral philosophy claims. If you have time, Fyfe’s A Better Place is a good introduction.