Because I want to be sure that I understand the claim you’re making.
The Convergentist would want to claim:
“To assert the Orthogonality Thesis is to assert that no matter how intelligent and rational an agent, no matter the breadth of its understanding, no matter the strength of its commitment to objectivity, no matter its abilities to self-reflect and update, it would still never realise that making huge numbers of paperclips is arbitrary and unworthy of its abilities”
Okay...so I agree with the Convergence theorist on what the implications of the Orthogonality Thesis are, and I still think the Orthogonality Thesis is true.
if it relates to all or most or typical rational intelligent agents, because that is how moral realists define their claim
Hold on now...that makes the claim completely different than what I thought we were talking about up till now. I thought we were talking about whether or not all rational agents would be in agreement about what morality is, independent of specifically human preferences?
We can have the other discussion too...but not before settling whether or not the Orthogonality Thesis is in fact true “in principle”. Remember, we originally started this discussion with my claim that morality is feelings/preference, as opposed to something you could figure out (i.e., something embedded in logic/game theory or the universe itself). We weren’t originally talking about rational agents to shed light on evolution or plausible AI...we brought them in as hypothetical agents who converge upon the correct answer to any answerable question, to explore whether or not “what is good” is independent from “what do humans think is good”.
I don’t see how. What did you think we were talking about?
I thought we were talking about whether morality was something that could be discovered objectively.
I said:
Morality comes from the “heart”. It’s made of feelings.
Then you said:
People use feelings/System1 to do morality. That doesn’t make it an oracle. Thinking might be more accurate.
Then I said:
Accurate? How can you speak of a moral preference being “accurate” or not? Moral preferences simply are.
You disagreed, and said:
Moral objectivism isn’t obviously wrong,
To which I countered:
all rational agents will converge upon mathematical statements, and will not converge upon moral statements.
You disagreed:
morality could work like convergence on mathematical truth.
Which is why
I thought we were talking about whether or not all rational agents would be in agreement about what morality is, independent of specifically human preferences?
Hence
if it relates to all or most or typical rational intelligent agents
doesn’t make any sense in our discussion. All rational agents converge on mathematical and ontological facts, by definition. My argument was that there is no such thing as a “moral fact”, and moral statements can only be discussed in reference to the psychology of a small set of creatures which includes humans and some other mammals. I argued that moral statements can’t be “discovered” to be true or false in any ontological or mathematical sense, nor are they deeply embedded in game theory (meaning it is not always in the interest of all rational agents to follow human morality) - even though game theory does explain how we evolved morality, given our circumstances.
If you admit that at least one of all possible rational agents doesn’t converge upon morality, you’ve been in agreement with me this entire time—which means we’ve been talking about different things all along...so what did you think we were talking about?
All rational agents converge on mathematical and ontological facts, by definition.
Only by a definition whereby “rational” means “ideally rational”. In the ordinary sense of the term, it is perfectly possible for someone who is deemed “rational” in a more-or-less, good-enough sense to fail to understand some mathematical truths. The existence of the innumerate does not disprove the objectivity of mathematics, and the existence of sociopaths does not disprove the objectivity of morality.
If you admit that at least one of all possible rational agents doesn’t converge upon morality,
Do you believe that it is possible for a rational agent to fail to understand a mathematical truth? Because that seems rather commonplace to me. Unless you mean ideally rational....
I did mean ideally rational.
The whole point of invoking an ideal rational agent in the first place was to demonstrate that moral “truths” aren’t like empirical or mathematical truths: you can’t discover them objectively through philosophy or mathematics (even if you are infinitely smart). Rather, moral “truths” are peculiar to humans.
If you want to illustrate the non-objectivity of morality, then stating that even ideal rational agents won’t converge on moral truths is one way of expressing the point, although it helps to state the “ideal” explicitly. However, that is still only the expression of a claim, not the “demonstration” of one.