Can you explain why this is true? (...) But ceteris paribus, when thinking about a topic for the first time, I’d expect the more intelligent person to be at least as accurate as I am.
Intelligence as in “reasoning capability” does not necessarily lead to similar values. As such, arguments that reduce to different terminal values aren’t amenable to compromise. “At least as accurate” doesn’t apply, regardless of intelligence, if Fare just states “because I prefer a slower delta of change”. This topic is an ought-debate, not an is-debate.
I’d certainly agree there is some correlation between intelligence and pursuing more “enlightened”/trimmed-down (whatever that means) values, but the immediate advantage intelligence confers isn’t in setting those goals, it is in achieving them. If it turned out that the OP just likes his change in smaller increments (à la “I don’t like to constantly adapt”), there’s little that can be said against that, other than “well, I don’t mind radical course corrections”.
but the immediate advantage intelligence confers isn’t in setting those goals, it is in achieving them.
The goals that are sufficiently well defined for lower intelligence may become undefined for higher intelligence. Furthermore, in any accepted metric of intelligence, such as an IQ test, we do not consider a person’s tendency to procrastinate when trying to attain his stated goals to be part of ‘intelligence’. And there is more than one dimension to it: if you give a person some hallucinogenic drug, you’ll observe an outcome very distinct from a simple diminishment of intelligence.
Or, in an AI: if you rely on a self-contradictory axiomatic system whose minimum length of proof to a self-contradiction is L, the intelligences that cannot explore past L behave just fine, while those that explore past L end up being able to prove a statement and its opposite. That may be happening in humans with regard to morality. If the primal rules, or the rules of inference, are self-contradictory, that incapacitates the higher reasoning and leaves the decisions to much less intelligent subsystems, with the intelligence only able to rationalize any action. Or the decision ends up depending on which of A or ~A has the shorter proof, or which proof invokes items that accidentally got cross-wired to some sort of feeling of rightness. Either way the outcome looks bizarre and stupid.
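A minimal sketch of that depth-limit point (my own toy illustration, not anything from the original comment; the axiom set, the rule chain, and the helpers derive and is_consistent are all invented for the example): a forward-chaining reasoner over an inconsistent rule set in which the contradiction only becomes derivable after L = 4 inference steps. A reasoner capped below that depth never notices the problem; one that explores further derives both “A” and “not A”.

```python
# Toy illustration only: facts are plain strings, and "not A" stands for the
# negation of "A". The rule chain hides a contradiction L = 4 steps deep.
AXIOMS = {"p0", "A"}
RULES = [
    (frozenset({"p0"}), "p1"),
    (frozenset({"p1"}), "p2"),
    (frozenset({"p2"}), "p3"),
    (frozenset({"p3"}), "not A"),  # only reachable after four inference steps
]

def derive(max_depth):
    """Forward-chain for at most max_depth rounds; return every derived fact."""
    known = set(AXIOMS)
    for _ in range(max_depth):
        new = {concl for prem, concl in RULES if prem <= known and concl not in known}
        if not new:
            break
        known |= new
    return known

def is_consistent(facts):
    """True unless some fact and its negation were both derived."""
    return not any("not " + f in facts for f in facts)

print(is_consistent(derive(3)))   # True  -- the shallow reasoner looks fine
print(is_consistent(derive(10)))  # False -- it now holds both "A" and "not A"
```

The shallow reasoner’s apparently sane behaviour says nothing about the consistency of its axioms; the deeper one is simply the first to expose the flaw.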
Intelligence as in “reasoning capability” does not necessarily lead to similar values
Agreed. That’s why I said “ceteris paribus”—it’s clear that you shouldn’t necessarily trust someone with different terminal values to make a judgement about terminal values. I was mostly referring to factual claims.