I don’t think the question is whether intelligence is objective, but whether it’s linear and one-dimensional. I suspect the orthogonality thesis is getting some evidence from GPT models, in that they seem intelligent along many dimensions, but their goals are alien (or perhaps nonexistent).
Yes, but none of the potential readers of this post will think intelligence is one-dimensional, so pointing that out wouldn’t educate anyone. I disagree with the notion that “good writing” is about convincing the reader that I’m a good reasoner. The reader should be thinking “is there something interesting I can learn from this post?”, but usually there’s a lot of “does this author demonstrate sufficient epistemic virtue for me to feel OK admitting to myself that I’ve learned something?”
Good writing means not worrying about justifying yourself, and efficient reading means caring only about what you can learn, not about what you aren’t learning.
“Rule Thinkers In, Not Out” ⇒ “Rule Ideas In And Don’t Judge Them By Association”