The central theme of what you’ve written here is known locally as Egan’s Law, and as applied to metaethics, means, very roughly, that ethical systems should never deviate far from what we already understand to be ethical, or lead to transparently unethical decisions. http://lesswrong.com/lw/sk/changing_your_metaethics/
And, uh. You may want to consider deleting this and starting over from scratch, with a little less rage and a little more purpose. It’s not immediately apparent, even to me, a sympathetic audience from the Objectivist perspective, what you’re trying to advance except a vague anti-intellectualism. It comes off more than a little as thumbing your nose at people smarter than you, on the basis that smart people have done dumb things before. (I can’t even say that you’re thumbing your nose at people for -thinking- they’re smarter than you, as you seem to suggest that intelligence is itself a fault, justifiable only by its support of the status quo, judging by your comments using Hayek.)
And then there’s the bullshit:
And then I had to find that my naive theory of intelligence didn’t hold water: intelligent people were just as prone as less intelligent people to believing in obviously absurd superstitions.
‘Just as prone’? I would be fascinated to see any evidence beyond the anecdotal for this...
Actually, this kind of reminds me of Stanovich’s Dysrationalia and also of Eliezer’s “Outside the laboratory”, only more uncompromising and extreme than either. Then again, I tend to have a charitable interpretation of what people write.
The problem is, Stanovich’s work (going by his 2010 book, which I have) doesn’t support the thesis that intelligent people have more false beliefs or biases than stupid people, or even just as many; they have fewer in all but a bare handful of carefully chosen biases, where they’re equal or a little worse.
If one had to summarize his work and the associated work in these terms, one could say that it’s all about the question ‘why does IQ not correlate at 1.0 with better beliefs, but instead at 0.5 or lower?’
No, no, no. The point is: for any fixed set of questions, higher IQ will be positively correlated with believing in better answers. Yet people with higher IQ will develop beliefs about new, bigger and grander questions; and all in all, on their biggest and grandest questions, they fail just as much as lower-IQ people on theirs. Just with more impact. Including more criminal impact when these theories, as they are wont to do, imply shepherding (and often barbecuing) the mass of their intellectual inferiors.
Stupid people seem to have no problem believing in answers to the biggest and grandest questions, like ‘it’s the Jews’ fault’ or ‘God loves me’.
Judging by this post, I’m not sure I share enough of a common definition of intelligence with the guy for his statement to even be meaningful to me.
Even so, I suspect that the capacity to rationalize effectively generally exists in roughly direct proportion to the capacity to engage in effective rational thought, so I have to confess that it doesn’t come into any apparent conflict with my priors using my own definition.
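The fixed-questions vs. self-chosen-questions point above can be sketched as a toy simulation (my own illustrative model, not Stanovich’s data; the logistic response curve and every parameter here are assumptions):

```python
import math
import random

random.seed(0)

def p_correct(ability, difficulty):
    # Logistic response: chance of reaching a good answer falls as
    # question difficulty outstrips ability.
    return 1 / (1 + math.exp(-(ability - difficulty)))

people = [random.gauss(0, 1) for _ in range(10_000)]

# Fixed question set: everyone faces the same difficulty (0),
# so ability predicts accuracy.
fixed = [p_correct(a, 0) for a in people]

# Self-selected questions: each person picks problems roughly as hard
# as they can handle, so difficulty tracks ability (plus some noise)
# and accuracy flattens out across the ability range.
chosen = [p_correct(a, a + random.gauss(0, 0.5)) for a in people]

def corr(xs, ys):
    # Plain Pearson correlation.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

print(round(corr(people, fixed), 2))   # strongly positive
print(round(corr(people, chosen), 2))  # near zero
```

Under these made-up assumptions, ability correlates strongly with accuracy on the shared question set, while on self-matched questions everyone fails at about the same rate, which is the shape of the claim being argued over.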
that ethical systems should never deviate far from what we already understand to be ethical
That’s the problem with conceiving of future ethical systems. Apparently, a good amount of human values has to be kept around as a precaution, even if we model computational minds with variable parameters such as no social interaction with other people (in scenarios where the local economy is composed only of copies of one person) or the maintenance of weird religions.