Lots of smart people believe in post-modern philosophy that denies a physical reality. They have lots of great arguments. Should you, as someone who doesn’t have time to understand all the arguments, believe them?
With how you phrased that question, no, because “lots of smart people believe in X” is trivially true.
I think you’re attempting to draw a parallel that doesn’t exist. What the guy from the article said is “all those people are crazy smart, that’s why they’re wrong.” Are people who believe in post-modern philosophy more intelligent than me on average? If not, then there’s no issue.
But there is a set of intellectuals who believe in post-modernism and who are crazy smart in comparison to the average person (this is not a high bar). Should the average person believe in post-modernism, even if they can’t argue against it? You can point at lots of different intellectual movements that have gone weird ways (Heracliteans, Pythagoreans, Marxists, Objectivists, Solipsists, Idealists, etc.). It is very easy to be wrong. I think you should only go off the beaten track if you are also crazy smart and prepared to be wrong.
Frankly, I think that most people have no business having confident beliefs about any controversial topic. It’s a bit weird to argue about what an average-IQ person “should” believe, because applying a metric like “what is the average IQ of people holding this belief” is not something they’re likely to do. But it would probably yield better results than whatever algorithm they’re using.
Your first sentence isn’t really a sentence, so I’m not sure what you were trying to say. I’m also not sure if you’re talking about the same thing I was talking about, since you’re using different words. I was talking specifically about the mean IQ of people holding a belief. Is this in fact higher or not?
I concede the point (not sure if you were trying to make it) that a high mean IQ of such a group could be due to filter effects. Let’s say A is the set of all people, B ⊂ A the set of all people who think about Marxism, and C ⊂ B the set of all people who believe in Marxism. Then, even if the mean IQs of B and C are the same, meaning that believing in Marxism is not correlated with IQ among those who know about it, the mean IQ of C would still be higher than that of A, because the mean IQ of B is higher than that of A: people who even know about Marxism are already smarter, on average, than those who don’t.
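To make that filter effect concrete, here is a minimal simulation sketch. All the numbers are made up purely for illustration: IQ is drawn from N(100, 15), awareness of the topic is assumed to rise with IQ, and belief is assumed to be a coin flip that ignores IQ entirely.

import random

random.seed(0)

# A: the whole population, IQ ~ N(100, 15) (hypothetical numbers, purely illustrative)
A = [random.gauss(100, 15) for _ in range(100_000)]

# B: people who think about the topic at all -- assume awareness rises with IQ
B = [iq for iq in A if random.random() < min(1.0, max(0.0, (iq - 85) / 60))]

# C: believers, drawn from B with a probability that ignores IQ entirely
C = [iq for iq in B if random.random() < 0.3]

mean = lambda xs: sum(xs) / len(xs)
print("mean IQ of A (everyone):       %.1f" % mean(A))
print("mean IQ of B (aware of topic): %.1f" % mean(B))
print("mean IQ of C (believers):      %.1f" % mean(C))
# B and C come out with roughly the same mean, both well above A:
# "believers are smarter than average" holds even though belief is
# uncorrelated with IQ among the people who know about the topic.

The selection happens entirely at the B step, so comparing C to A on its own tells you nothing about whether belief tracks intelligence among the informed.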
So that effect is real and I’m sure applies to AI. Now if the claim is just “people who believe in the singularity are disproportionately smart” then that could be explained by the effect, and maybe that’s the only claim the article made, but I got the impression that it also claimed “most people who know about this stuff believe in the singularity” which is a property of C, not B, and can’t be explained away.
I didn’t think you were talking about means of two different populations… I was mainly making the point that a population of people smarter than a given individual believing in an idea isn’t great evidence for that idea, for that individual.
but I got the impression that it also claimed “most people who know about this stuff believe in the singularity”
I didn’t get that impression. But if you expand on what you mean by “stuff”, we can try and get evidence for it one way or another.
Which of the arguments do you consider to be great? Where do you think it takes a lot of time to understand the arguments well enough to reject them?
At least some of the arguments offered by Richard Rorty in Philosophy and the Mirror of Nature are great. Understanding the arguments takes time because they are specific criticisms of a long tradition of philosophy. A neophyte might respond to his arguments by saying “Well, the position he’s attacking sounds ridiculous anyway, so I don’t see why I should care about his criticisms.” To really appreciate and understand the argument, the reader needs to have a sense of why prior philosophers were driven to these seemingly ridiculous positions in the first place, and how their commitment to those positions stems from commitment to other very common-sensical positions (like the correspondence theory of truth). Only then can you appreciate how Rorty’s arguments are really an attack on those common-sensical positions rather than on some outré philosophical ideas.
Omg I love you, thanks for promoting Rorty’s work on this platform
I meant great in the sense of voluminous and hard to pin down where they are wrong (for anyone apart from other philosophers skilled in wordplay). Take one of the arguments from an idealist whom I think underpins post-modernism, Berkeley:
(1) We perceive ordinary objects (houses, mountains, etc.).
(2) We perceive only ideas.
Therefore,
(3) Ordinary objects are ideas.
I’m not going to argue for this. I’m simply going to argue that, for a non-philosopher, this form of argument is very hard to distinguish from the stuff in Superintelligence.