EY argues: “… your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.”
and you respond by saying that there have been people smarter than Eliezer who have suffered rationality failures when working outside their domain? Isn’t that kinda the point?
EY wasn’t arguing “My IQ is so damn high that I just have to be right. Look at my ability to generate novel hypotheses! It clearly shows high IQ!”, which would indeed be foolish. It is understood here that high innate intelligence is not the same as real-world effectiveness, which requires being intelligent about how you use your intelligence.
The object of the game here is to evaluate hypotheses which have already been generated (i.e. SIAI claims). EY was showing that there are many very smart people who can’t even evaluate the MWI hypothesis when it is handed to them along with slam-dunk evidence.
If you can’t even get the right answer on simple questions, how the heck are you supposed to do better on tough problems than those who see the simple problems as, well… simple?
EDIT: It seems like my point did not come off clearly. I am not arguing that it is not an appeal to authority.
I am arguing that high IQ is different from “has lots of knowledge” which is different from “knows the fundamental rules of how to weigh evidence and evaluate claims”, and that Eliezer was talking about the last one.
More specifically, XiXiDu’s whole point was “how do I evaluate this if, instead of addressing the arguments behind it, I talk about who believes it and who doesn’t?” If that’s the argument, it’s fair enough for Eliezer to ask them to assess the rationality of the people whose opinions are being weighed.
More specifically, my point regarding other people’s beliefs was that there are people who know about the topic of superhuman AI and the associated risks but who, judging by their smaller or nonexistent campaigns to prevent those risks, came to different conclusions.
Reference: The Singularity: An Appraisal (Video) - Alastair Reynolds, Vernor Vinge, Charles Stross, Karl Schroeder
In the case of AI researchers like Marvin Minsky, amongst others, it should be reasonable to infer knowledge of the possible risks from their overall familiarity with the topic.
Many scientists disregard speculation concerning the interpretation of quantum mechanics because it yields no additional predictions, i.e. it is not subject to empirical criticism.
I disagree based on the following evidence:
http://xixidu.net/lw/05.png “At present I do not know of any other person who could do that.” (Reference)
A hypothesis based on shaky conclusions, not on previous evidence.
You keep posting screenshots of Roko’s deleted post, with the “forbidden” parts blacked out. I agree that the whole matter could have been handled much better, but I don’t see how it or the other quoted line bears on the interpretation of the sentence quoted at the top of jimmy’s post. Also, people have asked you several times to stop reminding them of the deleted post, and the need for quotes proving that EY thinks highly of his intelligence can be satisfied without doing that. Seriously, they’re everywhere.
XiXiDu argues: “… your smart friends at Less Wrong and favorite rationalists like EY are not remotely close to the rationality standards of other people out there (yeah, there are other smart people, believe it or not), and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.”
You keep telling me that my arguments are no evidence for what I’m trying to prove. Other people asked me several times not to make up fantasies of AI-Gods kicking their testicles. But if you want to be upvoted, the winning move is just to go think about something else. So take my word for it, I know more than you do, no really I do, and SHUT UP.
I actually feel embarrassed just from reading that.
See the edit to the original comment.
If only claims (1) and (2) had been critically analyzed in detail on Less Wrong or the SIAI website, I would find your comment compelling. Given that no such detailed analysis of these significant claims has been made or released, I believe Eliezer’s remarks are properly conceptualized as an appeal to authority.
I totally agree that it’s an appeal to authority. My point was that it’s an appeal to a different and more relevant kind of authority.
Do you disagree with “Just as Grothendieck’s algebro-geometric achievements had no bearing on Grothendieck’s ability to conceptualize a good plan to lower existential risk, so too does Eliezer’s ability to interpret quantum mechanics have no bearing on Eliezer’s ability to conceptualize a good plan to lower existential risk”? If so, why?
Yes, I mostly disagree.
The first part gives an example of high IQ not leading to a good existential risk plan, and the second part says that you expect high ability to weigh evidence not to lead to a good plan either.
The counterexample proves that high IQ isn’t everything one needs, but overall, I’d still expect it to help. I think “no bearing” is too strong even for an IQ->IQ comparison of that sort.
If you’re going to assume you’ve been exposed to all the plans that people have come up with, picking the right plan is more of a claim-evaluation job than a novel-hypothesis-generation job. For this, you’re going to want someone who can evaluate claims like MWI easily. I think this is sufficiently close to the case to make your comparison a poor one.
If I were going to make a comparison to make your point (to the degree that I agree with it), I’d use more than one person with more than one strength of intellect, and instead ask “do we really think EY has shown enough to succeed where most talented people fail?” I’d also try to make it clear whether I’m arguing against him having a ‘majority’ of the probability mass in his favor vs. having a ‘plurality’ of it going for him. It’s a lot easier to argue against the former, but it’s the latter that is more important if you have to pick someone to give money to.
But how well does the ability to evaluate evidence connected with quantum mechanics correlate with ability to evaluate evidence connected with existential risk?
See also the thread here