If only there had been detailed critical analysis of claims (1) and (2) on Less Wrong or the SIAI website, I would find your comment compelling. But given that detailed critical analysis of these significant claims has not taken place, I believe that Eliezer’s remarks are in fact properly conceptualized as an appeal to authority.
I totally agree that it’s an appeal to authority. My point was that it’s an appeal to a different and more relevant kind of authority.
Do you disagree with
“Just as Grothendieck’s algebro-geometric achievements had no bearing on Grothendieck’s ability to conceptualize a good plan to lower existential risk, so too does Eliezer’s ability to interpret quantum mechanics have no bearing on Eliezer’s ability to conceptualize a good plan to lower existential risk”?
If so, why?
Yes, I mostly disagree.
The first part gives an example of high IQ not leading to a good existential risk plan, and the second part says that you expect high ability to weigh evidence won’t lead to a good plan either.
The counterexample proves that high IQ isn’t everything one needs, but overall, I’d still expect it to help. I think “no bearing” is too strong even for an IQ->IQ comparison of that sort.
If you’re going to assume you’ve been exposed to all the plans that people have come up with, picking the right plan is more of a claim-evaluation job than a novel-hypothesis-generation job. For that, you’re going to want someone who can evaluate claims like MWI easily. I think this is sufficiently close to the actual situation to make your comparison a poor one.
If I were going to construct a comparison to make your point (to the degree that I agree with it), I’d use more than one person, with more than one kind of intellectual strength, and instead ask “do we really think EY has shown enough to succeed where most talented people fail?”. I’d also try to make it clear whether I’m arguing against his having a ‘majority’ of the probability mass in his favor or against his having a ‘plurality’ of it. It’s a lot easier to argue against the former, but it’s the latter that matters more if you have to pick someone to give money to.
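To make that majority-vs-plurality distinction concrete, here is a toy sketch in Python; the candidate names and credences are entirely made up, and the point is only the decision rule, not anyone’s actual chances.

```python
# Toy illustration of the 'majority' vs 'plurality' distinction.
# All names and numbers are hypothetical; they illustrate the
# decision rule, not anyone's actual odds.

# Subjective probability that each candidate's plan is the one that
# would actually reduce existential risk. They needn't sum to 1;
# the remainder is "none of the above".
credences = {
    "candidate_A": 0.30,
    "candidate_B": 0.25,
    "candidate_C": 0.20,
}

best = max(credences, key=credences.get)
has_majority = credences[best] > 0.5   # False here: 0.30 < 0.5

# If you must direct a donation to exactly one candidate (and you value a
# success by any of them equally), the plurality leader maximizes expected
# value even though nobody clears the 'majority' bar.
print(f"plurality leader: {best} ({credences[best]:.2f}); majority? {has_majority}")
```

In other words, refuting the claim that someone holds a majority of the probability mass doesn’t by itself tell a donor to send the money elsewhere; for that you’d have to displace the plurality leader.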
But how well does the ability to evaluate evidence connected with quantum mechanics correlate with the ability to evaluate evidence connected with existential risk?
See also the thread here.