This exchange significantly decreased my probability that Ben Goertzel is a careful thinker about AI problems. I think he has a good point about “rationalists” being too invested in “rationality” (as opposed to rationality), but his AI thoughts are just seriously wtf. In tune with the Cosmos? Does this mean anything at all? I hate to say it based on a short conversation, but it looks like Ben Goertzel hasn’t made any of his intuitions precise enough to even be wrong. And he makes the classic mistake of thinking “any intelligence” would avoid certain goal-types (e.g. ‘fill the future light cone with some type of substance’) because they’re… stupid? I don’t even...
Quoth Yvain:
He published a book called A Cosmist Manifesto which presumably describes some of his thoughts in more detail. It looked too new-age for me to take much interest.
Upvoted.
Goertzel’s belief in AI FOOMs coupled with his beliefs in psi phenomena and the inherent stupidity of paperclipping made me lower my confidence in the likelihood of AI FOOMs slightly. Was this a reasonable operation, do you think?
It depends.
If you were previously aware of Goertzel’s belief in AI FOOM but not his opinions on psi/paperclipping then you should lower your confidence slightly. (Exactly how much depends on what other evidence/opinions you have to hand).
If the SIAI were wheeling out Goertzel as an example of “look, here’s someone who believes in FOOM,” then it should lower your confidence.
If you were previously unaware of Goertzel’s belief in FOOM, then it should probably increase your confidence very slightly. Reversed stupidity is not intelligence.
Obviously the quantity of “slightly” depends on what other evidence/opinions you have to hand.
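The scenarios above amount to a likelihood-ratio question: how far should an endorsement from a source of uncertain reliability move your credence in a hypothesis? A toy sketch of the underlying arithmetic (every number here is an illustrative assumption, not an estimate of anything):

```python
# Toy Bayesian update: how much should an unreliable advocate's
# endorsement move your credence in a hypothesis H?
# All probabilities below are made-up illustrative assumptions.

def update(prior, p_endorse_given_true, p_endorse_given_false):
    """Return the posterior P(H | endorsement) via Bayes' theorem."""
    numerator = p_endorse_given_true * prior
    evidence = numerator + p_endorse_given_false * (1 - prior)
    return numerator / evidence

prior = 0.30  # assumed prior credence in AI FOOM

# If the advocate endorses plausible-sounding ideas almost regardless
# of their truth, the likelihood ratio is close to 1 and the update
# is small -- "very slightly" in either direction.
posterior = update(prior, p_endorse_given_true=0.60,
                   p_endorse_given_false=0.55)
print(round(posterior, 3))  # a posterior only marginally above the prior
```

The point of the sketch is just that “slightly” falls out of the likelihood ratio: the closer the source’s endorsement probabilities are for true and false hypotheses, the less the endorsement (or its reversal) can move you.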
This is a good analysis. I was previously weakly aware of Goertzel’s beliefs on psi/paperclipping, and didn’t know much about his opinions on AI other than that he was working on superhuman AGI but didn’t have as much concern for Friendliness as SIAI. So I suppose my confidence shouldn’t change very much either way. I’m still on the fence on several questions related to Singularitarianism, so I’m trying to get evidence wherever I can find it.