I don’t think it’s fair to characterize the Orthogonality Thesis as saying that there is no correlation. Instead it is saying that there isn’t a perfect correlation, or maybe (stronger version) that there isn’t a strong enough correlation that we can count on superhuman AIs probably having similar-to-human values by default.
That’s the main problem with the orthogonality thesis: it’s so vague. The thesis that there isn’t a perfect correlation is extremely weak and uninteresting.
Nevertheless, some people still needed to hear it! I have personally talked with probably a dozen people who were like “But if it’s so smart, won’t it understand what we meant / what its purpose is?” or “But if it’s so smart, won’t it realize that killing humans is wrong, and that instead it should cooperate and share the wealth?”
Yes, most people seem to reject the stronger version: they think a superintelligent AI is unlikely to kill all humans. Given the context of the original question here, this seems understandable: in humans, higher IQ is correlated with less antisocial behavior, less criminal behavior, and less violence – behaviors we typically judge to be immoral. I agree there are good philosophical reasons supporting the strong orthogonality thesis for artificial intelligence, but I think we have so far not sufficiently engaged with the literature from criminology and IQ research, which provides evidence in the opposite direction.
It doesn’t seem worth engaging with to me. Yes, there’s a correlation between IQ and antisocial and criminal behavior. If anyone seriously thinks we should just extrapolate that correlation all the way up to machine superintelligence (and from antisocial-and-criminal-behavior to human-values-more-generally) & then call it a day, they should really put that idea down in writing and defend it, and in the course of doing so they’ll probably notice the various holes in it.
Analogy: there’s a correlation between how big rockets are and how safe they are. The bigger ones like the Saturn V tend to blow up less than the smaller rockets made by scrappy startups, and really small rockets used in warfare blow up all the time. So should we just slap together a suuuuper big rocket, a hundred times bigger than the Saturn V, and trust that it’ll be safe? Hell no, that’s a bad idea not worth engaging with. IMO the suggestion that criminology-and-IQ research should make us optimistic about machine superintelligence is similarly bad for similar reasons.
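If it helps to see the statistical point, here’s a minimal toy sketch of why extrapolating a correlation far outside the observed range is unreliable. Everything in it is invented for illustration (the `true_outcome` curve, the numbers, the ranges have nothing to do with actual IQ or rocket data); the only point is that a strong in-range correlation tells you almost nothing about behavior far beyond the data.

```python
# Toy sketch (assumed/invented numbers, not real data): a strong correlation
# measured inside a narrow band can point in exactly the wrong direction
# once you extrapolate far outside that band.
import numpy as np

rng = np.random.default_rng(0)

def true_outcome(x):
    # Hypothetical ground truth: roughly linear in the observed band,
    # but the trend reverses far outside it.
    return 2.0 * x - 0.05 * x**2

# We only observe x in a narrow band (analogous to the human range).
x_obs = rng.uniform(0, 10, size=200)
y_obs = true_outcome(x_obs) + rng.normal(0, 1, size=200)

# Within the band, the linear fit looks great and the correlation is strong.
slope, intercept = np.polyfit(x_obs, y_obs, 1)
r = np.corrcoef(x_obs, y_obs)[0, 1]
print(f"in-range correlation r = {r:.2f}")

# Extrapolate that line far outside the band (analogous to "all the way
# up to machine superintelligence") and compare against the ground truth.
x_far = 100.0
print(f"linear extrapolation at x={x_far}: {slope * x_far + intercept:.1f}")
print(f"actual value at x={x_far}:        {true_outcome(x_far):.1f}")
```

Run it and the in-range correlation comes out near +1, while the extrapolated prediction at x=100 is large and positive and the actual value is large and negative. That’s the shape of the hole in the “just extrapolate the correlation” move.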
I guess larger rockets are safer because more money is invested in testing them, since an explosion gets more expensive the larger the rocket is. But there seems to be no analogous argument explaining why smarter human brains are safer; it doesn’t seem that they are tested more thoroughly. If the strong orthogonality thesis is true for artificial intelligence, there should be a positive explanation for why it is apparently not true for human intelligence.