Re timelines for climate change, in the 1970s, serious people in the field of climate studies started suggesting that there was a serious problem looming. A very short time later, the entire field was convinced by the evidence and argument for that serious risk—to the point that the IPCC was established in 1988 by the UN.
When did some serious AI researchers start to suggest that there was a serious problem looming? I think in the 2000s. There is no IPAIX-risk.
And, yes: I can detect silly arguments in a reasonable number of cases. But I have not been able to do so in this case as yet (in the aggregate). It seems there may be good arguments on both sides.
It is indeed tricky; I also mentioned that it could get into a regress-like situation. But I think that if people like me are to be convinced, it might be worth the attempt. As you say, there may be a domain in there somewhere that is more accessible to me.
Re the numbers, Eliezer seems to claim that the majority of AI researchers believe in X-risk, but few are speaking out for a variety of reasons. This boils down to me trusting Eliezer’s word about the majority belief, because that majority is not speaking out. He may be motivated to lie in this case—note that I am not saying that he is, but ‘lying for Jesus’ (for example) is a relatively common thing. It is also possible that he is not lying but is wrong—he may have talked to a sample that was biased in some way.
Re timelines for climate change, in the 1970s, serious people in the field of climate studies started suggesting that there was a serious problem looming. A very short time later, the entire field was convinced by the evidence and argument for that serious risk—to the point that the IPCC was established in 1988 by the UN.
When did some serious AI researchers start to suggest that there was a serious problem looming? I think in the 2000s. There is no IPAIX-risk.
Nod. But then, I assume by the 1970s there was already observable evidence of warming? Whereas the observable evidence of AI X-risk in the 2000s seems slim. Like I expect I could tell a story for global warming along the lines of “some people produced a graph with a trend line, and some people came up with theories to explain it”, and for AI X-risk I don’t think we have graphs or trend lines of the same quality.
This isn’t particularly a crux for me btw. But like, there are similarities and differences between these two things, and pointing out the similarities doesn’t really make me expect that looking at one will tell us much about the other.
I think that if people like me are to be convinced, it might be worth the attempt. As you say, there may be a domain in there somewhere that is more accessible to me.
Not opposed to trying, but like...
So I think it’s basically just good to try to explain things more clearly and to try to get to the roots of disagreements. There are lots of ways this can look. We can imagine a conversation between Eliezer and Yann, or between people who respectively agree with them. We can imagine someone currently unconvinced having individual conversations with each side. We can imagine discussions playing out through essays written over the course of months. We can imagine FAQs written by each side giving their answers to the common objections raised by the other. I like all these things.
And maybe in the process of doing these things we eventually find a “they disagree because …” that helps it click for you or for others.
What I’m skeptical about is trying to explain the disagreement rather than discover it. That is, I think “asking Eliezer to explain what’s wrong with Yann’s arguments” works better than “asking Eliezer to explain why Yann disagrees with him”. The answers I expect to the second question basically just consist of “answers I expect to the first question” plus “Bulverism”.
(Um, having written all that I realize that you might just have been thinking of the same things I like, and describing them in a way that I wouldn’t.)