I accept this analysis of what I wrote. In trying to be concise, I didn’t say what I meant very clearly.
I don’t mean that we can’t say anything about AI; I’m saying that we are a very long way from being able to say anything about AI that is particularly worth saying.
By which I mean that we are in a situation analogous to that of a 19th-century weather forecaster trying to predict the following week’s weather. It’s worth improving the quality of the tools and the analysis, but don’t expect any useful, real-world-applicable information for a few lifetimes. And my confidence in this goes up the more I think about it.
Which, given the LW audience, I had hoped would be seen as more contrarian than it has been! Perhaps this clarification will help.
So when you say “speculative” you mean “generations-away speculation”?
I’ll grant that I didn’t really understand your intent from your post. If you are saying something along the lines of “AI is far enough away (on the tech-tree) that the predictions of current researchers shouldn’t be taken into account by those who eventually design it”, then I would disagree, because that seems substantially overconfident. Is that about right?
Um. I’ve still failed to be clear.
AI is inherently complex enough that, although we may well get better at predicting the kinds of characteristics that might result from an implementation, the actual implementation will likely not just surprise us, but confound us.
I’m saying that any attempt to develop approaches that lead to Friendly AI, while surely interesting and as worthwhile as any other attempt to push understanding forward, cannot be relied on by the implementers of AI as more than hopeful pointers.
The relationship between that inevitable surprise and the attitude of researchers is at the core of what I was trying to say, but having started out attempting to be contrarian, I’ve ended up risking appearing mean. I’m going to stop here.