As:
formal complexity [http://en.wikipedia.org/wiki/Complexity#Specific_meanings_of_complexity] is inherent in many real-world systems that are apparently significantly simpler than the human brain,
and the human brain is perhaps the third most complex phenomenon yet encountered by humans [brain is a subset of ecosystem is a subset of universe],
and a characteristic of complexity is that predicting a system's outcomes requires greater computational resources than simply letting the system provide its own answer,
any attempt to predict the outcome of a successful AI implementation is speculative. 80% confident
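The third premise is essentially computational irreducibility: for some systems there is no known prediction shortcut cheaper than running the system itself. A minimal sketch of the idea in Python, using elementary cellular automaton Rule 110 as a stand-in (the rule number, grid width, and step count here are illustrative choices, not anything from the thread):

```python
# Computational irreducibility, sketched: for Rule 110 (a very simple
# system), the only general way to learn the state at step n is to
# compute every intermediate step -- there is no known closed form.

def step(cells, rule=110):
    """Advance one row of an elementary cellular automaton by one step."""
    n = len(cells)
    return [
        # Neighborhood (left, center, right) indexes a bit of the rule number.
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=32, rule=110):
    """Start from a single live cell and iterate `steps` times."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        row = step(row, rule)
    return row

if __name__ == "__main__":
    # Render the final row; the pattern is obtained only by simulation.
    print("".join("#" if c else "." for c in run()))
```

Even here, "letting the system provide its own answer" is the cheapest route to the answer; the argument above extends that observation to systems far more complex than a one-dimensional automaton.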
Either you’re saying “we can’t say anything about AI” which seems clearly false, or you’re saying “an AI will surprise us” which seems clearly true.
Depending on what you mean by speculative, you’re either overconfident or underconfident, but I can’t imagine a proposition that is “in between” enough to be 80% likely.
I accept this analysis of what I wrote. In the attempt to be concise, I haven’t really said what I meant very clearly.
I don’t mean that “we can’t say anything about AI”, but what I am saying is that we are a very long way from being able to say anything particularly worth saying about AI.
By which I mean that we are in a situation analogous to that of a 19th-century weather forecaster trying to predict the following week's weather. It's worth pushing the quality of the tools and the analysis, but don't expect any useful, real-world-applicable information for a few lifetimes. And my confidence goes up the more I think about it.
Which, in the context of the audience of LW, I hoped would be seen as more contrarian than it has been! Perhaps this clarification will help.
So when you say “speculative” you mean “generations-away speculation”?
I agree that I didn't really understand your intent from your post. If you were to say something along the lines of "AI is far enough away (on the tech-tree) that the predictions of current researchers shouldn't be taken into account by those who eventually design it", then I would disagree, because it seems substantially overconfident. Is that about right?
Um. I’ve still failed to be clear.
The nature of AI is that it is inherently complex enough that, although we may well get better at predicting the kinds of characteristics that might result from implementation, the actual implementation will likely not just surprise us, but confound us.
I'm saying that any attempts to develop approaches that lead to Friendly AI, while surely interesting and as worthwhile as any other attempts to push understanding, cannot be relied on by implementers of AI as more than hopeful pointers.
It’s the relationship between the inevitable surprise and the attitude of researchers that is at the core of what I was trying to say, but having started out attempting to be contrarian, I’ve ended up risking appearing mean. I’m going to stop here.