So when you say “speculative” you mean “generations-away speculation”?
I agree that I didn’t really understand what your intent was from your post. If you were to say something along the lines of “AI is far enough away (on the tech-tree) that the predictions of current researchers shouldn’t be taken into account by those who eventually design it”, then I would disagree, because that seems substantially overconfident. Is that about right?
Um. I’ve still failed to be clear.
The nature of AI is that it is inherently complex enough that, although we may well get better at predicting the kinds of characteristics an implementation might have, the actual implementation will likely not just surprise us, but confound us.
I’m saying that any attempt to develop approaches that lead to Friendly AI, while surely interesting and as worthwhile as any other effort to push understanding forward, cannot be relied on by the implementers of AI as more than hopeful pointers.
It’s the relationship between that inevitable surprise and the attitude of researchers that is at the core of what I was trying to say, but, having started out attempting to be contrarian, I’ve ended up risking appearing mean. I’m going to stop here.