(nods) Yeah, agreed.
I would take it further, though. Given that radically different kinds of minds are possible, the odds that the optimal architecture for supporting self-optimization at a given degree of intelligence happens to be something approximately human seem pretty low.
On the other hand, is there any way to think about the odds of humans inventing a program capable of self-optimization which doesn’t resemble a human mind?
I’m not sure.
I think if I had a better grasp of whether (and why) I think humans are or aren't capable of building self-optimizing systems at all, I would have a better grasp of the odds of those systems being of particular types.