jacob_cannell has gone on record as anticipating that strong AI will actually be designed by circuit simulation of the human brain. This explains why so many of his posts and comments have such a tendency to anthropomorphize AI, and also, I think, why they tend to be heavy on the interesting ideas, light on the realistic scenarios.
jacob_cannell has gone on record as anticipating that strong AI will actually be designed by circuit simulation of the human brain
I did? I don’t think early strong AI will be an exact circuit simulation of the brain, although I do think it will employ many of the principles.
However, using the brain’s circuitry as an example is useful for forecasting. If blind evolution could produce that particular circuit, one which performs those kinds of thoughts with a certain number of components and a certain number of cycles, then we should eventually be able to do the same work with a similar or smaller number of components and a similar or smaller number of cycles.
It would probably have been fairer if I’d said “approximate simulation.” But if we actually had a reductionist understanding of the brain, and of how it gives rise to a unified mind architecture, sufficient to create an approximate simulation that is both smarter than we are and safe, then we wouldn’t need to approximate the human brain at all; doing so would almost certainly not be close to the best approach to creating an optimally friendly AI. For a rational mind that uses its intelligence efficiently to increase utility in an altruistic manner, anything like the human brain is a lousy thing to settle for.