A very thought-provoking and well-written article. Thanks!
Your biggest conceptual jump seems to be reasoning about the subjective experience of hyperintelligences by analogy to human experiences. That is, an experience of some thought/communication speed ratio for a hyperintelligence would be “like” a human experience of that same ratio. But hyperintelligences aren’t just faster. I think they’d probably be very, very different qualitatively. Who knows whether the costs/benefits of time-consuming communication will be perceived in similar or even recognizable ways?
jacob_cannell has gone on record as anticipating that strong AI will actually be designed by circuit simulation of the human brain. This explains why so many of his posts and comments have such a tendency to anthropomorphize AI, and also, I think, why they tend to be heavy on the interesting ideas, light on the realistic scenarios.
jacob_cannell has gone on record as anticipating that strong AI will actually be designed by circuit simulation of the human brain
I did? I don’t think early strong AI will be an exact circuit simulation of the brain, although I do think it will employ many of the principles.
However, using the brain’s circuitry as an example is useful for future modelling. If blind evolution could produce that particular circuit, which uses a certain number of components to perform those kinds of thoughts in a certain number of cycles, then we should eventually be able to do the same work using similar or fewer components and similar or fewer cycles.
It would probably have been fairer if I’d said “approximate simulation.” But if we actually had a sufficiently reductionist understanding of the brain, and of how it gives rise to a unified mind architecture, to create an approximate simulation that is smarter than we are and safe, we wouldn’t need to create an approximation of the human brain at all, and doing so would almost certainly not be close to the best approach to creating an optimally friendly AI. When it comes to rational minds that use their intelligence efficiently to increase utility in an altruistic manner, anything like the human brain is a lousy thing to settle for.
A very thought-provoking and well-written article. Thanks!
Thanks. I think the time-dilation issue is not typically considered in visions of future AGI society, and it could prove to be a powerful constraint.
That is, an experience of some thought/communication speed ratio for a hyperintelligence would be “like” a human experience of that same ratio
But hyperintelligences aren’t just faster. I think they’d probably be very very different qualitatively. Who knows if the costs / benefits of time-consuming communication will be perceived in similar or even recognizable ways?
I agree they will probably think differently, if not immediately then eventually as the space of mind architectures is explored.
Still, we can analyze the delay factor from an abstract computational point of view and reach some conclusions without getting into the specific qualitative features of what certain types of thought are “like”.
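To make that concrete, here is a minimal sketch of the arithmetic (my own illustration, not something from the article): a fixed physical communication latency, multiplied by a mind’s speedup factor, gives the subjective waiting time. The speedup values and the 100 ms round-trip latency below are assumptions chosen purely for illustration.

```python
# Minimal sketch: how a fixed physical communication latency scales into
# subjective waiting time for a mind running faster than real time.
# The speedup factors and the 100 ms round trip are illustrative assumptions.

def subjective_delay(physical_delay_s: float, speedup: float) -> float:
    """Subjective seconds experienced while waiting out a physical delay."""
    return physical_delay_s * speedup

round_trip_s = 0.1  # assumed 100 ms network round trip between two minds

for speedup in (1, 100, 10_000, 1_000_000):
    wait = subjective_delay(round_trip_s, speedup)
    print(f"speedup {speedup:>9,}x -> {wait:12,.1f} subjective seconds per round trip")
```

At a millionfold speedup, a tenth-of-a-second round trip already costs more than a subjective day of waiting, which is the sense in which the delay factor can be analyzed independently of what the thoughts themselves are “like”.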
I find it hard to estimate likelihoods of different types of qualitative divergences from human-like mind architectures.
On the one hand we have the example of early cells such as bacteria, which radiated into a massive array of specialized forms, yet all of life is still built around variations on a few ancient general cell designs. Are human minds like that? Is that the right analogy?
On the other hand, we can see the human brain’s architecture as just one particular point in a vast space of possible mind architectures.