There are a few major problems with any certainty about the singularity. First, we might be too stupid to create a human-level AI. Second, it might not be possible, for some reason of which we are currently unaware, to create a human-level AI. Third, and importantly, we could be too smart.
How would that last one work? Maybe we can push technology to its limits ourselves, and no AI could be smart enough to push it further. We don’t even begin to have enough knowledge to judge whether this is likely. In other words, maybe it will all be perfectly comprehensible to us as we are now, and therefore not a singularity at all.
Is it worth considering? Of course. Is it worth pursuing? Probably (we’ll need to wait for hindsight to know better than that), particularly since it will matter a great deal if and when it occurs. We simply can’t assume that it will.
Johnicholas made a good comment on this point, I think. What we have done (and are doing) is very reminiscent of what Chalmers claims will lead to the singularity. I would go so far as to say that we are a singularity of sorts, beyond which the face of the world could never be the same. That is especially true of our last century, in which we went, by analogy, from the iron age to the beginning of the renaissance, or even further: cars, Relativity, Quantum Mechanics, planes, radar, microwaves, two world wars, nukes, the collapse of the colonial system, interstates, computers, a massive cold war, countless conflicts and atrocities, the entry to and study of space, the internet; and that is just a brief survey, off the top of my head. We’ve seen so many upheavals that I’m not sure superhuman AI would be all that difficult to accept, so long as it was superhuman morally speaking as well, which is, of course, not a given.
Any true AI that could not, with 100% accuracy, be called friendly should not exist.