The title is “superintelligence is not omniscience”. Then the first paragraph says that we’re talking about the assumption “There is ‘ample headroom’ above humans.” But these are two different things, right? I think there is ample headroom above humans, and I think that superintelligence is not omniscience. I think it’s unhelpful to merge those together into one blog post. It’s fine to write a post about “Things that even superintelligent AI can’t do”, and it’s fine to write a post about “Comparing capabilities between superintelligent AIs & humans / groups-of-humans”, but to me, those seem like they should be two different posts.
For example, as I discussed here, it will eventually be possible to make an AI that can imitate the input-output behavior of 10 trillion unusually smart and conscientious humans, each running at 100× human speed and working together (and in possession of trillions of teleoperable robot bodies spread around the world). That AI would not be omniscient, but it would certainly illustrate that there’s ample headroom.
I will show that individual neurons, small networks of neurons, and in vivo neurons in sense organs can behave chaotically. Each of these can also behave non-chaotically in other circumstances. But we are more interested in the human brain as a whole. Is the brain mostly chaotic or mostly non-chaotic? Does the chaos in the brain amplify uncertainty all the way from the atomic scale to the macroscopic, or is the chain of amplifying uncertainty broken at some non-chaotic mesoscale? How does chaos in the brain actually impact human behavior? Are there some things that brains do for which chaos is essential?
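To make the amplification question concrete, here is a minimal sketch (my own illustration, not anything from the post), using the logistic map at r = 4 as a generic stand-in for a chaotic system: two trajectories that start about one part in 10¹² apart become completely decorrelated within a few dozen steps.

```python
# Toy illustration (not from the post): the logistic map at r = 4 is a textbook
# chaotic system. Two trajectories starting ~1e-12 apart diverge to order-1
# differences within roughly 40 iterations, which is the sense in which chaos
# amplifies tiny uncertainties into macroscopic ones.
r = 4.0
x, y = 0.3, 0.3 + 1e-12  # nearly identical initial conditions
for step in range(61):
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
    x = r * x * (1 - x)
    y = r * y * (1 - y)
```

That exponential blow-up of tiny differences is what the atomic-to-macroscopic question is asking about: whether the brain contains an unbroken chain of stages that each amplify uncertainty like this, or whether some non-chaotic mesoscale averages it back out.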
See here. The human brain does certain a-priori-specifiable and a-priori-improbable things, like allowing people to travel to the moon and survive in Antarctica. There has to be some legible reason that the brain can do those things—presumably it has certain learning algorithms that can systematically pick up on environmental regularities, blah blah. Whatever that legible reason is, I claim we can write computer code that operates on the same principles. I don’t think these algorithms can “rely on chaos” in a sense that can’t be replaced by an RNG, because the whole point of chaos is that it won’t have any predictable useful consequences (beyond the various useful things you can do with an RNG). So if you’re going to make an argument about the difficulty of AGI on this basis, I’m skeptical. (If you’re going to make an argument that you can’t forecast a very specific human’s exact thoughts and behaviors hours and days into the future, then sure, chaos is relevant; but that would be a crazy thing to expect anyway.)
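Here is a toy version of the “replaceable by an RNG” point (again my own sketch, not the post’s argument, and every name in it is made up for the illustration): a simple stochastic hill-climber works just as well whether its noise comes from a chaotic logistic map or from Python’s ordinary PRNG, because the only work the chaos is doing is supplying unpredictable variation.

```python
import random

def noisy_maximize(noise):
    # Toy stochastic hill-climber: propose a random step, keep it if it improves
    # f(x) = -(x - 3)^2. The noise source only needs to be unpredictable enough.
    f = lambda x: -(x - 3.0) ** 2
    x = 0.0
    for _ in range(2000):
        candidate = x + (next(noise) - 0.5)
        if f(candidate) > f(x):
            x = candidate
    return x

def chaotic_noise(seed=0.137, r=4.0):
    # Noise harvested from the chaotic logistic map (values stay in (0, 1)).
    x = seed
    while True:
        x = r * x * (1 - x)
        yield x

def prng_noise(seed=0):
    # Ordinary pseudorandom noise in (0, 1).
    rng = random.Random(seed)
    while True:
        yield rng.random()

print("chaos-driven :", round(noisy_maximize(chaotic_noise()), 3))
print("PRNG-driven  :", round(noisy_maximize(prng_noise()), 3))
# Both land near x = 3; the chaotic source buys nothing a PRNG doesn't.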