… I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.
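Egan's "basic set of operations" point is easy to make concrete. As a hedged illustration (mine, not Egan's), the Python sketch below simulates an arbitrary single-tape Turing machine from nothing more than a transition table, a head position, and a tape that grows on demand; the growing tape plays the role of the very large notebook, and the `increment` machine is just a hypothetical example to run it on.

```python
# A minimal sketch of the "basic set of operations" idea: a few lines of
# Python are enough to simulate any single-tape Turing machine, given its
# transition table, a finite-but-extendable tape, and enough patience.

def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine described by `table`.

    `table` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right). The machine halts in state 'halt'.
    """
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        symbol = tape[head]
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol
        head += move
        if head < 0:                  # grow the tape on demand, like the notebook
            tape.insert(0, blank)
            head = 0
        elif head == len(tape):
            tape.append(blank)
    raise RuntimeError("ran out of patience (max_steps exceeded)")

# Example machine (hypothetical): increment a binary number. Walk to the
# rightmost digit, then carry 1s leftward until a 0 or a blank absorbs the carry.
increment = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", +1, "halt"),
    ("carry", "_"): ("1", +1, "halt"),
}

print(run_turing_machine(increment, "1011"))  # -> "1100"
```

The point of the sketch is that the simulator itself never changes: to compute something new you only supply a different table, more tape, and more patience.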
What’s really cool about all this is that I just have to wait and see.
apart from responding to external events in real time
The concept of “real time” seems like a BIG DEAL in terms of intelligence, at least to me.
If aliens come into contact with us, it seems unlikely that they’ll give us a billion years and a giant notebook to come to grips before they try to trade with/invade/exterminate/impregnate/seed with nanotechnology/etc.
Can you come up with problem scenarios that don’t involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?
Sure, you can eat someone’s lunch if you’re faster than them, but I’m not sure what this is supposed to tell me about the nature of intelligence.
When I said that “real time” seems like a big deal, I didn’t mean in terms of the fundamental nature of intelligence; I’m not sure that I even disagree about the whole notebook statement. But given minds of almost exactly the same speed there is huge advantage to things like answering a question first in class, bidding first on a contract, designing and carrying out an experiment fast, etc.
To the point where computation, the one place where we can speed up our thinking, is a gigantic industry that keeps expanding despite paradigm failures and quantum phenomena. People who do things faster are better off in a trade situation, so creating an intelligence that thinks faster would be a huge economic boon.
As for scenarios where speed is necessary that aren’t interactive: if a meteor is heading toward your planet, the faster the timescale of your species’ mind the more “time” you have to prepare for it. That’s the least contrived scenario that I can think of, and it isn’t of huge importance, but that was sort of tangential to my point regardless.
Can you come up with problem scenarios that don’t involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?
Existential risks come to mind—even if you ignore the issue of astronomical waste—as setting a lower bound on how stupid lifeforms like us can afford to be.
(If we were some sort of interstellar gas cloud or something which could only be killed by a nearby supernova or collapse of the vacuum or other really rare phenomena, then maybe it wouldn’t be so bad to take billions of years to develop in the absence of other optimizers.)
What is productive cannot be judged in advance if you are facing unknown unknowns. And scientific advance is by its nature an evolutionary process, one of discovery rather than deliberate design. We may very well be able to speed up certain weak computational problems by sheer brute force, but not solve problems in general by building an advanced problem-solving machine.
That we do not ask ourselves what we are trying to achieve is an outstanding feature of our ability to learn and to change our habits. If we were all the productive machines you might have in mind, then religious people would stay religious and the bushman would keep striving for a successful chase even after a supermarket is built in front of his village. Our noisy and bloated nature is fundamental to our diversity and to our ability to discover the unknown unknowns that the highly focused, productive and change-averse autistic mind would never come across or care about. Pigeons outperform humans at the Monty Hall Dilemma because they are less methodical.
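As an aside, the Monty Hall odds behind that claim are easy to check numerically. Here is a minimal simulation sketch (my own illustration, not part of the original comment) comparing the "stay" and "switch" strategies:

```python
# Quick numerical check of the Monty Hall odds referenced above: simulate many
# games and compare the "stay" and "switch" strategies. Switching should win
# about 2/3 of the time, staying about 1/3.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"{'switch' if switch else 'stay':>6}: {wins / trials:.3f}")
```

Switching should come out around 0.667 and staying around 0.333.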
What I’m trying to say is that the idea of superhuman intelligence is not as clear-cut as it is portrayed here. Greg Egan may very well be right. That is not to say that, once we have learnt about a certain problem, there isn’t a more effective way to solve it than using the human mind. But I wouldn’t bet my money on the kind of god-like intelligence that is somehow about to bootstrap itself out of the anthropocentric coding it emerged from.
I suspect that if we’re willing to say human minds are Turing Complete[1], then we should also be willing to say that an ant’s mind is Turing Complete. So when imagining a human with a lot of patience and a very large notebook interacting with a billion-year-old alien, consider an ant with a lot of patience and a very large surface area to record ant-pheromones upon, interacting with a human. Consider how likely it is that the human would be interested in telling the ant things it didn’t yet know. Consider what topics the human would focus on telling the ant, and whether it might decide to hold back on some topics because it figures the ant isn’t ready to understand those concepts yet. Consider whether it’s more important for the patience to lie within the ant or within the human.
1: I generally consider human minds to NOT be Turing Complete, because Turing Machines have infinite memory (via their infinite tape), whereas human minds have finite memory (being composed of a finite amount of matter). I guess Egan is working around this via the “very large notebook”, which is why I’ll let this particular nitpick slide for now.
Random thought about Schild’s Ladder (which assumes the same “equal footing” idea), related to the advantages of a constant utility function: Would Tchicaya have destroyed the alien microbes if he’d first simulated how he’d feel about it afterward?
I’ve already got that book, I have to read it soon :-)
Here is more from Greg Egan:
So, what Greg Egan is saying is that the methods of epistemic rationality and creativity are mostly already known by humans; all we lack is memory space.
I sincerely doubt it. I genuinely believe Anna Salamon’s statement that humans are only on the cusp of general intelligence is closer to the truth.
EDITED: to add hyperlink to Anna Salamon’s article