Greg Egan on universality:

I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.
Equivocation. “Who’s ‘we’, flesh man?” Even granting the necessary millions or billions of years for a human to sit down and emulate a superintelligence step by step, it is still not the human who understands, but the Chinese room.
I’ve seen this quote before and always find it funny, because when I read Greg Egan I constantly find myself thinking there’s no way I could’ve come up with the ideas he has, even if I’d been given months or years of thinking time.
Yes, there’s something to that, but you have to be careful if you want to use it as an objection. Maybe you wouldn’t easily think of an Egan-like idea, but that doesn’t rule out your being able to produce one: you can come up with algorithms you could execute which would spit out Egan-like ideas, like ‘emulate Egan’s brain neuron by neuron’. (If nothing else, there’s always the ol’ dovetail-every-possible-Turing-machine hammer.) Most of these run into computational complexity problems, but that’s the escape hatch Egan leaves himself (Scott Aaronson has made a similar argument) with caveats like ‘given enough patience, and a very large notebook’. Said patience might require billions of years, and the notebook might be the size of the Milky Way galaxy, but those are all finite numbers, so technically Egan is correct as far as that goes.
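For anyone unfamiliar with the dovetailing trick mentioned above, here’s a minimal sketch of the scheduling idea in Python. Toy generators stand in for Turing machines, and `dovetail` and `toy_machine` are illustrative names invented for this sketch, not anything from the thread: in round n you run machines 0 through n for one step each, so every machine that ever halts eventually gets enough steps to do so, even though some machines loop forever.

```python
def dovetail(make_machine, rounds):
    """Interleave infinitely many computations fairly.

    make_machine(i) returns a generator standing in for machine i;
    yielding None means 'still working', yielding any other value
    means the machine has halted with that output. In round n, a new
    machine n is started and every still-running machine gets one step.
    """
    machines = {}   # index -> running generator
    results = {}    # index -> output of machines that have halted
    for n in range(rounds):
        machines[n] = make_machine(n)
        for i, m in list(machines.items()):
            try:
                out = next(m)
            except StopIteration:
                del machines[i]
                continue
            if out is not None:
                results[i] = out   # machine i halted
                del machines[i]
    return results

# Toy family: machine i computes i*i but takes i steps to do it,
# and machine 3 loops forever. The dovetailer still makes progress
# on every other machine despite the non-halting one.
def toy_machine(i):
    if i == 3:
        while True:
            yield None        # never halts
    for _ in range(i):
        yield None            # still working
    yield i * i               # halt with output
```

After enough rounds, every halting machine’s output shows up in `results` (e.g. machine 5 halts with 25), while machine 3 simply never contributes — which is the whole point: you never need to decide in advance which computations halt.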
Yeah, good point: given a generous enough interpretation of the notebook, my objection doesn’t hold. It’s still hard for me to imagine that response feeling meaningful in context, but maybe I’m just failing to model others well here.