apart from responding to external events in real time
The concept of “real time” seems like a BIG DEAL in terms of intelligence, at least to me.
If aliens come into contact with us, it seems unlikely that they’ll give us a billion years and a giant notebook to come to grips with the situation before they try to trade with/invade/exterminate/impregnate/seed with nanotechnology/etc.
Can you come up with problem scenarios that don’t involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?
Sure, you can eat someone’s lunch if you’re faster than them, but I’m not sure what this is supposed to tell me about the nature of intelligence.
When I said that “real time” seems like a big deal, I didn’t mean in terms of the fundamental nature of intelligence; I’m not sure that I even disagree about the whole notebook statement. But given minds of almost exactly the same speed, there is a huge advantage in things like answering a question first in class, bidding first on a contract, or designing and carrying out an experiment quickly.
To the point where computation, the one place where we can speed up our thinking, is a gigantic industry that keeps expanding despite paradigm failures and quantum phenomena. People who do things faster are better off in a trade situation, so creating an intelligence that thinks faster would be a huge economic boon.
As for non-interactive scenarios where speed matters: if a meteor is heading toward your planet, the faster the timescale of your species’ mind, the more “time” you have to prepare for it. That’s the least contrived scenario I can think of, and it isn’t of huge importance, but it was sort of tangential to my point anyway.
Can you come up with problem scenarios that don’t involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?
Existential risks come to mind—even if you ignore the issue of astronomical waste—as setting a lower bound on how stupid lifeforms like us can afford to be.
(If we were some sort of interstellar gas cloud that could only be killed by a nearby supernova, a collapse of the vacuum, or some other really rare phenomenon, then maybe it wouldn’t be so bad to take billions of years to develop in the absence of other optimizers.)