Given human researchers of constant speed, computing speeds double every 18 months.
Human researchers, using top-of-the-line computers as assistants. I get the impression this matters more for chip design than for litho-tool design, but it definitely helps with litho-tool design too.
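To make the quoted doubling concrete (a back-of-the-envelope sketch of my own, assuming a clean 18-month doubling and nothing else):

```python
# Toy calculation: relative computing speed after t years,
# assuming a clean doubling every 18 months (1.5 years) and nothing else.
def speed_multiplier(years: float) -> float:
    return 2 ** (years / 1.5)

for years in (1.5, 3, 6, 10):
    print(f"{years:>4} years -> {speed_multiplier(years):.1f}x")
# 1.5 years -> 2.0x, 3 years -> 4.0x, 6 years -> 16.0x, 10 years -> 101.6x
```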
Humans have around four times the brain volume of chimpanzees, but the difference between us is probably mostly software algorithms.
Is ‘software algorithms’ the right phrase? I’d characterize the improvements as more like firmware or hardware changes. [edit] Later you use the phrase “cognitive algorithms,” which I’m much happier with.
A more concrete example you can use to replace the handwaving: one of the big programming productivity boosters is a second monitor, which seems directly related to low human working memory. It’s easy to imagine minds with superior working memory able to handle much more complicated models and tasks. (We indeed seem to see this diversity among humans.)
In particular, your later arguments on serial causal depth seem like they would benefit from explicitly considering working memory as well as speed.
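One crude way to see how much leverage extra working memory could give (a toy model of my own, not anything from the paper): if a task requires holding k interdependent chunks in mind at once, the number of relations you have to track grows roughly quadratically in k, so modest capacity gains buy disproportionately richer models.

```python
# Toy illustration (my own simplification): if a task requires holding
# k interdependent chunks in mind simultaneously, the pairwise relations
# among them grow quadratically with capacity.
def pairwise_relations(k: int) -> int:
    return k * (k - 1) // 2

for k in (4, 7, 10, 20):
    print(f"capacity {k:>2} chunks -> {pairwise_relations(k):>3} pairwise relations")
# 4 -> 6, 7 -> 21, 10 -> 45, 20 -> 190
```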
Any lab that shuts down overnight so its researchers can sleep must be limited by serial cause and effect in researcher brains more than serial cause and effect in instruments; researchers who could work without sleep would correspondingly speed up the lab.
I don’t know about you, but I do research in my sleep, and my lab never shuts off our computers because we often have optimization processes running overnight (on every computer in the lab).
It is the case that most of the cycle time in research is due to the human researchers rather than to computer speed (in an average month, perhaps a week is code-limited rather than human-limited), but this example, as you present it, is unconvincing.
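To put rough numbers on that one-week-in-four estimate (the split is my own guess, and the calculation is only an Amdahl-style toy): even infinitely fast computers would then buy at most about a 1.3x overall speedup.

```python
# Toy Amdahl-style estimate: if a fraction f of research cycle time is
# code-limited (~1 week out of 4, per the rough estimate above) and the
# rest is human-limited, speeding up only the code-limited part by s gives:
def overall_speedup(f: float, s: float) -> float:
    return 1.0 / ((1 - f) + f / s)

f = 0.25  # ~1 week per month is code-limited
for s in (2, 10, float("inf")):
    print(f"code {s}x faster -> overall {overall_speedup(f, s):.2f}x")
# 2x -> 1.14x, 10x -> 1.29x, infinitely fast -> 1.33x
```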
It’s easy to imagine minds with superior working memory able to handle much more complicated models and tasks. [...] In particular, your later arguments on serial causal depth seem like they would benefit from explicitly considering working memory
Strong, albeit anecdotal, agreement.
Working memory capacity was a large part of what my stroke damaged, and in colloquial terms I was just stupid, relatively speaking, until that healed/retrained. I was fine when dealing with simple problems, but add even a second level of indirection and I just wasn’t able to track. The effect is at least subjectively highly nonlinear.
Incidentally, I think this is the strongest argument against Egan’s General Intelligence Theorem (or, alternatively, Deutsch’s “Universal Explainer” argument from The Beginning of Infinity). Yes, humans could in theory come up with arbitrarily complex causal models, and that’s sufficient to understand an arbitrarily complex causal system, but in practice, unaided humans are limited to rather simple models. Yes, we’re very good at making use of aids (I’m reminded of how much writing helps thinking whenever I try to do a complicated calculation in my head), but those limitations represent a plausible way for meaningful superhuman intelligence to be possible.
I hope never to forget the glorious experience of re-inventing the concept of lists, about two weeks into my recovery. I suddenly became indescribably smarter.
In the same vein, I have been patiently awaiting the development of artificial working-memory cognitive buffers. As you say, for practical purposes this is superhuman intelligence.
The third tipping point was the appearance of technology capable of accumulating and manipulating vast amounts of information outside humans, thus removing them as bottlenecks to a seemingly self-perpetuating process of knowledge explosion.
Gaaah. I hate brain damage.
Congratulations on your discovery, anyway.
Yeah, you and me both, brother.
Indeed. For me, that was the most glaring conceptual problem. That, and attempting to predict the course of evolution with minimal reference to evolutionary theory. There is a literature on how cultural systems evolve. For a specific instance, see this: