Many humans are not even capable of handling the complexity of the brain of a worm.
I don’t think that’s the right reference class. We’re not asking is something is sufficient, but if something is likely.
Our “irrationality” and the patchwork-architecture of the human brain might constitute an actual feature. The noisiness and patchwork architecture of the human brain might play a significant role in the discovery of unknown unknowns because it allows us to become distracted, to leave the path of evidence based exploration...The noisiness of the human brain might be one of the important features that allows it to exhibit general intelligence.
If you can figure this out, and a superintelligent AI couldn’t assign it the probability it deserves and investigate and experiment with it, does that make you supersuperintelligent?
Also, isn’t the random noise hypothesis being privileged here? Likewise for “our tendency to be biased and act irrationally might partly be a trade off between plasticity, efficiency and the necessity of goal-stability.”
But that expert systems are better at certain tasks does not imply that you can effectively and efficiently combine them into a coherent agency.
Why do these properties of expert systems matter, as no one is discussing combining them?
Why haven’t we seen a learning algorithm teaching itself chess intelligence starting with nothing but the rules?
It is claimed that an artificial general intelligence might wipe us out inadvertently
“Inadvertently” gives the wrong connotations.
For an AGI, that was designed to design paperclips, to pose an existential risk, its creators would have to be capable enough to enable it to take over the universe on its own, yet forget, or fail to, define time, space and energy bounds as part of its optimization parameters
What if the AI changed some of its parameters?
It would appear that we have reached the limits of what it is possible to achieve with computer technology, although one should be careful with such statements, as they tend to sound pretty silly in 5 years
I don’t think that’s the right reference class. We’re not asking is something is sufficient, but if something is likely.
If you can figure this out, and a superintelligent AI couldn’t assign it the probability it deserves and investigate and experiment with it, does that make you supersuperintelligent?
Also, isn’t the random noise hypothesis being privileged here? Likewise for “our tendency to be biased and act irrationally might partly be a trade off between plasticity, efficiency and the necessity of goal-stability.”
Why do these properties of expert systems matter, as no one is discussing combining them?
There’s progress along these lines.
“Inadvertently” gives the wrong connotations.
What if the AI changed some of its parameters?
--John von Neumann