Here’s where I think the “doomers vs accelerationists” crux can collapse to.
On real computers built by humans, using real noisy data accessible to humans,
(1) how powerful, in utility terms, will an ASI be?
(2) what advantage will that ASI have over the carefully constrained, stateless ASIs that humans have on their side: ASIs unable to tell whether their inputs come from the training set or from live operation in the real world?
The crux in (1) comes from current empirical observations of power laws, and from just thinking about what intelligence is. It isn't magic: for an agent in the real world, intelligence is just a policy mapping inputs to outputs, with policy updates as part of the cycle.
Obviously the policy cannot operate on more bits of precision than its inputs carry, and it cannot emit more bits of precision than its actuators can resolve. This has real-world consequences; see https://www.lesswrong.com/posts/qpgkttrxkvGrH9BRr/superintelligence-is-not-omniscience . And policy quality plausibly improves only with the log of compute, while on a growing number of problems a smarter policy yields zero benefit.
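To make "improves by the log of compute" concrete, here is a minimal sketch under an assumed power-law error curve (the constants `a` and `b` are illustrative, not measured from any real model):

```python
def policy_quality(compute: float, a: float = 1.0, b: float = 0.05) -> float:
    """Hypothetical policy quality under a power-law scaling assumption:
    error falls as a * compute**(-b), so each multiplicative increase in
    compute buys a shrinking absolute improvement in quality."""
    return 1.0 - a * compute ** (-b)

# Quality at 10x, 100x, ... 1,000,000x compute:
gains = [policy_quality(10 ** k) for k in range(1, 7)]
# Improvement bought by each successive 10x of compute:
deltas = [gains[i + 1] - gains[i] for i in range(len(gains) - 1)]
```

Under this assumption the deltas are strictly decreasing: every additional order of magnitude of compute buys less than the previous one, which is the diminishing-returns shape the power-law observations suggest.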
For example, on many medical questions, current human knowledge is so noisy and unreliable that the best known policy is a decision tree. Tic-tac-toe can be solved by a trivial policy, and an ASI has no advantage on it. Above some base level, extra intelligence confers no benefit on such problems, and the set of these problems grows as the base level rises.
This is the same principle as Amdahl’s law, “the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used”.
So if “improved part” means “above human intelligence”, Amdahl’s law applies.
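The quoted bound is easy to compute. A minimal sketch, with an illustrative 20% fraction chosen purely for the example:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Amdahl's law: overall speedup when a fraction p of the workload
    is improved by factor s and the remaining (1 - p) is unchanged."""
    return 1.0 / ((1.0 - p) + p / s)

# If only 20% of problems benefit from above-human intelligence, then even
# a near-infinite improvement on that fraction caps overall gains at 1.25x:
# amdahl_speedup(0.20, 1e9) ≈ 1.25
```

The limit as `s` grows is `1 / (1 - p)`: the fraction of problems that do not benefit from above-human intelligence bounds the total gain, no matter how smart the system is.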
The crux in (2) follows from (1): if intelligence has diminishing returns, then you can capture a large fraction of the benefits of increased intelligence with a system substantially stupider than the smartest one you could possibly build.
More empirical data can answer who's right, and if the accelerationists are correct, they will know they were correct for years. If the doomers were correct, well.