Isn’t it better to consider brain-to-body mass ratios? A lion isn’t 1.5 orders of magnitude smarter than a housecat. I wouldn’t assume that quantity of experience is linear in the number of neurons.
Computer performance at chess (among many other tasks) scales logarithmically or worse with hardware speed. Humans given more time, and larger collaborating groups, also show diminishing returns.
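To make “logarithmically” concrete, here’s a minimal sketch assuming a constant Elo gain per doubling of compute (engine self-play experiments are often cited in the 50–100 Elo range per doubling; the 70 used here is a purely illustrative assumption, not a figure from this thread):

```python
import math

# Assumed round figure, purely illustrative: a roughly constant Elo gain
# per doubling of thinking time implies strength logarithmic in compute.
ELO_PER_DOUBLING = 70

def elo_gain(speedup: float) -> float:
    """Elo gained from a hardware speedup, under a constant-gain-per-doubling model."""
    return ELO_PER_DOUBLING * math.log2(speedup)

for speedup in (2, 10, 100, 1000):
    print(f"{speedup:>5}x compute -> ~{elo_gain(speedup):.0f} Elo")
# 1000x the compute yields only ~700 Elo under this model: strongly diminishing returns.
```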
But if we’re talking about reinforcement learning and sensory experience in themselves, we’re not interested in the (sublinear) usefulness of scaling for intelligence, but in the number of subsystems undergoing the morally relevant processes. Neuron count is still only a rough proxy for that (details such as how nervous tissue is allocated among functions, energy supply, and firing rates would matter substantially), but the relationship should be far closer to linear.
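A toy sketch of the distinction, using rough whole-brain neuron totals and a deliberately hypothetical log model for capability (both the numbers and the model are illustrative assumptions, not claims from this thread):

```python
import math

# Rough whole-brain neuron totals (approximate order-of-magnitude figures
# from the comparative-neuroanatomy literature).
neuron_counts = {"mouse": 7.1e7, "cat": 7.6e8, "human": 8.6e10}

base = neuron_counts["mouse"]
for animal, n in neuron_counts.items():
    capability = math.log(n / base) + 1  # sublinear model of intelligence, hypothetical
    experience = n / base                # linear count of candidate subsystems
    print(f"{animal:>6}: capability proxy {capability:5.1f}, experience proxy {experience:9.1f}")
```

The point of the contrast: the capability proxy grows slowly across species, while the subsystem count, the thing that would matter for quantity of experience on this view, tracks neuron count directly.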