Intelligence/IQ is always good, but not a dealbreaker as long as you can substitute it with a larger population.
IMO this is pretty obviously wrong. There are some kinds of problem solving that scale poorly with population, just as there are some computations that scale poorly with parallelisation. E.g. Project Euler problems.
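To make the parallelisation analogy concrete, here is a minimal sketch (my illustration, not part of the original comments) of Amdahl's law: if some fraction of a task is inherently serial, adding more workers, or by analogy more people, gives rapidly diminishing returns, with speedup capped at 1/serial_fraction.

```python
# Minimal Amdahl's-law sketch (illustrative toy numbers, not from the thread):
# a fixed serial fraction caps how much a larger "population" of workers helps.

def speedup(workers: int, serial_fraction: float) -> float:
    """Ideal speedup with `workers` parallel workers when `serial_fraction`
    of the task can only be done one step at a time."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

if __name__ == "__main__":
    # With 20% of the work inherently serial, speedup saturates near 5x
    # no matter how many workers you add.
    for n in (1, 10, 100, 1_000_000):
        print(f"{n:>9} workers -> {speedup(n, serial_fraction=0.2):.2f}x speedup")
```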
When I said “problems we care about”, I was referring to a cluster of problems that very strongly appear to not scale well with population. Maybe this is an intuitive picture of the cluster of problems I’m referring to.
On this:
I think the problem identified here is in large part a demand problem, in that lots of AI people only wanted AI capabilities and didn't care about AI interpretability at all, so once the scaling happened, a lot of the focus went purely to AI scaling.
(Which is an interesting example of Goodhart's law in action, perhaps.)
See here:
https://www.lesswrong.com/posts/gXinMpNJcXXgSTEpn/ai-craftsmanship#Qm8Kg7PjZoPTyxrr6
IMO this is pretty obviously wrong. There are some kinds of problem solving that scale poorly with population, just as there are some computations that scale poorly with parallelisation.
I definitely agree that there exist such problems where the scaling with population is pretty bad, but I'll give two responses here:
1. The difference between a human-level AI and an actual human is the ability to coordinate and share ontologies far better across millions of instances, so the common problems that arise when trying to factor a problem into subproblems are greatly reduced.
2. I think that while there are serial bottlenecks to a lot of real-world problem solving, such that they prevent hyperfast outcomes, I don't think serial bottlenecks are the dominating factor, because the parallelizable work, like good execution, is often far more valuable than the inherently serial computations, like deep/original ideas.
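To sketch that second response in the same toy terms (again my framing with made-up numbers, not the commenter's own model, using a Gustafson-style "scaled speedup" view): if the inherently serial "deep idea" work is a small share of the time budget and the parallelisable execution work grows with the number of instances, total throughput still scales nearly linearly with the population.

```python
# Gustafson-style scaled-speedup sketch (my analogy, toy numbers): fix the time
# budget and ask how much more gets done with N workers, rather than how fast a
# fixed task finishes. If only a small share of time goes to inherently serial
# "deep idea" work, throughput grows almost linearly with the worker count.

def scaled_speedup(workers: int, serial_time_share: float) -> float:
    """Gustafson's law: work done relative to one worker, when
    `serial_time_share` of the time budget is inherently serial."""
    return serial_time_share + (1.0 - serial_time_share) * workers

if __name__ == "__main__":
    # With 10% of the time budget spent on serial work, a million coordinated
    # instances still accomplish roughly 900,000x as much execution.
    for n in (1, 10, 100, 1_000_000):
        print(f"{n:>9} workers -> {scaled_speedup(n, serial_time_share=0.1):,.0f}x work done")
```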