In many discussions I’ve had with people about AI takeoff, I’ve frequently encountered a belief that intelligence is going to be “easy”.
To make “easy” more concrete: many people seem to expect that the marginal returns on cognitive investment (whether via computational resources, human intellectual labour, cognitive reinvestment, or general economic resources) will not diminish, or will diminish gracefully (i.e. at a sublinear rate).
(Specifically, marginal returns around and immediately beyond the human cognitive capability frontier.)
I find this a bit baffling/absurd, honestly. My default intuitions lean towards marginal returns to cognitive reinvestment diminishing at a superlinear rate, and my armchair philosophising so far (thought experiments and general thinking around the issue) seems to support this intuition.
A couple intuition pumps:
50%, 75%, 87.5%, 93.75%, … are linear jumps in predictive accuracy (each step halves the error rate, i.e. one bit of improvement), but the returns in raw accuracy seem to diminish at an exponential rate
On the other hand, 6.25%, 12.5%, 25%, 50% represent the same one-bit jumps (each step doubles the accuracy), but this time with the returns in raw accuracy growing at an exponential rate
This suggests that the returns to cognitive investment might behave differently depending on where you are on the cognitive capability curve (a short numerical sketch follows after this list)
Though I’ve not yet thought about how this behaviour generalises to aspects of cognition other than predictive accuracy
A priori, I’d expect that, given that human dominance is almost entirely dependent on our (collective) intelligence, evolution would have selected strongly for intelligence until it hit diminishing returns to higher intelligence
That is, we should expect that evolution made us as smart as was beneficial given its constraints:
Calorie requirements of larger brains
Energy budget
Size of the human birth canal
Slight variations on existing cognitive architecture
Etc.
I think the reported higher incidence of some neurological issues/disorders among higher-IQ human subpopulations (e.g. Ashkenazi Jews) may be an argument that evolution has already run into those diminishing marginal returns
Perhaps the Flynn Effect is an argument against this
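To make the first intuition pump concrete, here is a minimal numerical sketch (Python, purely illustrative, using the step values from the bullets above): the same one-bit step buys a shrinking gain in raw accuracy at the high end of the curve and a growing gain at the low end.

```python
import math

# Illustrative only: express each jump in "bits" (halvings of the error rate,
# or doublings of raw accuracy) and compare it with the jump in raw accuracy.
high_end = [0.5, 0.75, 0.875, 0.9375]   # error rate halves at each step
low_end = [0.0625, 0.125, 0.25, 0.5]    # raw accuracy doubles at each step

print("High end: constant one-bit steps, shrinking accuracy gains")
for prev, cur in zip(high_end, high_end[1:]):
    bits = math.log2((1 - prev) / (1 - cur))  # halvings of the error rate
    print(f"  {prev:.4f} -> {cur:.4f}: {bits:.0f} bit, accuracy gain {cur - prev:+.4f}")

print("Low end: constant one-bit steps, growing accuracy gains")
for prev, cur in zip(low_end, low_end[1:]):
    bits = math.log2(cur / prev)              # doublings of raw accuracy
    print(f"  {prev:.4f} -> {cur:.4f}: {bits:.0f} bit, accuracy gain {cur - prev:+.4f}")
```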
Another way to frame my question: there will obviously be diminishing marginal returns (perhaps superlinearly diminishing) at some point; why are we confident that point is well beyond the peak of human intelligence?
But if you instead look at the length of a chain of reasoning you can carry out while staying under 10% cumulative error (or something like that), then the returns to predictive accuracy don’t diminish at all: going from 99% to 99.9% per-step accuracy lets you plan roughly 10 times further ahead with the same final accuracy.
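A rough worked version of that arithmetic (my own sketch, assuming independent per-step errors): the longest chain whose joint accuracy stays above 90% is about ln(0.9)/ln(p) steps, i.e. roughly 10 steps at 99% per-step accuracy and roughly 105 steps at 99.9%.

```python
import math

def max_chain_length(step_accuracy, target=0.9):
    """Longest chain of independent steps whose joint accuracy stays at or above `target`."""
    return math.floor(math.log(target) / math.log(step_accuracy))

for p in (0.99, 0.999, 0.9999):
    print(f"per-step accuracy {p}: ~{max_chain_length(p)} steps before joint accuracy drops below 90%")
```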
Take for instance the accuracy of humans in classifying everyday objects (chairs, pencils, doors, that sort of stuff), for which I’d think we have greater than 99.9% accuracy, and I’m being conservative here: I don’t misclassify 1 in every 1,000 objects I see in daily life. If that accuracy dropped to 99%, you’d make noticeable mistakes pretty often and long tasks would get tricky. If it dropped to 90%, you’d be very severely impaired and I don’t think you could function in society. It seems to me that the returns scale with predictive accuracy, not with the linear probabilities.
I don’t think humans are anywhere near that accurate. For example, ImageNet was found to have 6% label errors.
If I go to pick up an item on my desk, and I accidentally grab the wrong item, it’s no big deal, I just put it down and grab the right item. I have no idea how often this happens for a typical human, but I bet it’s way more than 0.1%. When I was younger, I was very clumsy and I often spilled my drink. It never impaired my ability to function in society.
Perhaps this is too nitpicky about semantics, but when I tried to evaluate the plausibility of this claim I ran into a bit of an impasse.
You’re supposing that there is a single natural category for each object you see (and even that the definition of “an object” is clear, which isn’t obvious to me). I’d agree that you probably classify a given thing the same way >99.9% of the time. However, what would the inter-rater reliability be for these classifications? Are those classifications actually “correct” in a natural sense?
I think most people would agree that at some point there are likely to be diminishing returns. My view, and I think the prevailing view on LessWrong, is that the biological constraints you mentioned are actually huge constraints that silicon-based intelligence won’t/doesn’t have, and that the lack of these constraints will push the point of diminishing returns far past the human level.
If AI hits the wall of diminishing returns at, say, 10x average adult human intelligence, doesn’t that greatly reduce the potential threat?
What does “10x average adult human intelligence” even mean? There are no natural units of intelligence!
I never implied there were natural units of intelligence? This is quite a bizarre thing to say or imply.
Then what does “10x average adult human intelligence” mean?
As written, it pretty clearly implies that intelligence is a scalar quantity, such that you can get a number describing the average adult human, one describing an AI system, and observe that the latter is twice or ten times the former.
I can understand how you’d compare two systems-or-agents on metrics like “solution quality or error rate averaged over a large suite of tasks”, wall-clock latency to accomplish a task, or fully-amortized or marginal cost to accomplish a task. However, deriving a cardinal metric of intelligence from this seems to me to be begging the question, with respect to both the scale factors on each part and more importantly the composition of the suite of tasks you consider.
No? It’s common to see a 10x figure used in connection with many other things, and that usage does not imply that intelligence is a scalar quantity.
For example, a 10x software engineer.
Nobody that I know interprets this as literally ‘10x more intelligent’ than the average software engineer; it’s understood to mean, ideally, 10x more productive. More often it’s simply understood as vastly more productive.
(And even if someone explicitly writes ‘10x more intelligent software engineer’, it doesn’t mean they are 10x higher on some scalar measure of intelligence. Just that they are noticeably more intelligent, almost certainly with diminishing returns, potentially leading to a roughly 10x productivity increase.)
And it’s a common enough occupation nowadays, especially among the LW crowd, that I would presume the large majority of folks here would interpret it similarly.
“10x engineer” is naturally measured in dollar-value to the company (or quality+speed proxies on a well-known distribution of job tasks), as compared to the median or modal employee, so I don’t think that’s a good analogy. Except perhaps inasmuch as it’s a deeply disputed and kinda fuzzy concept!
Right, and like most characteristics of human beings other than the ones subject to exact measurement, intelligence (10x intelligence, 10x anything, etc.) is deeply disputed and fuzzy compared to things like height, finger length, or hair length. So?
Money in the account per year is not fuzzy; it is literally a scalar for which the ground truth is literally a number stored in a computer.
Did you reply to the right comment? The last topic discussed, 13d ago, was the diminishing returns of intelligence.
Yes.
So what does money in a bank account (in electronic form?) have to do with the diminishing returns of intelligence?
There are free-rider problems, the size of the birth canal, tradeoffs with other organs and immune function, caloric limits, and many other things that constrain how smart it is locally optimal for a human to be. Most of these do not apply to AIs in the same way, so I don’t think this analogy is fruitful. Also, an AI’s medium of computation is very different and superior in many strategically relevant ways.
This is a group selection argument. Though higher intelligence may be optimal for the group, it might be better for the individual to spare the calories and free-ride on the group’s intelligence.
In general, I think these sorts of arguments should be subordinated to empirical observations of what happens when a domain enters the sphere of competence of our AI systems, and reliably what happens is that the AIs are way, way better. I just listened to the podcast of Andrew Price (an artist), who was talking with a very gifted professional concept artist who spent 3 days creating an image for the same prompt he gave to the latest version of Midjourney. By the end, he concluded Midjourney’s was better. What took him dozens of hours took it less than a minute.
This is the usual pattern. I suspect once all human activity enters this sphere of competence, human civilization will be similarly humiliated.
Evolution is not a magic genie that just gives us what we want.
If there’s strong evolutionary pressure to select for a given trait, the individuals who score poorly on that trait won’t reproduce. The fact that we see big IQ differences among natives of the same country is a sign that the evolutionary selection for IQ isn’t very strong.
Most mutations reduce the performance of an organism. If you have a mutation that makes it 1% less likely that a person reproduces, it takes a long time for that mutation to disappear due to natural selection.
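To put a rough number on that (a toy sketch, with assumptions that are mine rather than the comment’s: a rare dominant variant with a 1% fitness cost and no new copies arising), the variant’s frequency shrinks by roughly a factor of (1 - s) per generation, so even just halving its frequency takes on the order of ln(2)/s ≈ 69 generations, i.e. well over a millennium.

```python
import math

# Toy model (assumptions are mine): a rare dominant variant carrying a 1% fitness
# cost declines in frequency by roughly a factor of (1 - s) each generation.
s = 0.01                                   # selection coefficient: 1% reproductive disadvantage
generations_to_halve = math.log(2) / s     # ~69 generations for the frequency to halve
years_per_generation = 25                  # rough assumption

print(f"~{generations_to_halve:.0f} generations "
      f"(~{generations_to_halve * years_per_generation:.0f} years) just to halve the variant's frequency")
```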
Given that IQ generally correlates with other positive metrics, I think it’s plausible that more than half of the IQ differences between natives of the same country are due to such mutations, which provide no fitness advantages. If you believe that there’s very strong selection for IQ, then you would expect even more of the IQ differences to be driven by a constant stream of newly appearing mutations of this kind.
In such a scenario, I would not expect the smartest humans to have no such mutations at all, just fewer than the average person. As our knowledge of genes and our ability to do gene editing without introducing additional errors improve, it’s likely that we will see experiments in growing humans who are smarter than anyone currently alive.
Human brains already consume 15%-20% of the body’s calories, which is a lot more than in most other mammals. It’s not easy to raise that share, given that it competes with other uses of energy.
Wikipedia suggests it has been 50,000–150,000 years since humans adopted language, at which point intelligence became more important. That’s not a lot of time for evolution to come up with alternative, more efficient ways to organize the cortex.
I don’t know that intelligence will be “easy”, but it doesn’t seem intuitive to me that evolution has optimized intelligence close to any sort of global maximum. Evolution is effective over deep time, but it is highly inefficient compared to more intelligent optimization processes like SGD (stochastic gradient descent) and is incapable of planning ahead.
Even if we assume that evolution has provided the best possible solution within its constraints, what if we are no longer bounded by those constraints? A computer doesn’t have to adhere to the same limitations as an organic human (and some human limitations are really severe).
If you think (as many do) that there is a simple algorithm for intelligence, then scaling up is just a matter of optimizing the software and hardware and making more of it. And if the AI has the capacity to reproduce itself (either by installing itself on commodity hardware, or by building automated factories, etc) then it could grow from a seed to a superintelligence very quickly.
I don’t know that I’ve seen any good models of compute/algorithmic improvement to future-optimizing power. Predictive accuracy probably isn’t the important and difficult part, though it’s part of it. We really have no examples of superhuman intelligence, and variation among humans is pretty difficult to project from, as is variation among non-human tool-AI models.
The optimists (or pessimists, if unaligned) tend to believe that evolution is optimizing for different things than an AI will, and the diminishing returns on brain IQ are due to competing needs for the biology, which probably won’t apply to artificial beings.
I haven’t heard anyone saying it’s easy, nor fully unbounded once past a threshold. I HAVE heard people saying they expect it will seem easy in retrospect, once it gets past human-level and is on the way to a much higher and scarier equilibrium.
Any differentiable function looks approximately linear once you zoom in far enough. I think the intuition here is that the range of human intelligence we are good at measuring subjectively is so narrow (compared to the range of possible AI intelligence) that marginal returns would be roughly linear within and around that range. Yes, it will stop being linear further out, but that will be sufficiently far out that it won’t matter at that point.
I do not share those intuitions/expectations at all.
And I think the human brain is a universal learner. I believe that we are closer to the peak of bounded physical intelligence than we are to an ant.
And I also think evolution already faced significantly diminishing marginal returns on human cognitive enhancement (the increased disease burden of Ashkenazi Jews for example).
I think that using human biology as a guide is misleading. As far as I am aware, for every measurable task where AI has surpassed humans, it blew past the human level of capability without slowing down. As far as I am aware, there is a lot of evidence (however indirect) that human-level intelligence is nothing special (as in, more of an arbitrary point on the spectrum) from the AI perspective, or any other perspective not focused on human biology, and no evidence to the contrary. Is there?
Two robots are twice as smart as one robot (and cooperation can possibly make it even better). Hence linear returns.
Did I completely misunderstand your question?
Garry Kasparov beat the world team at chess; however you’re defining “smart”, that’s not how I’m using it.
Ah, I assumed that by “linear returns” you meant: if we can create an IQ 150 robot for $N, can we get twice as many “IQ 150 thoughts per second” for $2N… which we trivially can, by building two such robots.
If you meant whether we can get an IQ 300 robot for $2N, yeah, that is a completely different question. Probably not. Maybe “first no, and then yes”, if, let’s say, at IQ 500 we figure out what intelligence truly is and how to convert resources into intelligence more effectively, rather than just running the same algorithms on stronger hardware.