I think most people would agree that at some point there are likely to be diminishing returns. My view, and I think the prevailing view on LessWrong, is that the biological constraints you mentioned are actually huge constraints that silicon-based intelligence won’t/doesn’t have, and that the lack of these constraints will push the point of diminishing returns far past the human level.
If AI hits the wall of diminishing returns at, say, 10x average adult human intelligence, doesn’t that greatly reduce the potential threat?
What does “10x average adult human intelligence” even mean? There are no natural units of intelligence!
I never implied there were natural units of intelligence? This is quite a bizarre thing to say or imply.
Then what does “10x average adult human intelligence” mean?
As written, it pretty clearly implies that intelligence is a scalar quantity, such that you can get a number describing the average adult human, one describing an AI system, and observe that the latter is twice or ten times the former.
I can understand how you’d compare two systems-or-agents on metrics like “solution quality or error rate averaged over a large suite of tasks”, wall-clock latency to accomplish a task, or fully-amortized or marginal cost to accomplish a task. However, deriving a cardinal metric of intelligence from this seems to me to be begging the question, with respect to both the scale factors on each part and more importantly the composition of the suite of tasks you consider.
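To make the circularity concrete, here is a minimal sketch in Python, with entirely made-up task categories, scores, and suite weights: the same pair of score vectors yields very different “how many times the human baseline” answers depending purely on how the task suite is composed and weighted.

```python
# Illustrative only: hypothetical per-task scores for a human baseline and an AI
# system, normalized so the human scores 1.0 on every task category.
human = {"math": 1.0, "coding": 1.0, "essay_writing": 1.0}
ai = {"math": 30.0, "coding": 8.0, "essay_writing": 1.5}

def composite_score(scores, weights):
    """Weighted average over a task suite: the would-be 'cardinal metric'."""
    return sum(scores[task] * w for task, w in weights.items()) / sum(weights.values())

# Two equally defensible task suites (weights sum to 1 in each case).
suite_a = {"math": 0.6, "coding": 0.3, "essay_writing": 0.1}
suite_b = {"math": 0.1, "coding": 0.2, "essay_writing": 0.7}

for name, suite in [("suite_a", suite_a), ("suite_b", suite_b)]:
    ratio = composite_score(ai, suite) / composite_score(human, suite)
    print(f"{name}: AI is {ratio:.1f}x the human baseline")
# Prints roughly 20x for suite_a and roughly 6x for suite_b, even though neither
# system changed at all; only the composition of the task suite did.
```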
No? It’s common to see a 10x figure used in connection with many other things, and that usage does not imply that intelligence is a scalar quantity.
For example, a 10x software engineer.
Nobody I know interprets this as literally ’10x more intelligent than’ the average software engineer; it’s understood to mean, ideally, 10x more productive. More often it’s understood as vastly more productive.
(And even if someone explicitly writes ’10x more intelligent software engineer’, it doesn’t mean they are 10x scalar units of intelligence more so. Just that they are noticeably more intelligent, almost certainly with diminishing returns, potentially leading to a roughly 10x productivity increase.)
And software engineering is a common enough occupation nowadays, especially among the LW crowd, that I would presume the large majority of folks here would interpret it similarly.
“10x engineer” is naturally measured in dollar-value to the company (or quality+speed proxies on a well-known distribution of job tasks), as compared to the median or modal employee, so I don’t think that’s a good analogy. Except perhaps inasmuch as it’s a deeply disputed and kinda fuzzy concept!
Right, and like most characteristics of human beings other than the ones subject to exact measurement, intelligence (10x intelligence, 10x anything, etc.) is deeply disputed and fuzzy compared to things like height, finger length, or hair length. So?
Money in the account per year is not fuzzy; it is literally a scalar for which the ground truth is literally a number stored in a computer.
Did you reply to the right comment? The last topic discussed was 13d ago, on the diminishing returns of intelligence.
Yes.
So what does money in a bank account (in electronic form?) have to do with the diminishing returns of intelligence?