I don't think the hypothetical %-based approach you mention is a good one. The issue is that you do indeed get exponentially diminishing returns from improving on any one question, but there are other questions of higher “difficulty” which you tend to start improving on once you saturate your current level of difficulty.
Within psychometrics, this is formally studied under the name “item response theory”. There, they fit sigmoidal curves to responses to surveys or tests, and as one probability curve flattens out, another tends to pick up steam. See e.g. this example (via).
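To make that picture concrete, here is a minimal sketch of two 2-parameter-logistic item curves (the difficulty and discrimination numbers are made up purely for illustration): as the easy item's curve flattens out near 1, the hard item's curve is only just starting to rise.

```python
import math

def p_correct(theta, difficulty, discrimination=1.7):
    """2PL item response curve: probability of a correct response
    as a function of ability theta."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# Two hypothetical items: an easy one (difficulty -1) and a hard one (difficulty +2).
easy_item, hard_item = -1.0, 2.0

print(f"{'ability':>8} {'P(easy)':>8} {'P(hard)':>8}")
for theta in [-2, -1, 0, 1, 2, 3, 4]:
    print(f"{theta:>8} {p_correct(theta, easy_item):>8.3f} {p_correct(theta, hard_item):>8.3f}")
```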
Exponentially diminishing returns were what I found in the concrete examples I thought of (e.g. offering insurance policies, betting on events, etc.).
It seems to me that an AI that linearly increased its predictive accuracy on a particular topic would see exponentially diminishing returns.
The question is whether this return on investment in predictive accuracy generalises.
Suppose instead that the agent in question is well calibrated, and that for any binary question it can predict either the question or its inverse with 90% accuracy.
Now raise that accuracy to 99%. Then 99.9%. Then 99.99%. Then 99.999% …
Is there a strategy that allows the agent to make consistent linear returns across each step?
And how many scenarios are there where such a strategy is available (vs. just yielding exponentially diminishing returns, as seems to be the default)?
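To spell out that default case with a toy payoff (the even-money bet is my own assumption, just for illustration): each extra nine of accuracy adds roughly a tenth as much expected profit as the step before.

```python
# Toy payoff: an even-money bet of 1 unit on a binary question, where the
# agent backs whichever side it assigns probability p. Expected profit per
# bet is p*(+1) + (1-p)*(-1) = 2p - 1, so each extra "nine" of accuracy
# adds about a tenth as much expected profit as the previous step did.
accuracies = [0.9, 0.99, 0.999, 0.9999, 0.99999]

for prev, p in zip(accuracies, accuracies[1:]):
    gain = (2 * p - 1) - (2 * prev - 1)
    print(f"{prev} -> {p}: expected profit per bet rises by {gain:.5f}")
```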
Your answer seems interesting as an account of how intelligence manifests among humans in the real world (I upvoted). But it sidesteps my toy model for thinking about how much capability an AI could purchase with increasing predictive accuracy (one measure of intelligence).
Maybe I should post a new question and spell out more clearly what exactly I'm trying to investigate and think about.
My point is that as predictive accuracy for one question goes from 99.9% to 99.99%, predictive accuracy for another question might be going from 0.1% to 10%. So one shouldn't focus on the 99.9% questions creeping upward (well, sometimes one should, e.g. for self-driving cars, where extremely high reliability is important), but instead on whether there is a big supply of other questions with accuracy close to 0 that there is still room to improve on.
For some things (like survival across repeated trials), 99.99% is indeed immensely better than 99%. There are quite a few types of scenario where intelligence does make that sort of difference. And 0.1% vs 10% can also make a huge difference, e.g. in the odds of successfully creating something very much more valuable than usual.
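To put numbers on the repeated-trials point (a quick sketch, assuming independent trials with per-trial survival probability p): the expected number of trials before the first failure is 1/(1−p), so each extra nine multiplies it by ten.

```python
# Expected number of independent trials until the first failure, given
# per-trial survival probability p: E[trials to failure] = 1 / (1 - p).
# Each extra "nine" of reliability multiplies this by ten.
for p in [0.9, 0.99, 0.999, 0.9999, 0.99999]:
    print(f"per-trial survival {p}: ~{1 / (1 - p):,.0f} expected trials before failure")
```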
Odds-ratio shifts of a similar size, applied a couple of times, also get you from 1% to 99%, which I think is extremely valuable in almost all situations where one outcome is substantially better than the other.
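For concreteness, here is the arithmetic behind that odds-ratio comparison (just the probability pairs discussed above, compared on the odds and log-odds scale):

```python
import math

def odds(p):
    return p / (1 - p)

# Probability jumps discussed above, compared on the odds scale.
jumps = [(0.001, 0.10), (0.99, 0.9999), (0.01, 0.99)]
for a, b in jumps:
    ratio = odds(b) / odds(a)
    print(f"{a:g} -> {b:g}: odds ratio ~{ratio:,.0f}, log-odds shift {math.log(ratio):.1f} nats")
```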
The inquiry is about sustained linear returns to increases in predictive accuracy.
It’s not enough to show that a jump from 99% predictive accuracy to 99.99% is good.
You have to show that the further jump from 99.99% to 99.9999% accuracy is as good as the jump from 99% to 99.99%.
You aren’t properly engaging with the inquiry I posited.
The original inquiry was about returns on cognitive performance. One source of return is the sort of thing you’re talking about here: moving from 99.9999% to 99.999999% accuracy.
A different, but overall much more valuable source of return is increasing the scope of things for which you move from 1% to 99%. It’s much more valuable because for any practically possible level of cognitive capability, there are a lot more things at the low end of predictability than at the high end.
If you want to restrict the discussion only to the first class though, that’s fine.