“That limit”? The mathematically ultimate limit is Solomonoff induction on an infinitely powerful computer, but that’s of no physical relevance. I’m talking about the observed bound on rates of progress, including rates of successive removal of bottlenecks. To be sure, there may (and hopefully will!) someday exist entities capable of making much better use of data than we can today; but there is no reason to believe the process of getting to that stage will be in any way discontinuous, and plenty of reason to believe it will not.
Are you being deliberately obtuse? “That limit” refers to the thing you brought up: the rate at which observations can be made.
Yes, but you were the one who started talking about it as something you can “run into”, together with phrases like “as fast as physically possible” and “ideally could from the data”; that last phrase in particular has previously been used in conversations like this to refer to Solomonoff induction on an infinitely powerful computer.
My point is that at any given moment an awful lot of things will be bottlenecks, including real-world data. The curve of capability is already observed in cases where you are free to optimize whichever variable is the lowest-hanging fruit at the moment.
In other words, you are already “into” the current data limit; if you could get better performance by using less data and substituting, say, more computation, you would already be doing it.
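To make that concrete, here is a minimal sketch under made-up assumptions of my own: the iso-performance trade-off `data * compute**ALPHA >= K`, the exponent, and the unit costs below are hypothetical, not anything established in this exchange. The point it illustrates is only that if data and compute trade off along some curve, whoever is operating at the frontier has already picked the cheapest mix, so there is no unexploited way to swap compute for data at the current performance level.

```python
# Toy illustration only: the iso-performance form data * compute**ALPHA >= K,
# and every constant and cost below, are assumptions made up for this sketch.
ALPHA, K = 0.5, 1_000.0                     # hypothetical trade-off exponent and target
COST_PER_SAMPLE, COST_PER_UNIT_COMPUTE = 1.0, 0.2   # hypothetical unit costs

def cheapest_mix(steps: int = 10_000):
    """Grid-search the cheapest (data, compute) pair that still hits the target."""
    best = None
    for i in range(1, steps + 1):
        compute = 10.0 * i                   # sweep compute upward
        data = K / compute ** ALPHA          # least data that still meets the target
        cost = data * COST_PER_SAMPLE + compute * COST_PER_UNIT_COMPUTE
        if best is None or cost < best[0]:
            best = (cost, data, compute)
    return best

cost, data, compute = cheapest_mix()
print(f"cheapest mix: data ~{data:.0f}, compute ~{compute:.0f}, cost ~{cost:.0f}")
# At this frontier point, cutting data means buying more compute at equal or
# greater total cost: the current data limit is already being pressed as hard
# as the cost structure allows.
```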
As time goes by, the amount of data you need for a given degree of performance will drop as you obtain more computing power, better algorithms, etc. (Of course, better performance will still be obtainable by using more data.) However:

1. The amount of data needed won’t drop below some lower bound.
2. More to the point, the rate at which the amount needed drops is itself bounded by the curve of capability. (A toy sketch of both points follows below.)
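To put rough numbers on those two points, here is a toy model; the exponential decay toward a floor, and every constant in it, are made up for illustration rather than claims established above.

```python
# Toy model, purely illustrative: the decay form and all constants below are
# assumptions introduced for this sketch, not figures from the exchange.
FLOOR = 100.0             # point 1: data needed never drops below some lower bound
START = 100_000.0         # data needed today for some fixed level of performance
MAX_GAP_REDUCTION = 0.30  # point 2: per-period improvement is itself bounded

def data_needed(periods: int) -> float:
    """Data needed for the same fixed performance level after `periods` of progress."""
    gap = START - FLOOR
    for _ in range(periods):
        gap *= 1.0 - MAX_GAP_REDUCTION   # improvement never beats the bounded rate
    return FLOOR + gap

for t in (0, 5, 10, 20, 50):
    print(f"after {t:>2} periods: ~{data_needed(t):,.0f} samples")
# The requirement falls steadily, never below FLOOR and never faster than the
# bounded per-period rate: progress, but no discontinuous jump.
```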