Present-day software is a series of increasingly powerful narrow tools and abstractions.
Do you believe that any kind of general intelligence that is not a collection of powerful narrow tools and abstractions is practically feasible? What makes you think so?
Put simply, no software today cares about what you want.
If all I care about is a list of Fibonacci numbers, what is the difference, as far as the word “care” goes, between a simple recursive algorithm and a general AI?
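To make the comparison concrete, here is a minimal sketch of the kind of simple recursive algorithm meant here, in Python (the function name is my own, purely illustrative); it emits exactly the list I want while having no internal notion of caring about anything:

```python
def fib(n: int) -> int:
    """Return the n-th Fibonacci number (0-indexed) by naive recursion."""
    if n < 2:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)

# The first ten Fibonacci numbers, produced with no goals or preferences.
print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```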
Furthermore, your general reasoning process here (define some vague measure of “software doing what you want”, observe an increasing trend line, and extrapolate to a future situation) is exactly the kind of reasoning I always try to avoid, because it is usually heuristic and misleading.
My measure of “software doing what you want” is not vague. I mean it quite literally. If I want software to output a series of Fibonacci numbers, and it does output a series of Fibonacci numbers, then it does what I want.
And what, other than an increasing trend line, do you suggest as a rational basis for extrapolation? Sudden jumps and transitions?