I feel like this is a bit incorrect. There are imaginable things that are smarter than humans at some tasks and as smart as average humans at others, thus overall superhuman, yet controllable and therefore possible to integrate into an economy without immediately exploding into a utopian (or dystopian) singularity. The question is whether we are liable to build such things before we build the exploding-singularity kind, or whether the latter is in some sense easier to build and thus the kind we stumble upon first. Most AI optimists think these limited and controllable intelligences are the default natural outcome of our current trajectory and thus expect mere boosts in productivity.
There are imaginable things that are smarter than humans at some tasks and as smart as average humans at others, thus overall superhuman, yet controllable and therefore possible to integrate into an economy
sure, e.g. i think (<- i may be wrong about what the average human can do) that GPT-4 meets this definition (far superhuman at predicting author characteristics, above-average-human at most other abstract things). that’s a totally different meaning of ‘superhuman’, though.
Most AI optimists think these limited and controllable intelligences are the default natural outcome of our current trajectory and thus expect mere boosts in productivity.
do you mean they believe superintelligence (the singularity-creating kind) is impossible, and so don’t also expect it to come after? it’s not sufficient for less capable AIs to come before superintelligence by default.
I think some believe it’s downright impossible, and others think we’ll just never create it because we have no use for something so smart that it overrides our orders and wishes. At most we’ll make a sort of magical genie, still bound by us expressing our wishes.