What futures where we don’t get AGI within the next 10 years seem plausible to you?
Two possibilities have most of the “no AGI in 10 years” probability mass for me:
1. The next gen of AI really starts to scare people, regulation takes off, and AI goes the way of nuclear reactors.
2. Transformer-style AI goes the way of self-driving cars and turns out to be really hard to get from 99% reliable to the 99.9999% that you need for actual productive work.
My take on self-driving taking forever is that driving is near-AGI-complete. Humans drive roughly a million miles between fatal accidents; it would not be particularly surprising if, over those million miles (where you are interacting with intelligent agents), you inevitably encounter near-AGI-complete problems. Indeed, as the surviving self-driving companies all move to end-to-end approaches, self-driving research is beginning to resemble AGI research more and more.
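To make the gap concrete, here is a rough back-of-the-envelope sketch (Python, purely illustrative) of how the “million miles between fatal accidents” figure lines up with the 99% vs. 99.9999% gap in the second possibility above. The per-mile framing of reliability and the exact numbers are assumptions made for illustration, not something from the thread.

```python
# Back-of-the-envelope arithmetic only. The "million miles between fatal
# accidents" figure is the ballpark from the comment above; treating
# reliability as a per-mile probability is an assumption for illustration.
miles_between_fatal_accidents = 1_000_000

# Per-mile reliability implied by that accident rate (~99.9999%).
per_mile_reliability = 1 - 1 / miles_between_fatal_accidents
print(f"implied per-mile reliability: {per_mile_reliability:.4%}")

# By contrast, a system that is only 99% reliable per mile would be expected
# to fail catastrophically about once every 100 miles.
expected_miles_to_failure_at_99 = 1 / (1 - 0.99)
print(f"expected miles between failures at 99%: {expected_miles_to_failure_at_99:.0f}")
```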
Why would the latter prevent AGI? It would just be both high-skill and unreliable, yeah?
Correct. The easiest way to avoid slamming into the self-driving-car bottlenecks would be to carefully deploy AGI to uses where the 1 percent of failures won’t cause unacceptable damage.
Any kind of human-free environment is like that: robotic cleaning, shelving, hauling, loading/unloading, manufacturing, mining, farming. Each case is one where you close the store and lock the doors, keep a separate section of the warehouse for robots, fence off robotic areas of a factory behind safety barriers, or run robot-only mines.
To me this looks like you could automate a significant chunk of the world economy, somewhere between 25 and 50 percent of it, just by improving, scaling, and integrating currently demonstrated systems.
You could also use AGI for tutoring, assisting with all the things it already does, as a better voice assistant, for media creation including visualization videos, and so on.
So when it hits the failure cases, when a robotic miner triggers a tunnel collapse, when a robotic cleaner breaks a toilet, when a shelver knocks over piles of goods, when a machine-generated video contains some porn, all of these are cases where, so long as the cost of fixing the damage still leaves it net cheaper than humans, it’s worth using the AGI.
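As a sketch of that “net cheaper than humans” condition, here is a minimal toy calculation; the function name and every cost figure are hypothetical, chosen only to illustrate the break-even logic, not taken from the thread.

```python
# Toy break-even check for "net cheaper than humans", with entirely
# hypothetical numbers.

def robot_is_worth_it(robot_cost_per_task: float,
                      failure_rate: float,
                      cleanup_cost_per_failure: float,
                      human_cost_per_task: float) -> bool:
    """True if the robot's expected cost per task, including the expected
    cost of cleaning up its failures, undercuts the human alternative."""
    expected_robot_cost = robot_cost_per_task + failure_rate * cleanup_cost_per_failure
    return expected_robot_cost < human_cost_per_task

# A 1% failure rate is tolerable when cleanup is cheap relative to human labor...
print(robot_is_worth_it(5.0, 0.01, 200.0, 20.0))     # True:  5 + 2 = 7 < 20
# ...but not when a single failure is catastrophic (a tunnel collapse, say).
print(robot_is_worth_it(5.0, 0.01, 10_000.0, 20.0))  # False: 5 + 100 = 105 > 20
```

The point is just that the same 1 percent failure rate can be acceptable or ruinous depending on how expensive a failure is to clean up, which is why deployment would start in human-free, low-stakes environments.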
Over time, as the error rate slowly drops, you could deploy to more and more uses, start letting humans into the areas with the robots, and so on.
This is very different from self-driving cars, where there is a requirement for near-perfection before anyone can deploy the cars or make any money.
By “reliable” I mean it in the same way we think of it for self-driving cars. A self-driving car that is great 99% of the time and fatally crashes 1% of the time isn’t really “high skill and unreliable”; part of having “skill” at driving is being reliable.
In the same way, I’m not sure I would want to employ an AI software engineer that was great 99% of the time but 1% of the time had totally weird, inexplicable failure modes that you’d never see with a human. It would just be stressful to supervise it, to limit its potential harmful impact on the company, and so on. So it seems to me that AIs won’t be given control of lots of things, and therefore won’t be transformative, until that reliability threshold is met.
So what if you don’t want to employ it, though? The question is when it can employ itself. It doesn’t need to pass our reliability standards for that.
That is true only in the sense that it would pass the reliability standards we should have, not the ones we actually have.
Let me explain: suppose it’s a robot that assembles the gear assemblies used in other robots. If the robot screws up badly and trashes itself and the surrounding equipment 1 percent of the time, it may destroy more value (measured not in dollars but in labor hours from other robots) than it contributes. This robot (software + hardware) package is too unreliable for any use.
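Here is the same point as a toy calculation in labor-hour terms; the 1 percent rate comes from the example above, but the 0.5 and 80 figures are invented purely for illustration.

```python
# Toy version of the gear-assembly example, with invented numbers, keeping
# the accounting in robot labor hours rather than dollars.
failure_rate = 0.01                 # fraction of tasks ending in a bad crash
hours_produced_per_task = 0.5       # labor-hour value added per task
hours_destroyed_per_failure = 80.0  # other robots' hours spent on repair/replacement

# Expected net contribution per task: output minus the expected repair bill.
net_hours_per_task = hours_produced_per_task - failure_rate * hours_destroyed_per_failure
print(net_hours_per_task)  # -0.3: the robot destroys more than it contributes
```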
To explain the first paragraph: suppose the robot is profitable to run but screws up in very dramatic ways. Then it’s reliable enough that we should be using it, but upper management at an old company might still fail to adopt the tech.