By “reliable” I mean it in the same way as we think of it for self-driving cars. A self-driving car that is great 99% of the time and fatally crashes 1% of the time isn’t really “high skill and unreliable”—part of having “skill” in driving is being reliable.
In the same way, I’m not sure I would want to employ an AI software engineer that was great 99% of the time but 1% of the time had totally weird, inexplicable failure modes you’d never see with a human. It would just be stressful to supervise and to limit its potential harmful impact on the company. So it seems to me that AIs won’t be given control of lots of things, and therefore won’t be transformative, until that reliability threshold is met.
So what if you don’t want to employ it, though? The question is when it can employ itself. It doesn’t need to pass our reliability standards for that.
That is true only in the sense that it would have to pass the reliability standards we should have, not the ones we actually have.
Let me explain: suppose it’s a robot that assembles the gear assemblies used in other robots. If the robot screws up badly and trashes itself and the surrounding equipment 1 percent of the time, it will destroy more than it contributes, where “cost” is measured not in dollars but in labor hours of other robots. That robot (software + hardware) package is too unreliable for any use.
To explain the first paragraph: suppose instead that the robot is profitable to run on net, but its failures are very dramatic. Then it’s reliable enough that we should be using it, yet upper management at an old company might still fail to adopt the tech.
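To make the break-even point concrete, here is a minimal sketch of the expected-value calculation implied above. All numbers (task value, failure rate, damage per failure) are assumptions I picked for illustration, not figures from the discussion:

```python
# Illustrative break-even sketch for the "reliable enough to use" threshold.
# All numbers are assumed for the example, not taken from the thread.

def expected_net_value(value_per_task, failure_rate, damage_per_failure):
    """Expected contribution per task minus expected damage from failures.
    Units are labor-hours of other robots, as in the gear-assembly example."""
    return (1 - failure_rate) * value_per_task - failure_rate * damage_per_failure

# Case 1: 1% failures, each failure trashes the robot and nearby equipment.
# Expected net is negative, so the robot is too unreliable for any use.
print(expected_net_value(value_per_task=1.0, failure_rate=0.01,
                         damage_per_failure=200.0))   # -> -1.01

# Case 2: same dramatic failures, but each success contributes much more.
# Expected net is positive, so we *should* use it, even if management balks.
print(expected_net_value(value_per_task=5.0, failure_rate=0.01,
                         damage_per_failure=200.0))   # -> 2.95
```

The point of the two cases: whether a given failure rate is tolerable depends entirely on how the expected damage compares with the expected contribution, not on how dramatic the individual failures look.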