That is true only in the sense that it would pass the reliability standards we *should* have, not the ones we actually have.
Let me explain: suppose it’s a robot that assembles the gear assemblies used in other robots. If the robot screws up badly and trashes itself and the surrounding equipment 1 percent of the time, it will destroy more in “cost” (cost measured not in dollars, but in labor hours by other robots) than it contributes. That robot (software + hardware) package is too unreliable for any use.
Explaining the first paragraph: suppose instead that the robot is profitable to run, but screws up in very dramatic ways. Then it’s reliable enough that we should be using it, yet upper management at an old company might still fail to adopt the tech.
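To make the arithmetic concrete, here’s a minimal sketch of the expected-value comparison behind both cases. All the numbers (failure rate, damage per failure, contribution per run) are made up for illustration:

```python
# Expected net value of running the robot, measured in labor hours by other robots.
# All numbers below are hypothetical, purely to illustrate the comparison.

def expected_net_value(contribution_per_run, failure_rate, damage_per_failure):
    """Contribution per run minus the expected damage from catastrophic failures."""
    return contribution_per_run - failure_rate * damage_per_failure

# Case 1: failures are so costly the robot destroys more than it contributes.
# e.g. 1% failure rate, each failure wrecks 200 hours of equipment and rework,
# while a successful run contributes 1 hour of saved labor.
print(expected_net_value(contribution_per_run=1.0,
                         failure_rate=0.01,
                         damage_per_failure=200.0))  # -1.0 -> too unreliable for any use

# Case 2: failures are dramatic but cheap enough that the robot is still net positive.
print(expected_net_value(contribution_per_run=1.0,
                         failure_rate=0.01,
                         damage_per_failure=50.0))   # 0.5 -> worth running, even if
                                                     # management balks at the crashes
```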
So what if you don’t want to employ it, though? The question is when it can employ itself. It doesn’t need to pass our reliability standards for that.