An AGI that lacks volition is incomparably safer. In particular, it is highly unlikely to render humanity extinct. In addition, absent volition it is possible to keep an AGI from doing too much harm: by moving more slowly and carefully, by adding breakpoints, by modeling outputs in advance, and so on.
It will still be dangerous in the sense that many powerful technologies are dangerous, but not uniquely so.
Here is a breakdown of deaths by cause, worldwide, for every year from 1990 to 2019. The overwhelming majority do not involve volition. The first category that does, suicide, accounted for less than 2% of all deaths in each of those years. Homicide was always under 1%. Conflict and Terrorism together were below 1% in every year but one. Alcohol disorders and Drug disorders might be regarded as having volitional causes, but their contribution is similarly insignificant.
So at least 97% of all deaths do not happen because of anyone’s volition. I am not seeing in this an argument for the safety of excluding volition, whatever that is, from a system.
I say “whatever that is,” because while it should be clear what I mean in using the word above about people, it is not clear what it means when applied to an artificial system. We have no gears-level model by which to decide whether or not to impute it to any given system.
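For what it's worth, here is a minimal back-of-the-envelope check in Python of the percentage claim above, using only the round upper bounds quoted from the data (illustrative bounds, not the dataset itself); the actual category shares are smaller still, which is how the figure reaches “at least 97%”:

```python
# Upper bounds on volitional causes of death, as a share of all deaths,
# taken from the round bounds quoted above (illustrative, not exact values).
volitional_upper_bounds = {
    "suicide": 2.0,                 # "less than 2% of all deaths in each of those years"
    "homicide": 1.0,                # "always under 1%"
    "conflict_and_terrorism": 1.0,  # "below 1% in every year but one"
}

volitional_max = sum(volitional_upper_bounds.values())   # at most ~4%
non_volitional_min = 100.0 - volitional_max              # at least ~96%

print(f"volitional deaths:     at most ~{volitional_max:.0f}% of all deaths")
print(f"non-volitional deaths: at least ~{non_volitional_min:.0f}%")
```

Even taking the loosest stated bounds at face value, well over nine-tenths of deaths involve no one's volition.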
I think the point of this post is “a powerful enough optimization process kills you (and everyone else) anyway”.
As soon as you give it a command, the AI has “volition” in the sense that it is optimizing some output that affects the world.