What leads you to believe that the space of possible outcomes in which an AI consumes all resources (including humans) is larger than the space of outcomes in which it doesn't? You seem to assume that the unbounded incentive to foom and consume the universe comes naturally to any constructed intelligence, while any other incentive is very difficult to implement. What I see instead is a much larger number of outcomes in which an intelligence does nothing at all without some hardcoded or evolved incentive. Crude machines do things because that's all they can do; the number of ways they can behave is very limited. Intelligent machines, however, have high degrees of freedom in how they behave (pathways to follow), and with that freedom comes choice, and choice requires volition: an incentive, an urge to follow one path rather than another. You seem to assume that the will to foom and consume comes for free, without having to be carefully and deliberately hardcoded or evolved, yet the will to stay within given parameters is somehow very hard to achieve. I don't think that premise is reasonable, and it is the premise all your arguments rest on.
I suspect the difference of opinion here comes down to whether the AI is assumed to be a recursive self-improver.
Have you read The Basic AI Drives?