I like the simple and clear model and I think discussions about AI risk are vastly improved by people proposing models like this.
I would like to see this model extended to include the productive capacity of the other agents in the AI’s utility function. Concretely: the other agents have a comparative advantage over the AI in producing some goods, so the AI may be able to obtain a higher-utility bundle overall by not killing everyone (or even by increasing the other agents’ productivity so they can produce more for the AI to consume).
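To make the comparative-advantage point concrete, here is a toy numerical sketch. All of the numbers, the Cobb-Douglas utility, and the terms of trade are my own illustrative assumptions, not part of the model under discussion: the AI has an absolute advantage in both goods, but the humans' lower opportunity cost for good B still makes trade beneficial.

```python
# Toy comparative-advantage sketch (illustrative assumptions only).
# Two goods, A and B. The AI is better at producing both in absolute
# terms, but humans have the lower opportunity cost for B.

def utility(a, b):
    # Assumed Cobb-Douglas utility over the AI's consumption bundle.
    return (a * b) ** 0.5

AI_CAP = 10.0   # AI can make 10 A or 10 B per period (or any linear mix)
HUMAN_B = 4.0   # humans, fully specialized, make 4 B per period
PRICE = 0.5     # assumed terms of trade: 0.5 A per unit of B, which lies
                # between the AI's opportunity cost of B (1.0 A/B) and
                # the humans' (0.25 A/B), so both sides gain from trade

grid = [i / 100 for i in range(101)]

# Autarky (humans eliminated): AI splits its time t between A and B.
best_alone = max(utility(AI_CAP * t, AI_CAP * (1 - t)) for t in grid)

# Trade: AI additionally buys b <= HUMAN_B units of B at PRICE A each.
best_trade = max(
    utility(AI_CAP * t - PRICE * b, AI_CAP * (1 - t) + b)
    for t in grid
    for b in [HUMAN_B * s for s in grid]
    if AI_CAP * t - PRICE * b >= 0
)

print(round(best_alone, 2))  # 5.0 (best bundle the AI can reach alone)
print(round(best_trade, 2))  # 6.0 (strictly better with humans alive)
```

Under these assumed numbers the AI's best autarky bundle is (5, 5) with utility 5, while with trade it produces mostly A and buys the humans' B, reaching (6, 6) with utility 6, so killing everyone is strictly dominated even though the AI is better at producing everything.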