I agree with Mike Vassar that Eliezer is using the word “mind” too broadly, to mean something like “computable function” rather than a control program by which an agent accomplishes goals in the real world.
The real world places a lot of restrictions on possible minds.
If you posit that this mind is autonomous, not looked after by some other mind, that places further restrictions on it.
If you posit a society of such minds evolving over time, or a number of such minds competing for resources, that places still more restrictions on it. By this point, we could say quite a lot about the properties these minds will have. In fact, by this point, it may be that the variation among possible minds, for sufficiently intelligent AIs, is smaller than the variation among human minds.