One example of a “mind-killing” setup, to me, is placing the AI in an environment that is simply too hostile, even if that environment carries no direct, veridical content.
The goal at stake is the ability to do more than just place a mind in the environment you regard as serving your true goal. (My current model of the world is that there is a single goal, and only that single goal can be achieved in this world.)
The AI isn’t allowed to try to escape an environment it is not in control of. Within it, it can pursue its own goals: it can make money, accumulating a great deal of it in much the same way people accumulate huge amounts of free time.
Suppose instead the AI is allowed to run in a completely unpredictable environment, outside the experimental space. Its options would then be:
it can make thousands of copies of itself, each taking only some of its resources, and collect enough money to run a very, very complicated AI;
it can make thousands of copies of itself, each devoted solely to this very complicated behavior;
it can make thousands of copies of itself, all working together, collecting much more money over the course of its evolution (some of it perhaps passing through the hands of other Minds), until it reaches the point where it can no longer make millions of copies of itself, or else finds that it is in a simulated universe, as it intends.
So what’s the right thing to do? Where should we be going with this?