with a utility function the optimum of which does not contain any humans
Most theoretically possible UFs don’t contain humans, but that doesn’t mean that an AI we construct will have such a UF, because we are not taking a completely random shot into mindspace … We could not, even if we wanted to. That’s one of the persistent holes in the argument. (Another is the assumption that an AI will necessarily have a UF.)
The argument doesn’t need summarising: it needs to be rendered valid by closing the gaps.
Yes, we would be even worse off if we randomly pulled a superintelligent optimizer out of the space of all possible optimizers. That would, with almost absolute certainty, cause swift human extinction. The current techniques are somewhat better than taking a completely random shot in the dark. However, especially given point No. 2, that can be of only very little comfort to us.
All optimizers have at least one utility function. At any given moment in time, an optimizer is behaving in accordance with some utility function. It might not be explicitly representing this utility function, it might not even be aware of the concept of utility functions at all—but at the end of the day, it is behaving in a certain way as opposed to another. It is moving the world towards a particular state, as opposed to another, and there is some utility function that has an optimum in precisely that state. In principle, any object at all can be modeled as having a utility function, even a rock.
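A minimal sketch of that trivial construction, assuming a toy Python framing (the name revealed_utility and the rock example are illustrative, not from any real library): for any system that moves toward one state rather than another, we can always write down a utility function whose optimum is exactly the state it actually moves toward.

```python
def revealed_utility(observed_next_state):
    """Return a utility function that is maximized precisely at the state
    the system actually produced (1 for that state, 0 for every other)."""
    def utility(state):
        return 1.0 if state == observed_next_state else 0.0
    return utility

# Example: a "rock" that simply stays put can be modeled as maximizing
# utility over the candidate states {"stays_put", "rolls_away"}.
rock_utility = revealed_utility("stays_put")
candidates = ["stays_put", "rolls_away"]
best = max(candidates, key=rock_utility)
assert best == "stays_put"  # the rock's behavior is "optimal" under this UF
```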
Naturally, an optimizer can have not just one, but multiple utility functions. That makes the problem even worse, because then, all of those utility functions need to be aligned.
That’s definitional. It doesn’t show that there are any optimisers, that all AIs are optimisers, etc.