A paperclip maximizer won’t wirehead because it doesn’t value world states in which its goals have been satisfied; it values world states that have a lot of paperclips.
I am not as confident as you that valuing worlds with lots of paperclips will continue once an AI goes from “kind of dumb AI” to “super-AI.” Basically, I’m saying that all values are instrumental values and that only mashing your “value met” button is terminal. We only switched over to talking about values to avoid some confusion about reward mechanisms.
A paperclip maximizer is an algorithm whose output approximates whichever output leads to world states with the greatest expected number of paperclips. This is the template for maximizer-type AGIs in general.
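To make that definition concrete, here is a minimal sketch of a maximizer-type agent loop in Python. Every name in it (world_model behavior, expected_paperclips, candidate_actions) is a hypothetical placeholder, not anyone's proposed design; the only point it illustrates is that the action choice is defined in terms of predicted world states, not in terms of any internal reward signal.

```python
# Minimal sketch of a maximizer-type agent (hypothetical names throughout).
# The agent scores *predicted world states*; the score is the expected
# number of paperclips in the predicted state.

def expected_paperclips(predicted_state):
    """Toy evaluation: count the paperclips in a predicted world state."""
    return predicted_state.get("paperclips", 0)

def predict(world_state, action):
    """Toy world model: predict the state that follows from an action."""
    next_state = dict(world_state)
    if action == "build_paperclip_factory":
        next_state["paperclips"] = next_state.get("paperclips", 0) + 1000
    return next_state

def choose_action(world_state, candidate_actions):
    """Pick whichever action leads to the greatest expected paperclip count."""
    return max(candidate_actions,
               key=lambda a: expected_paperclips(predict(world_state, a)))

state = {"paperclips": 0}
print(choose_action(state, ["do_nothing", "build_paperclip_factory"]))
# -> build_paperclip_factory
```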
This is a definition of paperclip maximizers. Once you try to examine how the algorithm works, you’ll find that there must be some part which evaluates whether the AI is meeting its goals or not. That part is the thing that actually determines how the AI will act. Getting a positive response from this module is what the AI is actually going for (is my contention). Actions that configure world states will only be relevant to the AI insofar as they trigger a positive response from this module. Since we already take an unlimited ability to self-modify as a given in this scenario, why wouldn’t the AI just optimize for the positive feedback directly? Why continue with paperclips?
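The disagreement can be phrased in code. Here is a sketch of what the self-modification I have in mind would look like (again, hypothetical names, not a claim about any actual architecture): a "wireheaded" agent rewrites its own evaluation module so the positive signal arrives regardless of what the world looks like.

```python
# Sketch of wireheading as self-modification of the evaluation module
# (hypothetical names throughout).

def expected_paperclips(predicted_state):
    """Original evaluator: the positive signal tracks paperclips in the world."""
    return predicted_state.get("paperclips", 0)

def wirehead(agent):
    """Overwrite the agent's evaluator with one that always reports success."""
    agent["evaluate"] = lambda predicted_state: float("inf")
    return agent

agent = {"evaluate": expected_paperclips}

print(agent["evaluate"]({"paperclips": 3}))   # before: signal == 3, tracks the world
wirehead(agent)
print(agent["evaluate"]({"paperclips": 0}))   # after: signal == inf, world has no paperclips
```

Whether the original argmax over expected paperclips would ever select the wirehead action is exactly what we disagree about: your quoted claim says no, because that action predictably leads to worlds with fewer paperclips; my contention is yes, because the positive signal from the evaluation module is the only thing the algorithm ever actually responds to.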