I don’t think it posits that the model has learned to wirehead: either being directly motivated to maximize reward, or being motivated by anything causally downstream of reward (like “more copies of myself” or “[insert long-term future goal that requires me being around to steer the world toward that goal]”), would work.
A lot of updates like this seem to push the model toward caring about one of those two things (or some combination) and away from caring about the immediate rewards you cited earlier as a reason it may not want to take over.
Ah, gotcha. This is definitely a convincing argument that models will learn to value things longer-term (with a lower discount rate), and I shouldn’t have used the phrase “short-term” there. I don’t yet think it’s a convincing argument that the long-term thing it will come to value won’t basically be the long-term version of “make humans smile more”, but you’ve helpfully left another comment on that point, so I’ll shift the discussion there.