I wish we had been clearer that in many circumstances the argument doesn’t actually require a utility maximiser, only an AI that is optimising sufficiently hard. That said, the reason people often look at utility maximisers is to see what the behaviour will look like in the limit, and I think this is sensible, so long as we remember that it is limiting behaviour we are looking at.
Unfortunately, I suspect competitive dynamics and the unilateralist’s curse might push us further down this path than we’d like.
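To make the "behaviour in the limit" point concrete, here is a minimal sketch, assuming a Boltzmann-rational agent as one common formalisation of "optimising sufficiently hard" (this framing, and all the numbers below, are my illustration, not something from the original argument): as the optimisation strength grows, the agent's behaviour converges to that of a pure utility maximiser.

```python
# Hypothetical illustration: a softmax ("Boltzmann") policy over actions,
# where beta controls how hard the agent optimises its utilities.
import numpy as np

def boltzmann_policy(utilities: np.ndarray, beta: float) -> np.ndarray:
    """Probability of each action under softmax with optimisation strength beta."""
    logits = beta * utilities
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

utilities = np.array([1.0, 0.9, 0.2])  # hypothetical action utilities

for beta in [1, 10, 100]:
    print(beta, boltzmann_policy(utilities, beta).round(3))
# As beta grows, probability mass concentrates on argmax(utilities):
# the "sufficiently hard optimiser" approaches the utility maximiser in the limit.
```

Running this shows the action distribution sharpening from roughly [0.43, 0.38, 0.19] at beta=1 to essentially [1, 0, 0] at beta=100, which is the sense in which utility maximisers describe the limiting behaviour rather than the typical case.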