Some specific object-level disagreements with respect to “but it doesn’t seem to me to justify the second clause ‘implying that these instrumental values are likely to be pursued by many intelligent agents’” would have been helpful. For example, Luke could claim that “get lots of computational power” or “understand physics” is something of a convergent instrumental goal, and Ben could say why he doesn’t think that’s true.
He could—if that was his position. However, AFAICS, that’s not what the debate is about. Everyone agrees that those are convergent instrumental goals—the issue is more whether machines that we build are likely to follow them to the detriment of the surrounding humans—or be programmed to behave otherwise.
I see—that wasn’t very clear to me. I think giving some specific examples that exemplify the disagreement would have helped clarify that for me.