Does this actually have a point, even as a wrong metaphor, or is it just mathematical-looking word salad? I am too tired to figure this out.
I will just note that if this worked, it would be an argument for the impossibility of aligning anything, since the "anthropocentric" part plays no role in the proof. So even if all we had in the universe were two paperclip maximizers, it would be impossible to create an AI aligned to them both… or something like that.