I think Matthew’s thesis can be summarized as “from a scope-sensitive utilitarian perspective, AIs which are misaligned and seek power look, in terms of how they would use galactic resources, about as aligned with you (or more aligned) as other humans are.”

I agree that if the AI were aligned with you, you would strictly prefer that.