My proposition: intelligence will only seek power. I approached this from the “intelligence without a goal” angle, but if we started from “intelligence with a goal” we would reach the same conclusion (most of the logic is reusable). Don’t you think?
This is the part I would change:
… But I argue that that’s not the conclusion the intelligence will reach. The intelligence will think: I don’t have a preference now, but I might have one later, so I should choose actions that prepare me for the widest range of possible preferences. Which is basically power seeking.
to
… But I argue that that’s not the conclusion the intelligence will reach. The intelligence will think: I have a preference now, but I cannot be sure my preference will stay the same later (a terminal goal can change), so I should choose actions that prepare me for the widest range of possible preferences. Which is basically power seeking.
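For what it’s worth, the step from “prepare for the widest range of possible preferences” to “power seeking” can be made concrete with a tiny simulation. Below is a minimal Python sketch of my own (the “narrow”/“broad” actions and outcome labels are hypothetical, not from the post): if the agent is uncertain which utility function it will hold later, the action that keeps more outcomes reachable scores higher on average across randomly drawn future preferences.

```python
import random

# Hypothetical one-step world: each action determines which outcomes
# remain reachable afterwards.
actions = {
    "narrow": {"A"},           # commits to a single outcome
    "broad": {"A", "B", "C"},  # preserves options
}

outcomes = ["A", "B", "C"]

def expected_value(reachable, n_samples=10_000):
    """Average best-achievable utility over randomly drawn future preferences."""
    total = 0.0
    for _ in range(n_samples):
        # Draw a random utility function over outcomes (an unknown future preference).
        utility = {o: random.random() for o in outcomes}
        # Given that preference, the agent takes the best outcome still reachable.
        total += max(utility[o] for o in reachable)
    return total / n_samples

for name, reachable in actions.items():
    print(name, round(expected_value(reachable), 3))
# "broad" scores higher (~0.75 vs ~0.5): keeping options open dominates
# under preference uncertainty, which is the core of the power-seeking claim.
```

This is just an illustration of the averaging argument, not a claim about what any particular system would do.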