In theory, if humans and AIs were aligned on their generative models (i.e., if there were methodological, scientific, and factual alignment), then goal alignment, to the extent it is even sensible to talk about, would take care of itself: starting from the same “factual” beliefs, and using the same principles of epistemology, rationality, ethics, and science, people and AIs should in principle arrive at the same predictions and plans.
What about zero-sum games? If you took an agent, cloned it, then put both copies into a shared environment with only enough resources to support one agent, they would be forced to compete with one another. I guess they both have the same “goals” per se, but they are not aligned even though they are identical.
Yes, this is a valid caveat and the game theory perspective should have been better reflected in the post.
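To make the caveat concrete, here is a minimal sketch (not from the original exchange; the action names and payoffs are illustrative assumptions) of two identical copies of an agent competing for a single resource that only one of them can consume. Because the payoffs always sum to a constant, any outcome that is better for one copy is worse for the other, so the copies' interests are opposed even though their utility functions are identical.

```python
# Hypothetical illustration: two identical agents with the same utility
# function compete for one resource that supports only a single agent.
# The game is constant-sum, so identical goals do not imply alignment.

import itertools

ACTIONS = ["yield", "grab"]

def payoff(a1: str, a2: str) -> tuple[float, float]:
    """Share of the single resource each copy gets; shares always sum to 1."""
    if a1 == "grab" and a2 == "yield":
        return 1.0, 0.0
    if a1 == "yield" and a2 == "grab":
        return 0.0, 1.0
    # Symmetric cases: the resource is split (or assigned by a fair coin flip).
    return 0.5, 0.5

# Every joint action yields payoffs summing to 1, i.e. a constant-sum game:
# one copy's gain is exactly the other's loss.
for a1, a2 in itertools.product(ACTIONS, ACTIONS):
    u1, u2 = payoff(a1, a2)
    assert u1 + u2 == 1.0
    print(f"{a1:>5} vs {a2:>5}: u1={u1}, u2={u2}")

# "grab" strictly dominates "yield" for both copies (1.0 > 0.5 and 0.5 > 0.0),
# so two perfectly identical agents still end up in competition.
```

In this toy setup the dominant strategy for each copy is to grab the resource, which is the commenter's point: sharing a generative model and a goal does not remove the conflict that scarcity creates.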