Having a wide, highly generalized alignment target is not a problem; it should be the goal. Many humans—to varying degrees—learn very generalized, abstract, large-empathy-circle alignment targets, such that they generally care about animals and (hypothetically) aliens and robots—I recently saw a video of a child crying for the dying leaves falling from trees.
Having a wide, robust circle of empathy does not preclude also learning more detailed models of other agents' desires.
To start, there is a massive distributional difference between the utility functions of sim-humans and spiders.
Given how humans can generalize empathy to any sentient agent, I don’t see this as a fundamental problem, and anyway the intelligent spider civ would be making spider-sims regardless.